
2026 NetOps Predictions - Part 2

Industry experts offer predictions on how NetOps and Network Performance Management (NPM) will evolve and impact business in 2026. Part 2 covers NetOps challenges and the edge.

Listen to Episode 20 of the MTTI Podcast: 2026 NetOps Predictions

NETOPS CHALLENGE: TRAINING AI

Training AI is about to give corporate networks a workout. With more companies adopting agents and building AI apps, the onus will be on IT and NetOps to condition their networks for the heavy lift of training AI. When AI apps are in learning mode, they can pull terabytes or petabytes of data very quickly, and they need high speeds to do it. Companies may need to alter their architecture to leverage the GPUs on user machines, creating a time-sharing GPU infrastructure that distributes AI processing toward the users of AI rather than centralized data centers. With AI-capable devices and laptops taking some of the load, all users will get a better experience.
Prakash Mana
CEO, Cloudbrink
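The time-sharing idea above can be sketched as a simple scheduler that routes AI tasks to user devices with free GPU capacity and falls back to the data center when nothing fits. The device names, memory figures, and greedy policy below are illustrative assumptions, not any vendor's implementation.

```python
# Minimal sketch: place each AI task on the user device with the most free
# GPU memory; anything too large stays in the centralized data center.
def schedule(tasks, devices, datacenter="datacenter"):
    """tasks: list of (task_id, gpu_mem_needed_gb)
    devices: dict of device_name -> free GPU memory in GB
    Returns a dict of task_id -> assigned endpoint."""
    free = dict(devices)  # copy so the caller's capacity view is untouched
    placement = {}
    for task_id, mem_needed in tasks:
        # Candidate edge devices that still have enough headroom.
        candidates = [(mem, name) for name, mem in free.items() if mem >= mem_needed]
        if candidates:
            mem, device = max(candidates)      # most headroom wins
            free[device] = mem - mem_needed
            placement[task_id] = device
        else:
            placement[task_id] = datacenter    # too big for any edge device
    return placement

placement = schedule(
    tasks=[("embed-1", 2), ("train-shard", 24), ("embed-2", 4)],
    devices={"laptop-a": 8, "laptop-b": 6},
)
print(placement)  # small tasks land on laptops; the 24 GB shard stays central
```

A real system would also have to handle devices going offline mid-task, which is why the fallback path to the data center matters.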

NETOPS CHALLENGE: PHYSICAL SPACE

Addressing The Real Network Bottleneck - Physical Space: AI networks use dramatically more fiber than traditional cloud systems — in fact, ten times more in the GPU back-end alone — and they require predictable power distribution across expanding footprints. In 2026, operators will concentrate on extracting more capacity from the assets they already have, whether that's upgrading existing long-haul routes with low-loss fiber or maximizing conduit and rack space with high-density cabling. This is where glass becomes the hidden performance engine: how far, how densely, and how efficiently data can travel will hinge on advancements in fiber design. Instead of asking, "How fast is the link?" operators will increasingly ask, "How much network capacity can we fit into the space we already have?"
Brian Rhoney
Data Center Market Development Director, Corning

NETOPS CHALLENGE: WORKFORCE SHORTAGE

Because There Aren't Enough Technicians, Networks Must Become Easier to Build: One of the biggest challenges next year will be the shortage of trained installers. AI networks are growing faster than the engineering workforce can support. In 2026, more operators will adopt plug-and-play, modular, and error-resistant fiber systems that reduce the need for highly specialized labor. This shift isn't just about efficiency — it's about survival. Without simpler, faster ways to connect high-density systems, AI buildouts will hit deployment bottlenecks long before they hit hardware limits. These solutions will speed up installation, reduce mistakes, and help teams build larger networks with fewer technicians.
Brian Rhoney
Data Center Market Development Director, Corning

NETOPS CHALLENGE: CONNECTED DEVICES

More connected devices on more people will put security to the test. Personal connected devices like smart glasses, translation-capable earbuds, and personal robots will put more load on already-straining networks and require new security processes and protocols. IT will need to compensate for the increase in PII captured in video, audio, and other formats, while maintaining an excellent user experience for employees on their networks.
Prakash Mana
CEO, Cloudbrink

NETOPS CHALLENGE: DEEPFAKE ATTACKS

Deepfake-driven attacks will become the norm in the corporate world as cybercriminals embrace AI. Imagine attacks that use real-time voice and video cloning to impersonate executives, fake "live" Zoom/Teams scams, or AI-written business email compromise (BEC) attacks that adapt mid-conversation. If you can imagine it, cybercriminals can do it. Not only are these attacks more difficult to detect, they are cheaper and easier for criminals, who can now focus on compromising people to get at a company. Add these individualized AI attacks to employees who work from anywhere, and it becomes critical for corporate security controls to move beyond protecting just the office or the organization with perimeter or network security. Every user and every device should be verified every time, regardless of location.
Prakash Mana
CEO, Cloudbrink
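The "verify every user and every device, every time" principle can be illustrated with a toy per-request check. The token store, device posture list, and request shape below are invented for illustration; a real deployment would lean on an identity provider and device-management signals instead.

```python
# Hedged sketch of per-request verification: a request passes only if both
# its user token and its device are currently trusted; location is ignored.
VALID_SESSIONS = {"tok-123": "alice"}     # short-lived session tokens (assumed)
COMPLIANT_DEVICES = {"laptop-7"}          # devices passing posture checks (assumed)

def authorize(request):
    """Return (allowed, detail) for a single request."""
    user = VALID_SESSIONS.get(request.get("token"))
    if user is None:
        return (False, "unknown or expired token")
    if request.get("device") not in COMPLIANT_DEVICES:
        return (False, "device failed posture check")
    # Deliberately no location check: office and home are treated alike.
    return (True, user)

print(authorize({"token": "tok-123", "device": "laptop-7"}))    # allowed
print(authorize({"token": "tok-123", "device": "byod-phone"}))  # denied: device
```

The design point is that neither credential alone is sufficient: a cloned voice that tricks a user into sharing a token still fails on the device check.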

NETOPS CONVERGES WITH SECOPS

NetOps will (hopefully) fully converge with SecOps under a single goal: maintaining secure network intent across hybrid infrastructure. As automation deepens, network teams will adopt observability models that continuously validate connectivity, performance, and compliance. I believe that by 2026, successful NetOps organizations will rely on real-time topology awareness and policy-driven automation to reduce both downtime and exposure windows, ensuring agility doesn't come at the cost of control.
Erez Tadmor
Field CTO, Tufin
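The policy-driven validation Tadmor describes can be reduced to a drift check: compare the flows actually observed on the network against declared intent and flag anything that contradicts it. The policy format and flow records below are simplified assumptions, not any vendor's schema.

```python
# Toy continuous-validation check: observed flows that the declared intent
# forbids (or never mentions, under default-deny) count as exposure.
INTENT = {
    ("web", "db"): "allow",
    ("guest", "db"): "deny",
}

def find_drift(observed_flows):
    """Return observed (src_zone, dst_zone) flows that violate intent."""
    violations = []
    for flow in observed_flows:
        # Default-deny posture: an unlisted flow is treated as forbidden.
        if INTENT.get(flow, "deny") == "deny":
            violations.append(flow)
    return violations

print(find_drift([("web", "db"), ("guest", "db"), ("guest", "web")]))
```

Run continuously against live topology data, a check like this is what shrinks the "exposure window" between a misconfiguration appearing and someone noticing it.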

CLOUD-LIKE EXPERIENCE FOR THE EDGE

Edge computing will become a bigger part of the narrative in 2026. This has been said before, but my conversations with clients are starting to lean heavily on how to bring cloud-like experiences into a hybrid edge environment. AI, whether generative, agentic, or traditional, is becoming a bigger part of the conversation at the edge. Innovating, managing, and scaling solutions across a large fleet of devices and locations will be a big ask, and clients want the same kind of experience for those environments as they have with their cloud operations.
Juan Orlandini
Chief Technology Officer, North America, Insight Enterprises

FRONTIER EDGE

The industry's definition of "the Edge" is now obsolete; 2026 is here, and so is the "Frontier Edge." This shift is driven by several compounding pressures: the explosion of AI-generated content, the massive volumes of data moved for inference (from immersive 8K media to critical AI model updates), and the need to connect locations never considered before, such as the deep sea, outer space, and the quantum world. Meanwhile, consumer immersive traffic takes center stage and competes with critical data, creating choke points in cellular 5G, Wi-Fi, and satellite networks. Avoiding this contention requires an immediate architectural shift away from congested systems designed for one-to-one communication (e.g., cellular 5G, Wi-Fi, satellite) toward scalable, one-to-many distribution like broadcast networks, ensuring seamless and reliable connectivity in the Frontier Edge era.
Apoorva Jain
CPO, EdgeBeam Wireless
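The one-to-one versus one-to-many argument comes down to simple arithmetic: unicast delivery costs one copy of the stream per receiver, while broadcast costs one transmission regardless of audience size. The stream rate and audience figures below are illustrative assumptions, not measurements.

```python
# Back-of-envelope load comparison for delivering one stream to N receivers.
def delivery_cost_mbps(stream_mbps, viewers, one_to_many=False):
    """Aggregate network load: N copies over unicast, one over broadcast."""
    return stream_mbps if one_to_many else stream_mbps * viewers

stream = 25          # Mbps, roughly an 8K-class stream (assumed figure)
audience = 10_000    # simultaneous receivers (assumed figure)

unicast = delivery_cost_mbps(stream, audience)                      # N copies
broadcast = delivery_cost_mbps(stream, audience, one_to_many=True)  # one copy
print(unicast, broadcast)  # the load ratio equals the audience size
```

The ratio scales linearly with audience size, which is why contention in one-to-one networks gets worse precisely when content is most popular.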

DIGITAL TWINS

The digital twin is evolving from a visualization tool into a practical workspace for network planning. It's becoming the operational backbone that unifies teams, accelerates design cycles and drives smarter decisions throughout the entire lifecycle of a network. Although still in the early stages, digital twins are rapidly evolving into a key enabler for AI-driven network lifecycle management, powering faster and more precise strategic planning.
Kelly Burroughs
Director of Strategy and Market Development, iBwave Solutions

WORK ANYTIME

Work from anywhere will become work anytime. Back-to-office mandates have pulled many workers back to the office, but WFH habits die hard. Many tech workers are used to logging in at times convenient for their schedule or work habits. Our usage data early this year showed heavy transfer of data on Fridays, an indication that "work from anywhere" employees actually put in longer hours than their "9 to 5" counterparts — with heavy usage starting at 7:00 am and continuing to 7:00 pm. In 2026 we expect to see more workers logging in both at the office and at home in their off-hours, which may temporarily increase productivity, but burn workers out more quickly. Companies will need to focus on worker experience as well as productivity.
Prakash Mana
CEO, Cloudbrink

Go to: 2026 Cloud Predictions

