
What Is Driving Edge Computing and Edge Performance Monitoring?

Keith Bromley

There is a fundamental shift happening in operational technology today: the shift from core computing to edge computing. It is being driven by massive growth in data that is already underway. According to Cisco Systems, network traffic will reach 4.8 zettabytes (i.e., 4.8 billion terabytes) by 2022.

Businesses cannot continue as usual and still keep up with network performance, security threats, and business decisions. So, in response, network architects are starting to move as many core compute resources as they can to the edge of the network. This helps IT reduce costs, improve network performance, and maintain a secure network.

However, is the shifting of resources to the edge the right approach?

It could have a negative impact on the network in terms of new security holes, performance issues due to remote equipment, and reduced network visibility.

At the same time, if the network changes are done right, the pendulum could swing the other way, bringing great improvements to network security, performance, and visibility.

The answer comes down to how the new architecture is deployed. The pivotal tactic is to deploy a visibility architecture that can support the application services and monitoring functions needed. You need network visibility more than ever to: access the data you need, filter it properly, inspect it for security threats, and manage SLAs to keep latency low from the core to the edge.
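To make the SLA-management point concrete, here is a minimal sketch of a core-to-edge latency check. The site names and per-site latency budgets are hypothetical placeholders, and real tooling would collect the round-trip measurements from probes rather than a hard-coded dict:

```python
# Minimal sketch: flag edge sites whose round-trip latency exceeds its SLA budget.
# Site names and budgets are hypothetical placeholders for illustration.

SLA_BUDGET_MS = {
    "branch-east": 50.0,  # hypothetical per-site round-trip budget, in ms
    "branch-west": 80.0,
}

def sla_violations(measured_ms: dict) -> dict:
    """Return the sites (and their measured latency) that are over budget."""
    return {
        site: rtt
        for site, rtt in measured_ms.items()
        if rtt > SLA_BUDGET_MS.get(site, float("inf"))
    }

# Example run: one site is over its budget, the other is within it.
samples = {"branch-east": 72.4, "branch-west": 41.0}
print(sla_violations(samples))  # {'branch-east': 72.4}
```

In practice the measurements would come from active probes or passive monitoring at each edge site, with the violation list feeding an alerting pipeline.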

Two key components are necessary for successful visibility in this situation: a network packet broker (NPB) and SD-WAN. The NPB provides data aggregation and filtering, application filtering, and performance monitoring all the way to edge devices. SD-WAN services can (and probably should) then be layered on top of the IP-based links to guarantee link performance, as Internet-based services can introduce unacceptable levels of latency and packet loss into the network.
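The NPB functions described above (aggregation across taps, application filtering, per-device performance rollups) can be sketched in software terms. The record fields, feed names, and application labels below are hypothetical, purely to illustrate the data flow:

```python
# Illustrative sketch of NPB-style processing: merge packet records from
# several taps, keep only one application's traffic, then roll up bytes
# per edge device for performance monitoring. All field names are
# hypothetical placeholders.

from collections import Counter

def aggregate_and_filter(tap_feeds, app):
    """Merge records from all tap feeds, keeping only the named application."""
    merged = [rec for feed in tap_feeds for rec in feed]
    return [rec for rec in merged if rec["app"] == app]

def per_device_byte_counts(records):
    """Performance-monitoring rollup: total bytes seen per edge device."""
    counts = Counter()
    for rec in records:
        counts[rec["device"]] += rec["bytes"]
    return dict(counts)

# Two hypothetical tap feeds from different edge locations.
feed_a = [{"device": "edge-1", "app": "http", "bytes": 1200},
          {"device": "edge-1", "app": "dns", "bytes": 90}]
feed_b = [{"device": "edge-2", "app": "http", "bytes": 800}]

http_only = aggregate_and_filter([feed_a, feed_b], "http")
print(per_device_byte_counts(http_only))  # {'edge-1': 1200, 'edge-2': 800}
```

A hardware NPB does this at line rate on mirrored traffic; the sketch just shows why filtering before monitoring matters, since the monitoring tools only see the traffic that is relevant to them.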

Edge computing deployments have already begun. According to a report from Gartner Research, by year-end 2021, more than 50% of large enterprises will deploy at least one edge computing use case to support IoT or immersive experiences, versus less than 5% in 2019.

When it comes down to it, while the promise of edge computing is real, the actual deployment scenario (and whether or not you build network visibility into your network) is what is going to make or break the performance of your new architecture.
