Why You Should Consider Visibility and Performance Monitoring for Edge Computing

Keith Bromley

Edge computing usage is starting to increase. See my previous posting from September 2019 that illustrates what is driving this network change. The obvious follow-up question is, "So, what can I do with edge computing?" I'm glad you asked. There are lots of things you can do.

In fact, here are six fundamental use cases that allow you to:

1. Improve network visibility

2. Improve network performance monitoring

3. Reduce the cost of MPLS circuits for transport

4. Improve troubleshooting capabilities

5. Enhance endpoint security

6. Upgrade compliance support

Improving network visibility is the first use case. Moving to IP transport lets NOC engineers see all the way out to the edge of the network. They can use application intelligence to examine application performance and NetFlow data from those locations. Today, many (maybe most) enterprises lose visibility in the "last mile" of their network, especially when using telco circuits.
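As a toy illustration of the application-level rollup that NetFlow data makes possible, the sketch below summarizes flow records into per-application byte counts. The record layout and the port-to-application mapping are invented for the example; a real collector would parse NetFlow v5/v9 export packets.

```python
from collections import defaultdict

# Hypothetical, simplified flow records of the kind a NetFlow
# collector might export: (src_ip, dst_ip, dst_port, byte_count).
FLOWS = [
    ("10.0.1.5", "10.0.9.2", 443, 120_000),
    ("10.0.1.7", "10.0.9.2", 443, 80_000),
    ("10.0.1.5", "10.0.9.8", 1433, 50_000),
]

# Illustrative port-to-application mapping (an assumption, not a standard).
PORT_TO_APP = {443: "HTTPS", 1433: "SQL Server"}

def bytes_by_application(flows):
    """Roll flow records up into per-application byte totals — a toy
    version of the application-level view NetFlow gives a NOC."""
    totals = defaultdict(int)
    for _src, _dst, dst_port, nbytes in flows:
        app = PORT_TO_APP.get(dst_port, f"port {dst_port}")
        totals[app] += nbytes
    return dict(totals)
```

Run against the sample records, this reports HTTPS as the dominant application at the remote site, which is exactly the kind of edge-level insight the NOC is otherwise missing.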

So why is this important? Are there potential problems (outages) about to happen? Without visibility, who knows? It is easy to know once an outage happens, but that puts IT in a reactive position that consumes more time and money and creates unnecessary problems for customers and senior management. It is far better to "see" a problem before everything goes bad.

Taking this one step further, a network packet broker (NPB) with proactive performance monitoring features integrated into the architecture gives the NOC an easy way to check network latency and performance, plus the ability to actively test performance at will, all the way to the edge, using synthetic traffic.

Network and IT teams need remote access to server and network traffic activity for performance monitoring and troubleshooting. Active monitoring (also known as "synthetic monitoring") measures the latency and performance of WAN/SD-WAN links. A tool of this type simulates traffic by sending synthetic packet data to various endpoints across the network and recording the resulting performance metrics.
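A minimal sketch of such an active probe, using TCP connect time as a stand-in for the synthetic packets a commercial tool would generate. The sample count and timeout are arbitrary choices for the example:

```python
import socket
import statistics
import time

def tcp_connect_latency(host, port=443, samples=5, timeout=2.0):
    """Measure TCP connect latency (ms) to an endpoint — a simple
    form of active/synthetic monitoring. Failed probes count as loss."""
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results.append((time.perf_counter() - start) * 1000.0)
        except OSError:
            results.append(None)  # probe failed; treat as a lost sample
    ok = [r for r in results if r is not None]
    return {
        "sent": samples,
        "lost": samples - len(ok),
        "min_ms": min(ok) if ok else None,
        "avg_ms": statistics.mean(ok) if ok else None,
        "max_ms": max(ok) if ok else None,
    }
```

Running probes like this on a schedule from the NOC toward each remote site yields the baseline against which a degrading WAN/SD-WAN link stands out, before users start calling.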

Enterprises also want to reduce, if not eliminate, MPLS circuit costs and move to IP links. Remote sites typically have low-speed internet access (around 100 Mbps). IP gives them more flexibility, fewer headaches (no MPLS headers to strip off), and lower cost, since IP links are readily available from ISPs and CLECs.

Troubleshooting can also be improved with edge computing. The shift to IP links allows the NOC to use IP-based tools and application intelligence to troubleshoot problems as fast as possible, all the way to the edge of the network.

Network security can be improved by deploying next-generation firewalls (NGFWs) right at the edge. An NPB is very useful here for integrating the security device, along with other edge devices and capabilities, into the network.

With regard to regulatory compliance, several types of organizations (including utilities) require that all control traffic to remotely manageable systems be monitored, logged, and analyzed. That data then needs to be replicated and sent to different locations. A small NPB and taps can be placed at the last routing hop, or even between the last switch and the controller.
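The replicate-and-forward step that an NPB performs in hardware can be sketched in software. This toy version fans a monitored record out to several collectors over UDP; the collector addresses are hypothetical, and a production deployment would use the NPB itself or a reliable transport:

```python
import socket

def replicate(record: bytes, destinations):
    """Fan a monitored record out to multiple collectors over UDP —
    mimicking how an NPB replicates tapped traffic to several tools."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for host, port in destinations:
            sock.sendto(record, (host, port))

# Hypothetical collector endpoints (e.g., a logger and an analyzer):
# replicate(b"control-frame", [("10.0.0.10", 2055), ("10.0.0.11", 2055)])
```

Sending every tapped control frame to both a compliance logger and an analysis tool is precisely the duplication requirement the NPB satisfies at line rate.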

Join the "shift" and live on the edge!

