What Is Driving Edge Computing and Edge Performance Monitoring?
September 23, 2019

Keith Bromley
Ixia


A fundamental shift is underway in operational technology: the move from core computing to edge computing. This shift is being driven by massive growth in data that is already taking place. According to Cisco Systems, network traffic will reach 4.8 zettabytes (4.8 billion terabytes) by 2022.

Businesses cannot continue operating as usual and still keep pace with network performance demands, security threats, and business decisions. In response, network architects are starting to move as many core compute resources as they can to the edge of the network. This helps IT reduce costs, improve network performance, and maintain a secure network.

However, is shifting resources to the edge the right approach?

It could have a negative impact on the network, introducing new security holes, performance issues caused by remotely located equipment, and reduced network visibility.

At the same time, if the network changes are done right, the pendulum could swing the other way, bringing significant improvements to network security, performance, and visibility.

The answer comes down to how the new architecture is deployed. The pivotal tactic is to deploy a visibility architecture that can support the application services and monitoring functions you need. Network visibility matters more than ever: you need to access the right data, filter it properly, inspect it for security threats, and manage SLAs to keep latency low from the core to the edge.
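To make the SLA piece concrete, here is a minimal sketch of how a team might spot-check core-to-edge latency against a target. The edge hostnames, ports, and the 50 ms threshold are illustrative assumptions, not values from any particular product or deployment.

```python
# Minimal sketch: check core-to-edge latency against an assumed SLA target.
# Hostnames, ports, and the 50 ms threshold are illustrative assumptions.
import socket
import time

EDGE_SITES = {
    "branch-east": ("edge-east.example.com", 443),
    "branch-west": ("edge-west.example.com", 443),
}
SLA_LATENCY_MS = 50.0  # assumed SLA target for core-to-edge round trips


def tcp_connect_latency_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Time a TCP handshake as a rough proxy for round-trip latency."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000.0


if __name__ == "__main__":
    for site, (host, port) in EDGE_SITES.items():
        try:
            latency = tcp_connect_latency_ms(host, port)
            status = "OK" if latency <= SLA_LATENCY_MS else "SLA BREACH"
            print(f"{site}: {latency:.1f} ms ({status})")
        except OSError as exc:
            print(f"{site}: unreachable ({exc})")
```

In practice this kind of check would feed a monitoring tool rather than print to a console, but it illustrates the core idea: measure latency continuously at the edge and compare it to the SLA rather than assuming the WAN link is healthy.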

Two key components are necessary for successful visibility in this situation: a network packet broker (NPB) and SD-WAN. The NPB provides data aggregation and filtering, application filtering, and performance monitoring all the way to edge devices. SD-WAN services can (and probably should) then be layered on top of the IP-based links to guarantee link performance, as Internet-based services can introduce unacceptable levels of latency and packet loss into the network.
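As a rough illustration of the filtering role an NPB plays, the sketch below classifies flows by application port and decides which monitoring tool, if any, should receive each one. The port-to-application map and tool names are assumptions made for the example; they are not Ixia or Keysight product configuration.

```python
# Rough illustration of NPB-style application filtering: classify flows by
# destination port and route only the relevant ones to monitoring tools.
# The port map and tool names below are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

APP_PORTS = {443: "https", 1194: "vpn", 5060: "voip"}
TOOL_FOR_APP = {"https": "apm-probe", "voip": "voice-monitor"}  # others dropped


@dataclass
class Flow:
    src: str
    dst: str
    dst_port: int
    byte_count: int


def route_flow(flow: Flow) -> Optional[str]:
    """Return the monitoring tool this flow should be mirrored to, or None."""
    app = APP_PORTS.get(flow.dst_port, "other")
    return TOOL_FOR_APP.get(app)


flows = [
    Flow("10.0.1.5", "edge-east.example.com", 443, 120_000),
    Flow("10.0.2.9", "edge-west.example.com", 5060, 8_000),
    Flow("10.0.3.3", "edge-east.example.com", 8080, 4_000),
]

for f in flows:
    tool = route_flow(f)
    action = f"forward to {tool}" if tool else "drop (not monitored)"
    print(f"{f.src} -> {f.dst}:{f.dst_port}  {action}")
```

A real NPB does this classification in hardware at line rate and across far richer criteria, but the principle is the same: aggregate traffic from the edge, keep only what each tool needs, and deliver it without overloading the monitoring stack.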

Edge computing deployments have already begun. According to a report from Gartner Research, by year-end 2021 more than 50% of large enterprises will deploy at least one edge computing use case to support IoT or immersive experiences, up from less than 5% in 2019.

In the end, while the promise of edge computing is real, the actual deployment scenario (and whether or not you build visibility into your network) is what will make or break the performance of your new architecture.

Keith Bromley is Senior Manager, Solutions Marketing at Ixia Solutions Group, a Keysight Technologies business