Streamlining Anomaly Detection and Remediation with Edge Observability
June 07, 2022

Ozan Unlu
Edge Delta


Over the past several years, architectures have become increasingly distributed and datasets have grown at unprecedented rates. Despite these shifts, the tools available to detect issues within your most critical applications and services have remained stuck in a centralized model. In this centralized model, teams must collect, ingest, and index datasets before they can query them to derive any value.

This approach worked well five years ago for most use cases, and now, it still suffices for batching, common information models, correlation, threat feeds, and more. However, when it comes to real-time analytics at large scale — specifically anomaly detection and resolution — there are inherent limitations. As a result, it has become increasingly difficult for DevOps and SRE teams to minimize the impact of issues and ensure high-quality end-user experiences.

In this blog, I'm going to propose a new approach to support real-time use cases — edge observability — that enables you to detect issues as they occur and resolve them in minutes. But first, let's walk through the current centralized model and the limitations it imposes on DevOps and SRE teams.

Centralized Observability Limits Visibility, Proactive Alerting, and Performance

The challenges created by centralized observability are largely a byproduct of exponential data growth. Shipping, ingesting, and indexing terabytes or even petabytes of data each day is difficult and cost-prohibitive for many businesses. So, teams are forced to predict which datasets meet the criteria to be centralized. The rest is banished to a cold storage destination, where you cannot apply real-time analytics on top of the dataset. For DevOps and SRE teams, this means less visibility and the risk that an issue lurks in a non-indexed dataset, where the team is unable to detect it.

On top of that, engineers must manually define monitoring logic within their observability platforms to uncover issues in real-time. This is not only time-consuming but puts the onus on the engineer to know every pattern they'd like to alert on upfront. This approach is reactive in nature since teams are often looking for behaviors they're aware of or have seen before.

Root-causing an issue and writing an effective unit test for it is a long-established practice, but what happens when you need to detect and resolve an issue that's never occurred before?

Lastly, the whole process is slow and raises the question, "how fast is real-time?"

Engineers must collect, compress, encrypt, and transfer data to a centralized cloud or data center. Then, they must unpack, ingest, index, and query the data before they can build dashboards and alerts. These steps naturally create a delta between when an issue actually occurs and when it's alerted upon. This delta grows as volumes increase and query performance degrades.

What is Edge Observability?

To detect issues in real-time and repair them in minutes, teams need to complement traditional observability with distributed stream processing and machine learning. Edge observability uses these technologies to push intelligence upstream to the data source. In other words, it calls for starting the analysis on raw telemetry within an organization's computing environment before routing to downstream platforms.

By starting to analyze your telemetry data at the source, you no longer need to choose which datasets to centralize and which to neglect. Instead, you can process data as it's created, unlocking complete visibility into every dataset — and in turn, every issue.

Machine learning complements this approach by automatically:

■ baselining the datasets

■ detecting changes in behavior

■ determining the likelihood of an anomaly or issue

■ triggering an alert in real-time

Because these operations are all running at the source, alerts are triggered orders of magnitude faster than is possible with the old centralized approach.
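The four steps above can be sketched as a small streaming detector that runs beside the data source. This is a minimal, hypothetical illustration — a rolling baseline with a z-score trigger — not Edge Delta's actual algorithm; the class name, window size, and threshold are all assumptions for the example.

```python
from collections import deque
from statistics import mean, stdev

class EdgeAnomalyDetector:
    """Hypothetical sketch of edge-side anomaly detection: keep a rolling
    baseline of a metric (e.g., errors per interval) and flag values that
    deviate from it by more than a z-score threshold."""

    def __init__(self, window=60, threshold=3.0):
        self.window = deque(maxlen=window)  # rolling baseline of recent values
        self.threshold = threshold          # z-score that triggers an alert

    def observe(self, value):
        """Return True if `value` is anomalous relative to the baseline."""
        anomalous = False
        if len(self.window) >= 10:  # wait for a minimal baseline first
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True    # would trigger an alert in real time
        self.window.append(value)   # the baseline adapts as behavior changes
        return anomalous
```

Because the detector holds only a small window of recent values, it can run on the node that produces the telemetry, so an alert fires as soon as the anomalous value is observed rather than after the data has been shipped, indexed, and queried.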

It's critical to point out that the use of machine learning wipes out the need for engineers to build and maintain complex monitoring logic within an observability platform. Instead, the machine learning picks up on negative patterns — even unknown unknowns — and surfaces the full context of the issue (including the raw data associated with it) to streamline root-cause analysis. Operationalizing machine learning for real-time insights has always been a challenge at high data volumes, but distributing it across the edge gives teams full access and deep views into all datasets.

Edge Observability Cuts MTTR from Hours to Minutes

Taking this approach, teams can detect anomalous changes in system behavior as soon as they occur and then pinpoint the affected systems/components in a few clicks — all without requiring an engineer to build regex, define parse statements, or run manual queries.

Organizations of all sizes and backgrounds are seeing the value of edge observability. Some are using it to dramatically reduce debugging times while others are gaining visibility into issues they didn't know existed. In all situations, it's clear that analyzing massive volumes of data in real-time calls for a new approach — and this will only become clearer as data continues to grow exponentially. This new approach starts at the edge.

Ozan Unlu is CEO of Edge Delta
