Streamlining Anomaly Detection and Remediation with Edge Observability
June 07, 2022

Ozan Unlu
Edge Delta


Over the past several years, architectures have become increasingly distributed and datasets have grown at unprecedented rates. Despite these shifts, the tools available to detect issues within your most critical applications and services have remained stuck in a centralized model. In this centralized model, teams must collect, ingest, and index datasets before querying them to derive any value.

This approach worked well five years ago for most use cases, and now, it still suffices for batching, common information models, correlation, threat feeds, and more. However, when it comes to real-time analytics at large scale — specifically anomaly detection and resolution — there are inherent limitations. As a result, it has become increasingly difficult for DevOps and SRE teams to minimize the impact of issues and ensure high-quality end-user experiences.

In this blog, I'm going to propose a new approach to support real-time use cases — edge observability — that enables you to detect issues as they occur and resolve them in minutes. But first, let's walk through the current centralized model and the limitations it imposes on DevOps and SRE teams.

Centralized Observability Limits Visibility, Proactive Alerting, and Performance

The challenges created by centralized observability are largely a byproduct of exponential data growth. Shipping, ingesting, and indexing terabytes or even petabytes of data each day is difficult and cost-prohibitive for many businesses. So, teams are forced to predict which datasets meet the criteria to be centralized. The rest is banished to a cold storage destination, where you cannot apply real-time analytics on top of the dataset. For DevOps and SRE teams, this means less visibility and creates the potential that an issue could be present in a non-indexed dataset — meaning the team is unable to detect it.

On top of that, engineers must manually define monitoring logic within their observability platforms to uncover issues in real-time. This is not only time-consuming but also puts the onus on the engineer to know, upfront, every pattern they'd like to alert on. This approach is reactive in nature, since teams are often looking for behaviors they're aware of or have seen before.

Root-causing an issue and then writing a test to guard against it is a well-established practice, but what happens when you need to detect and resolve an issue that's never occurred before?

Lastly, the whole process is slow, which raises the question: how fast is real-time?

Engineers must collect, compress, encrypt, and transfer data to a centralized cloud or data center. Then, they must unpack, ingest, index, and query the data before they can build dashboards and alerts. These steps naturally create a delta between when an issue actually occurs and when an alert fires. This delta grows as volumes increase and query performance degrades.
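To make that delta concrete, here is a toy sketch that sums per-stage delays into an end-to-end detection lag. All stage names come from the steps above, but every latency number is hypothetical — illustrative only, not a measurement of any real system:

```python
# Hypothetical stage latencies (seconds) for a centralized pipeline.
# The numbers are made up for illustration; real values vary widely.
STAGE_LATENCY = {
    "collect": 5,
    "compress": 2,
    "encrypt": 1,
    "transfer": 30,
    "unpack": 2,
    "ingest": 60,
    "index": 120,
    "query": 15,
}

def detection_delta(latencies):
    """Total lag between an event occurring and an alert firing:
    the sum of every stage the data must pass through first."""
    return sum(latencies.values())

print(detection_delta(STAGE_LATENCY))  # 235 seconds in this toy example
```

The point of the sketch is structural: because every stage sits between the event and the alert, any stage that slows under load (ingestion queues, index build times, query latency) directly widens the detection delta.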

What is Edge Observability?

To detect issues in real-time and repair them in minutes, teams need to complement traditional observability with distributed stream processing and machine learning. Edge observability uses these technologies to push intelligence upstream to the data source. In other words, it calls for starting the analysis on raw telemetry within an organization's computing environment before routing to downstream platforms.

By starting to analyze your telemetry data at the source, you no longer need to choose which datasets to centralize and which to neglect. Instead, you can process data as it's created, unlocking complete visibility into every dataset — and in turn, every issue.
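One way to picture processing at the source is an agent that reduces raw log lines to pattern counts before anything is routed downstream. The sketch below is a deliberately crude stand-in (digit-masking via a regex, a hypothetical `summarize_at_edge` helper) and not Edge Delta's actual analytics:

```python
import re
from collections import Counter

def summarize_at_edge(log_lines):
    """Toy edge-side reduction: collapse raw log lines into patterns
    and count them, so only the summary needs to travel downstream."""
    patterns = Counter()
    for line in log_lines:
        # Crude normalization: mask digit runs so similar lines
        # (different user IDs, status codes) share one pattern.
        pattern = re.sub(r"\d+", "#", line)
        patterns[pattern] += 1
    return dict(patterns)

raw = [
    "GET /api/user/42 200",
    "GET /api/user/7 200",
    "GET /api/user/99 500",
]
print(summarize_at_edge(raw))  # three raw lines collapse to one pattern
```

Even in this toy form, the design choice is visible: the raw lines never have to leave the source to be useful, which is what removes the pressure to decide upfront which datasets are worth centralizing.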

Machine learning complements this approach by automatically:

■ baselining the datasets

■ detecting changes in behavior

■ determining the likelihood of an anomaly or issue

■ triggering an alert in real-time
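As an illustrative sketch of how the four steps above might fit together — baseline, detect a change, score its likelihood, trigger an alert — here is a toy streaming detector using a running mean/variance (Welford's algorithm) and a z-score threshold. This is an assumption-laden simplification, not Edge Delta's actual models:

```python
import math

class EdgeAnomalyDetector:
    """Toy streaming detector: maintains a running baseline
    (mean/variance via Welford's algorithm) and flags values that
    deviate far from it. Illustrative only."""

    def __init__(self, threshold=3.0, warmup=30):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # running sum of squared deviations
        self.threshold = threshold
        self.warmup = warmup   # samples observed before alerting starts

    def observe(self, value):
        # Update the baseline incrementally; no raw history is retained.
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)
        if self.n < self.warmup:
            return False       # still baselining
        std = math.sqrt(self.m2 / (self.n - 1))
        if std == 0:
            return value != self.mean
        z = abs(value - self.mean) / std
        return z > self.threshold   # likely anomaly -> trigger alert

detector = EdgeAnomalyDetector()
stream = [100 + (i % 5) for i in range(50)] + [400]  # steady, then a spike
alerts = [i for i, v in enumerate(stream) if detector.observe(v)]
print(alerts)  # only the spike at index 50 is flagged
```

Because state like this is a few floats per metric, it is cheap enough to run at every data source — which is the property that lets the analysis move to the edge rather than waiting for a central index.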

Because these operations are all running at the source, alerts are triggered orders of magnitude faster than is possible with the old centralized approach.

It's critical to point out that the use of machine learning wipes out the need for engineers to build and maintain complex monitoring logic within an observability platform. Instead, the machine learning picks up on negative patterns — even unknown unknowns — and surfaces the full context of the issue (including the raw data associated with it) to streamline root-cause analysis. Though operationalizing machine learning for real-time insight into high-volume data has always proved challenging at scale, distributing the analysis gives teams full access to, and deep views into, every dataset.

Edge Observability Cuts MTTR from Hours to Minutes

Taking this approach, teams can detect anomalous changes in system behavior as soon as they occur and then pinpoint the affected systems/components in a few clicks — all without requiring an engineer to build regex, define parse statements, or run manual queries.

Organizations of all sizes and backgrounds are seeing the value of edge observability. Some are using it to dramatically reduce debugging times, while others are gaining visibility into issues they didn't know existed. In all situations, it's clear that analyzing massive volumes of data in real-time calls for a new approach — and this will only become clearer as data continues to grow exponentially. This new approach starts at the edge.

Ozan Unlu is CEO of Edge Delta
