
Finding the Needle in the Haystack: How Machine Learning Will Revolutionize Root Cause Analysis

Ajay Singh
Zebrium

When a website or app fails or falters, the standard operating procedure is to assemble a sizable team to quickly "divide and conquer" to find a solution. The details of the problem can usually be found somewhere among millions of log events and metrics, leading to slow and painstaking searches that can take hours and often involve handoffs between experts in different areas of the software. The immediate goal in these situations is not to be comprehensive, but rather to troubleshoot until you find a solution that remedies the symptom, even if the underlying root cause is not addressed.

The entire troubleshooting process takes time, usually a great deal of it, as well as experience. Development teams tend to be chronically short-staffed and overworked, so adding the burden of hunting for the cause of an app problem carries a substantial opportunity cost, among other things. To help with the task, most companies leverage multiple best-of-breed observability tools, including application performance management (APM), tracing, monitoring and log management, to detect and resolve the problem at hand. Although each tool provides useful data, taken together it can be hard for a person to interpret what is important and what is less so.

Instead of a disruptive and often frenzied big-team approach, this kind of challenge is a perfect application for machine learning (ML), which can sift through volumes of data and find meaningful patterns or anomalies that explain the root cause.

AIOps (AI for IT operations) has emerged as a possible solution for correlating data from multiple tools to reduce noise and translate events into something more meaningful for a user. On the plus side, AIOps solutions are designed to handle events from a wide range of tools, making them versatile. On the negative side, most AIOps solutions require very long training periods (typically many months) against labeled data sets. These solutions also fall short because they are designed to correlate events against known problems rather than find the root cause of new or unknown failure modes. This is a particular weakness in fast-changing cloud-native environments, where new failure modes crop up on a regular basis.

In order to find the root cause of new failure modes, a different type of AI approach is needed. Since logs often contain the source of truth when a software failure occurs, one approach is to use ML on logs. The concept is to identify just the anomalous patterns in the logs that explain the details of the problem. This can be challenging since logs are mostly unstructured and "noisy." On top of that, log volumes are typically huge, with data coming from many different log streams, each containing a large number of log lines. Historical approaches have focused on basic anomaly detection, which not only produces verbose results that require human interpretation but also fails to explain correlations across microservices, often entirely missing key details of the problem.
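To make the structuring challenge concrete, here is a minimal Python sketch of one common idea: masking the variable fields in free-form log lines (numbers, IP addresses, hex IDs) so that lines describing the same kind of event collapse into a single template. The masking rules and the example log line are hypothetical; real systems learn this structure automatically rather than relying on a handful of hand-written patterns.

```python
import re

# Hand-written masking rules for common variable tokens. These are illustrative
# only; an unsupervised log-structuring system learns equivalent patterns itself.
MASKS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),   # IPv4 addresses
    (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<HEX>"),           # hex identifiers
    (re.compile(r"\b\d+\b"), "<NUM>"),                      # any remaining numbers
]

def to_template(line: str) -> str:
    """Reduce a raw log line to an event template by masking its variable fields."""
    for pattern, token in MASKS:
        line = pattern.sub(token, line)
    return line.strip()

raw = "2024-05-01 12:03:44 ERROR conn 8841 to 10.0.3.17 timed out after 3000 ms"
print(to_template(raw))
# -> <NUM>-<NUM>-<NUM> <NUM>:<NUM>:<NUM> ERROR conn <NUM> to <IP> timed out after <NUM> ms
```

Lines that differ only in these variable fields now map to the same template, which is the structured foundation the later stages build on.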

It turns out that the most effective way to perform ML on logs is to use a pipeline with multiple ML strategies, each suited to a different stage of the process. Specialized ML starts by self-learning (i.e., unsupervised learning) how to structure and categorize the logs; this produces a solid foundation for the remaining ML stages. Next, the ML learns the patterns of each type of log event. Once this learning has occurred, the ML system can identify anomalous log events within each log stream (events that break pattern).
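Continuing the sketch under the same simplifying assumptions, the snippet below shows the learning and detection stages in miniature: the "pattern" of an event type is reduced to how often its template occurs during a normal baseline period, and any event whose template is new or rare relative to that baseline is flagged as anomalous. The function names and the frequency threshold are illustrative, not a description of any particular product's algorithm.

```python
import re
from collections import Counter

def normalize(line: str) -> str:
    # Simplified stand-in for the structuring stage: mask numbers so that lines
    # differing only in variable fields map to the same event template.
    return re.sub(r"\d+", "<NUM>", line).strip()

def learn_baseline(lines):
    """Learn how often each event template occurs during normal operation."""
    return Counter(normalize(line) for line in lines)

def find_anomalies(lines, baseline, rare_threshold=1):
    """Flag events whose template is new, or rarer than the threshold, in the baseline."""
    return [line for line in lines if baseline.get(normalize(line), 0) <= rare_threshold]

baseline = learn_baseline([
    "GET /health 200 in 12 ms",
    "GET /health 200 in 9 ms",
    "GET /health 200 in 14 ms",
])
incident = [
    "GET /health 200 in 11 ms",
    "ERROR db connection pool exhausted after 30 retries",
]
print(find_anomalies(incident, baseline))
# -> ['ERROR db connection pool exhausted after 30 retries']
```

A real pipeline models far more than raw frequency (severity, timing, parameter values), but the shape is the same: learn what normal looks like per stream, then surface the events that break pattern.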

Finally, to pull the signal out of the noise, the system needs to find correlations between anomalies and errors across multiple log streams. This process provides an effective way of uncovering just the sequence of log lines that describes the problem and its root cause. In doing so, it enables accurate detection of new types of failure modes and surfaces the information needed to identify the root cause.
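One plausible (and deliberately simplified) way to do that cross-stream step is sketched below: anomalous events from different log streams are bucketed into short time windows, and only the windows in which two or more streams misbehave together are kept as candidate incident reports. The 30-second window, the stream names, and the tuple format are assumptions made for the example.

```python
from collections import defaultdict

def correlate(anomalies, window_seconds=30):
    """Group anomalous events into time windows and keep only the windows
    where two or more distinct log streams show anomalies together.
    `anomalies` is a list of (timestamp_seconds, stream_name, log_line) tuples."""
    buckets = defaultdict(list)
    for ts, stream, line in anomalies:
        buckets[int(ts // window_seconds)].append((stream, line))
    return [
        events for events in buckets.values()
        if len({stream for stream, _ in events}) > 1
    ]

anomalies = [
    (1000.2, "db",       "ERROR connection pool exhausted"),
    (1004.9, "api",      "ERROR upstream timeout calling orders service"),
    (1007.1, "frontend", "WARN checkout latency above 5s"),
    (2500.0, "api",      "INFO config reloaded"),
]
for cluster in correlate(anomalies):
    print(cluster)
# Only the first three events are reported: they fall in the same window and
# span multiple streams, so together they outline the failure and its likely cause.
```

The isolated config-reload event is dropped as noise, while the correlated cluster reads like the concise incident summary described above.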

Such a methodology is similar to the approach taken by skilled engineers: understanding the logs, identifying rare and high-severity events, and then finding correlations between clusters of these events across multiple log streams. But doing this manually takes considerable time. In practice, the task would be spread out across multiple people in a divide-and-conquer mode in an attempt to accelerate the process and lessen the load on each person. While this inherently makes sense, it creates the additional challenge of requiring team members to communicate so that everyone is aware of every anomaly and error, and observations and learnings are shared across the group. In essence, the team needs to function as a single entity.

A multi-stage ML approach works as a single automated entity, and it should not require any manual training, whether reviewing correlations to tune algorithms or massaging data sets. The system should free up DevOps teams so that they only have to respond to actual findings of root cause, and it should need only a few hours of log data to achieve acceptable accuracy.

While AIOps is useful for reducing the overall event "noise" from the many observability tools in use in an organization, applying multi-stage unsupervised ML to logs is a great way of detecting both new types of failure modes and their root causes. Rather than just triaging a problem and coming up with a quick fix or workaround, the system can determine the true root cause and help avoid similar problems in the future.

Ajay Singh is Founder and CEO of Zebrium
