
Finding the Needle in the Haystack: How Machine Learning Will Revolutionize Root Cause Analysis

Ajay Singh
Zebrium

When a website or app fails or falters, the standard operating procedure is to assemble a sizable team to quickly "divide and conquer" to find a solution. The details of the problem can usually be found somewhere among millions of log events and metrics, leading to slow and painstaking searches that can take hours and often involve handoffs between experts in different areas of the software. The immediate goal in these situations is not to be comprehensive, but rather to troubleshoot until you find a solution that remedies the symptom, even if the underlying root cause is not addressed.

The entire troubleshooting process takes time — generally lots and lots of it — and experience. Development teams tend to be chronically short-staffed and overworked, so adding the burden of hunting for the cause of an app problem carries a substantial opportunity cost, among other things. To help with the task, most companies leverage multiple best-of-breed observability tools, including application performance management (APM), tracing, monitoring and log management, to detect problems and work toward a solution. Although each tool provides useful data, taken together it can be hard for a person to interpret what is important and what is less so.

Instead of a disruptive and often frenzied big-team approach, this kind of challenge is a perfect application for machine learning (ML), which can sift through volumes of data and find meaningful patterns or anomalies that explain the root cause.

AIOps — using AI for IT operations — has emerged as a possible solution for correlating data from multiple tools to reduce noise and translate events into something more meaningful for a user. On the plus side, AIOps solutions are designed to handle events from a wide range of tools, making them versatile. On the negative side, most AIOps solutions require very long training periods (typically many months) against labeled data sets. They also fall short because they are designed to correlate events against known problems rather than find the root cause of new or unknown failure modes. This is a particular weakness in fast-changing cloud-native environments, where new failure modes crop up on a regular basis.

To find the root cause of new failure modes, a different type of AI approach is needed. Since logs often contain the source of truth when a software failure occurs, one approach is to use ML on logs. The concept is to identify just the anomalous patterns in the logs that explain the details of the problem. This can be challenging since logs are mostly unstructured and "noisy." On top of that, log volumes are typically huge, with data coming from many different log streams, each with a large number of log lines. Historical approaches have focused on basic anomaly detection, which not only produces verbose results that require human interpretation but also fails to explain correlations across microservices, often missing key details of the problem entirely.

It turns out that the most effective way to perform ML on logs is to use a pipeline of multiple ML strategies, each suited to a different stage of the process. Specialized ML starts by self-learning (i.e., unsupervised learning) how to structure and categorize the logs — this produces a solid foundation for the remaining ML stages. Next, the ML learns the patterns of each type of log event. Once this learning has occurred, the system can identify anomalous log events within each log stream (events that break pattern).
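To make the first stages concrete, here is a minimal sketch in Python of the idea: raw log lines are reduced to event "templates" by masking variable tokens, the frequency of each template is learned without supervision, and events whose templates are new or rare are flagged as anomalies within a stream. The masking rules, names and threshold are illustrative assumptions for this example, not how any particular product implements these stages; a production system learns log structure rather than relying on hand-written regexes.

```python
import re
from collections import Counter

# Crude masking rules that turn a raw log line into an event "template".
# These regexes are illustrative stand-ins for the unsupervised structuring
# stage; a real system learns log structure rather than hard-coding rules.
MASKS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),
    (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<HEX>"),
    (re.compile(r"\d+"), "<NUM>"),
]

def template(line: str) -> str:
    for pattern, token in MASKS:
        line = pattern.sub(token, line)
    return line.strip()

def learn_patterns(lines):
    """Learn how often each event type (template) occurs in a log stream."""
    return Counter(template(line) for line in lines)

def anomalous_events(lines, counts, rare_threshold=2):
    """Flag events whose template is new or rare relative to what was learned."""
    return [line for line in lines if counts.get(template(line), 0) < rare_threshold]

# Learn from a stretch of "normal" history, then score new lines.
history = [
    "GET /api/items 200 12ms from 10.0.0.5",
    "GET /api/items 200 9ms from 10.0.0.6",
]
counts = learn_patterns(history)
fresh = [
    "GET /api/items 200 11ms from 10.0.0.7",    # familiar pattern
    "ERROR db connection refused to 10.0.0.9",  # never seen before
]
print(anomalous_events(fresh, counts))  # -> only the unfamiliar error line
```

Even this crude version captures the key property of the stage: once event types have been learned, "rare" and "never seen before" become well-defined, machine-checkable conditions rather than judgment calls.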

Finally, to pull the signal out of the noise, the system needs to find correlations between anomalies and errors across multiple log streams. This provides an effective way of uncovering just the sequence of log lines that describes the problem and its root cause. In doing so, it enables accurate detection of new types of failure modes and surfaces the information needed to identify root cause.
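As an illustration of that final correlation stage, the sketch below groups per-stream anomalies that cluster in time and keeps only the clusters that span more than one stream, since those cross-service clusters are the candidates for an incident's root-cause narrative. The stream names, sample anomalies and 30-second window are invented for the example; a fixed time window is simply the most basic possible correlation rule, used here only to show where the cross-stream step fits in the pipeline.

```python
from datetime import datetime, timedelta

# Hypothetical per-stream anomalies as (timestamp, stream name, log line).
# In practice these would come from the per-stream detection stage above.
anomalies = [
    (datetime(2024, 5, 1, 10, 0, 1), "payments", "ERROR db connection refused"),
    (datetime(2024, 5, 1, 10, 0, 3), "checkout", "WARN retrying payment call"),
    (datetime(2024, 5, 1, 10, 0, 4), "frontend", "ERROR 502 from checkout"),
    (datetime(2024, 5, 1, 14, 30, 0), "batch-jobs", "WARN slow nightly query"),
]

def correlate(anomalies, window=timedelta(seconds=30)):
    """Group anomalies that cluster in time, keeping cross-stream clusters."""
    events = sorted(anomalies)
    groups, current = [], [events[0]]
    for event in events[1:]:
        if event[0] - current[-1][0] <= window:
            current.append(event)   # still inside the same burst
        else:
            groups.append(current)  # burst ended, start a new one
            current = [event]
    groups.append(current)
    # Clusters spanning more than one stream are the candidates for an
    # incident narrative; single-stream stragglers are treated as noise.
    return [g for g in groups if len({stream for _, stream, _ in g}) > 1]

for cluster in correlate(anomalies):
    for ts, stream, line in cluster:
        print(ts.time(), stream, line)
# -> prints only the 10:00 burst spanning payments, checkout and frontend
```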

Such a methodology is similar to the approach taken by skilled engineers: understanding the logs, identifying rare and high-severity events, and then finding correlations between clusters of these events across multiple log streams. Doing this manually takes considerable time. In practice, the task would be spread across multiple people in divide-and-conquer mode in an attempt to accelerate the process and lessen each person's load. While this inherently makes sense, it creates the additional challenge of requiring team members to communicate in such a way that everyone is aware of all anomalies and errors, and all observations and learnings are shared across the group. In essence, the team needs to function as a single entity.

A multi-stage ML approach works as a single automated entity, and it should not require any manual training, whether reviewing correlations to tune algorithms or massaging data sets. The system should free up DevOps teams so that they only have to respond to actual findings of root cause, and it should need only a few hours of log data to achieve proper levels of accuracy.

While AIOps is useful for reducing the overall event "noise" from the many observability tools in use in an organization, applying multi-stage unsupervised ML to logs is a great way both to detect new types of failure modes and to find their root cause. Rather than just triaging a problem and coming up with a quick fix or workaround, the system can determine the true root cause and likely avoid similar problems in the future.

Ajay Singh is Founder and CEO of Zebrium

