Finding the Needle in the Haystack: How Machine Learning Will Revolutionize Root Cause Analysis

Ajay Singh
Zebrium

When a website or app fails or falters, the standard operating procedure is to assemble a sizable team that can quickly "divide and conquer" its way to a solution. The details of the problem can usually be found somewhere among millions of log events and metrics, leading to slow and painstaking searches that can take hours and often involve handoffs between experts in different areas of the software. The immediate goal in these situations is not to be comprehensive, but rather to troubleshoot until you find a fix that remedies the symptom, even if the underlying root cause is not addressed.

The entire troubleshooting process takes time — generally lots and lots of it — and experience. Development teams tend to be chronically short-staffed and overworked, so adding the burden of hunting for the cause of an app problem carries a substantial opportunity cost, among other things. To help with the task, most companies leverage multiple best-of-breed observability tools, including application performance management (APM), tracing, monitoring and log management, to detect the problem and work toward a solution. Although each tool provides useful data, taken together they can make it hard for a person to judge what is important and what is less so.

Instead of a disruptive and often frenzied big-team approach, this kind of challenge is a perfect application for machine learning (ML), which can sift through volumes of data and find meaningful patterns or anomalies that explain the root cause.

AIOps — using AI for IT operations — has emerged as a possible solution for correlating data from multiple tools to reduce noise and translate events into something more meaningful for a user. On the plus side, AIOps solutions are designed to handle events from a wide range of tools, making them versatile. On the negative side, most AIOps solutions require very long training periods (typically many months) against labeled data sets. These solutions also fall short because they are designed to correlate events against known problems rather than find the root cause of new or unknown failure modes. This is a particular weakness in fast-changing cloud-native environments, where new failure modes crop up on a regular basis.

To find the root cause of new failure modes, a different type of AI approach is needed. Since logs often contain the source of truth when a software failure occurs, one approach is to use ML on logs. The concept is to identify just the anomalous patterns in the logs that explain the details of the problem. This can be challenging, since logs are mostly unstructured and "noisy." On top of that, log volumes are typically huge, with data coming from many different log streams, each with a large number of log lines. Historical approaches have focused on basic anomaly detection, which not only produces verbose results that require human interpretation but also fails to explain correlations across microservices, often missing key details of the problem entirely.

It turns out that the most effective way to perform ML on logs is to use a pipeline of several different ML strategies, each suited to a stage of the process. Specialized ML starts by self-learning (i.e., unsupervised) how to structure and categorize the logs — this produces a solid foundation for the remaining ML stages. Next, the ML learns the patterns of each type of log event. Once this learning has occurred, the ML system can identify anomalous log events within each log stream (events that break pattern).
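
To make these first stages concrete, here is a minimal Python sketch, not any particular product's implementation: the regex masks, rarity threshold and sample log lines are all illustrative assumptions, and a production system would learn the event structure statistically rather than rely on hand-written rules like these.

```python
import re
from collections import Counter

# Illustrative masks for variable tokens; a real system would learn these.
MASKS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),
    (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<HEX>"),
    (re.compile(r"\b\d+\b"), "<NUM>"),
]

def event_type(line: str) -> str:
    """Stage 1: reduce a raw log line to its structural template."""
    for pattern, token in MASKS:
        line = pattern.sub(token, line)
    return line.strip()

def learn_frequencies(lines):
    """Stage 2: learn how often each event type occurs."""
    return Counter(event_type(line) for line in lines)

def is_anomalous(line, freqs, total, threshold=0.001):
    """Stage 3: flag event types rarer than a threshold (hypothetical value)."""
    return freqs[event_type(line)] / total < threshold

# Usage: learn from a window of historical logs, then score new lines.
history = [
    "accepted connection from 10.0.0.5 port 443",
    "accepted connection from 10.0.0.9 port 443",
    "wrote 4096 bytes to buffer 0x7f3a",
]
freqs = learn_frequencies(history)
print(is_anomalous("segfault at address 0x0", freqs, total=len(history)))  # True
print(is_anomalous("accepted connection from 10.0.0.7 port 80", freqs,
                   total=len(history)))                                     # False
```

Real pipelines cluster similar lines to discover templates automatically (the masking above stands in for that step) and track per-stream behavior over time rather than a single global frequency table.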

Finally, to pull the signal out of the noise, the system needs to find correlations between anomalies and errors across multiple log streams. This process provides an effective way of uncovering just the sequence of log lines that describes the problem and its root cause. In doing so, it enables accurate detection of new types of failure modes and surfaces the information needed to identify the root cause.
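
Continuing the sketch, the correlation stage could be approximated by bucketing per-stream anomalies into fixed time windows and keeping only windows where several streams misbehave at once. The window size, minimum stream count, and the stream names and events below are hypothetical.

```python
from collections import defaultdict

def correlate(anomalies, window_sec=60, min_streams=2):
    """Group (timestamp_sec, stream, line) anomalies into fixed time windows
    and keep only windows where anomalies span multiple log streams."""
    buckets = defaultdict(list)
    for ts, stream, line in anomalies:
        buckets[int(ts // window_sec)].append((stream, line))
    return {
        window * window_sec: events
        for window, events in buckets.items()
        if len({stream for stream, _ in events}) >= min_streams
    }

# Usage with made-up anomalies from two services: only the first two fall in
# the same window across different streams, so only that window (starting at
# t=960s) is reported as a candidate incident.
incidents = correlate([
    (1000, "api-gateway", "upstream timeout after 30000 ms"),
    (1012, "payments", "db connection pool exhausted"),
    (1900, "payments", "retrying request 1 of 3"),
])
for start, events in incidents.items():
    print(start, events)
```

In practice, a correlation stage would also weigh the rarity and severity of each event, not just its co-occurrence in time with others.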

Such a methodology is similar to the approach taken by skilled engineers — understanding the logs, identifying rare and high-severity events and then finding correlations between clusters of these events across multiple log streams. But doing this manually takes considerable time. In practice, the task would be spread across multiple people in a divide-and-conquer mode in an attempt to accelerate the process and lessen the load on each person. While this inherently makes sense, it creates an additional challenge: team members must communicate in such a way that everyone is aware of every anomaly and error, and all observations and learnings are shared across the group. In essence, the team needs to function as a single entity.

A multi-stage ML approach works as a single automated entity, and it should not require any manual training, whether reviewing correlations to tune algorithms or massaging data sets. The system should free up DevOps teams so that they only have to respond to actual findings of root cause. Such a system should need only a few hours of log data to achieve acceptable accuracy.

While AIOps is useful for reducing the overall event "noise" from the many observability tools in use in an organization, applying multi-stage unsupervised ML to logs is a great way of both detecting new types of failure modes and identifying their root causes. Rather than just triaging a problem and coming up with a quick fix or workaround, the system can determine the true root cause and likely help avoid similar problems in the future.

Ajay Singh is Founder and CEO of Zebrium
