
Optimizing Root Cause Analysis to Reduce MTTR

Ariel Gordon

Efficiently detecting and resolving problems is essential, of course, to keep business services running and to minimize both their operational and financial impact.

The goal is to turn the tables on IT problems: instead of spending 80 percent of the time finding the root cause and only 20 percent actually fixing the problem, the ratio should be reversed.

In resolving the issue, communication is critical for aligning different expert groups on a common goal. Because each team has a narrow view limited to its own domain and expertise, there is always a danger that the "big picture" will be lost. You don't want a lack of communication to degenerate into blame games and finger pointing.

Some problem detection methods include:

- Infrastructure Monitoring: Tracking utilization of specific resources such as disk, memory, and CPU is effective for identifying availability failures, and sometimes even heads them off before they happen.

- Domain or Application Tools: These help, but overall problem detection remains a game of hide-and-seek, a manually intensive effort carried out under pressure to deliver a fix as quickly as possible.

- Dependency Mapping Tools: These map business services and applications to infrastructure components and can generate a topology map that improves your root cause analysis process (a short sketch of the idea follows this list) for the following reasons:

1. Connect Symptoms to Problems: A single map that relates a business service (the user's point of view) to its configuration items helps you detect problems faster.

2. Common Ground: The map ties all the elements together, giving the different groups a shared reference for a cross-domain effort.

3. High-Level, Cross-Domain View: Teams can view problems not only in the context of their own domain, but in the wider context of all network components. For example, a database administrator analyzing slow database performance can examine the topology map to see how networking components affect the database.
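To make the idea concrete, here is a minimal sketch in Python of how a topology map can connect a user-visible symptom on a business service to the infrastructure components that are actually raising alerts. The service names, the hand-built dependency graph, and the alert feed are hypothetical; a dependency mapping tool would discover and maintain this data automatically.

```python
# Minimal sketch: walk a service-to-infrastructure topology map to find
# which underlying configuration items (CIs) are currently raising alerts.
# TOPOLOGY and ACTIVE_ALERTS are hypothetical, hand-built examples.

# Each key depends on the CIs listed as its value.
TOPOLOGY = {
    "online-banking": ["web-frontend", "payments-api"],
    "web-frontend":   ["app-server-1", "load-balancer"],
    "payments-api":   ["app-server-2", "payments-db"],
    "payments-db":    ["db-host-3", "san-volume-7"],
    "app-server-1":   ["vm-host-a"],
    "app-server-2":   ["vm-host-a"],
}

# Active alerts per CI, e.g. fed in from infrastructure monitoring.
ACTIVE_ALERTS = {
    "san-volume-7": "disk latency above threshold",
    "payments-db":  "slow queries",
}

def trace_root_causes(service, topology=TOPOLOGY, alerts=ACTIVE_ALERTS):
    """Walk from a business service down through its dependencies and
    return every CI on the way that has an active alert."""
    suspects, seen, stack = [], set(), [service]
    while stack:
        ci = stack.pop()
        if ci in seen:
            continue
        seen.add(ci)
        if ci in alerts:
            suspects.append((ci, alerts[ci]))
        stack.extend(topology.get(ci, []))
    return suspects

if __name__ == "__main__":
    # A user-facing symptom ("online banking is slow") is traced to the
    # alerting CIs beneath it, giving every team the same starting point.
    for ci, alert in trace_root_causes("online-banking"):
        print(f"{ci}: {alert}")
```

Because every team reads the same map, the database and storage administrators can see that the "online banking is slow" symptom, the slow-query alert, and the SAN latency alert are links in one chain rather than grounds for a dispute over whose domain is at fault.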

Root cause analysis is a complex undertaking, and no single tool or approach will give you full coverage. The idea is to plan a portfolio of tools that together deliver the most impact for your organization.

For instance, if you do not have a central event management console, consider implementing a topology-based event management solution. If most of your applications involve online transactions, look for a transaction management product that covers the technology stack common in your environment. Put differently, select the combination of tools that is right for your environment.
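As a rough illustration of what "topology-based" means for event management, the sketch below reuses the dependency-graph idea from the earlier example to separate probable root causes from downstream symptoms: an event on a CI whose own dependencies are quiet is kept as a likely cause, while events on CIs that sit on top of another alerting CI are demoted to symptoms. The graph, CI names, and events are again hypothetical, and a real product would add time windows, deduplication, and severity handling.

```python
# Sketch of topology-based event correlation. TOPOLOGY maps each CI to the
# CIs it depends on (consumer -> providers); EVENTS is a flood of raw alerts.
# Names and events are hypothetical examples.

TOPOLOGY = {
    "web-frontend": ["app-server-1"],
    "app-server-1": ["vm-host-a", "payments-db"],
    "payments-db":  ["san-volume-7"],
}

EVENTS = {
    "web-frontend": "HTTP 500 rate spike",
    "payments-db":  "query latency high",
    "san-volume-7": "disk latency above threshold",
}

def dependencies_of(ci, topology, seen=None):
    """All CIs that `ci` depends on, directly or transitively."""
    seen = set() if seen is None else seen
    for dep in topology.get(ci, []):
        if dep not in seen:
            seen.add(dep)
            dependencies_of(dep, topology, seen)
    return seen

def correlate(events, topology):
    """Split raw events into probable root causes and downstream symptoms."""
    roots, symptoms = {}, {}
    for ci, event in events.items():
        deps = dependencies_of(ci, topology)
        if any(dep in events for dep in deps):
            symptoms[ci] = event   # something this CI relies on is also alerting
        else:
            roots[ci] = event      # nothing beneath this CI is alerting
    return roots, symptoms

if __name__ == "__main__":
    roots, symptoms = correlate(EVENTS, TOPOLOGY)
    print("probable root cause:", roots)     # {'san-volume-7': 'disk latency above threshold'}
    print("suppressed symptoms:", symptoms)  # web-frontend and payments-db events
```

The core design choice is that the topology, not the order in which events arrive, decides which event is the cause and which are echoes; that is what lets a single console collapse an event storm into one actionable incident.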

Once you have assessed which tools provide the most value, implement them in descending order of value so that you get the biggest impact first.

Ariel Gordon is VP Products and Co-Founder of Neebula.
