The 4 Building Blocks of Root Cause Analysis

With every minute you can shave off root cause analysis, you get a minute closer to restoring the performance or availability of a process that's important to your business. But the plethora of monitoring tools used throughout your organization, each with its own root cause perspective about the IT environment, can lead to confusion, dysfunction and drawn-out debate when things go wrong. To get the most business value from these diverse views, you need to understand how they can work together.

Think of root cause analysis as a software stack: the higher a layer sits in the stack, the more meaningful it is from a business perspective. For example, in the Open Systems Interconnection (OSI) stack, understanding layer 1, the physical layer, is vital, but layer 7, the application, is more meaningful to the business.

Each layer in the root cause analysis stack is supported by its own monitoring functions, analytics and visualization. Here they are, top down:

- Business Service Root Cause Analysis

- Application-Driven Root Cause Analysis

- Network Fault Root Cause Analysis

- Device Root Cause Analysis

Think of adding each layer in terms of a geometrical analogy of human awareness cleverly explained by the Russian philosopher P.D. Ouspensky in his book Tertium Organum. As he explained, if you were one-dimensional, a point, you couldn't conceive of a line. If you were a line, you couldn't perceive two dimensions: a square. If you were a square, you couldn't understand a cube. And if you were a cube, you couldn't understand motion.

Let's see how each layer performs legitimate root cause analysis in its own right, and how each successive layer up the stack adds awareness and greater business value.

1. Device Root Cause Analysis

The device layer is the foundation, letting you know whether a server, storage device, switch, router or load balancer is simply up or down, fast or slow. If it's pingable, you know it has a power source, and diagnostics can tell you which subcomponent has the fault causing the outage. For the root cause of performance issues, you'll rely on your monitoring tools' visual correlation of time series data and threshold alerts to see whether the CPU, memory, disk, ports, etc. are degraded and why.
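
To make this concrete, here is a minimal sketch in Python of the two device-level checks described above: a reachability test and static threshold alerts on the latest metric samples. The host name, threshold values and metric samples are hypothetical, and the ping flags assume a Linux system; a real monitoring tool would collect these metrics via SNMP, WMI or an agent and baseline them over time.

```python
# A minimal sketch of device-level checks: reachability plus threshold alerts.
# Device name, thresholds and metric values are hypothetical.
import subprocess

THRESHOLDS = {"cpu_pct": 90, "mem_pct": 85, "disk_pct": 95}  # assumed limits

def is_reachable(host: str) -> bool:
    """Single ICMP echo; success tells you the device has power and a network path."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            capture_output=True)
    return result.returncode == 0

def threshold_alerts(host: str, metrics: dict) -> list[str]:
    """Compare the latest samples against static thresholds."""
    return [f"{host}: {name}={value} exceeds {THRESHOLDS[name]}"
            for name, value in metrics.items()
            if name in THRESHOLDS and value > THRESHOLDS[name]]

if __name__ == "__main__":
    host = "core-switch-01"                                   # hypothetical device
    samples = {"cpu_pct": 97, "mem_pct": 62, "disk_pct": 40}  # latest samples
    if not is_reachable(host):
        print(f"{host}: DOWN - device fault or upstream network fault")
    for alert in threshold_alerts(host, samples):
        print(alert)
```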

But if servers or network devices aren't reachable, how do you know for sure if they are down or if there's an upstream network root cause? To see this, you need to add a higher layer of monitoring and analytics.

2. Network Fault Root Cause Analysis

The next layer is Network Fault Root Cause Analysis. This is partly based on a mechanism called inductive modeling, which discovers relationships between networked devices by examining port connections and the routing and configuration tables in each device.

When an outage occurs, inference, a related Network Root Cause Analysis mechanism, uses known network relationships to determine which devices are downstream from the one that is down. So instead of drowning in a sea of red alerts for all the unreachable devices, you get one upstream network root cause alert. This can also be applied to virtual servers and their underlying physical hosts, as well as network configuration issues.
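
To illustrate how inference can collapse a flood of red alerts into a single root cause, here is a minimal sketch that assumes a simple tree topology in which each device has one upstream neighbor toward the management station. The device names and topology are hypothetical; commercial tools build this map automatically through discovery and handle meshed and redundant topologies.

```python
# A minimal sketch of inference over a discovered topology: suppress alerts for
# devices that are unreachable only because an upstream device is down.
# Topology and device names are hypothetical.
UPSTREAM = {                      # device -> its upstream neighbor toward the NOC
    "core-router": None,
    "dist-switch-a": "core-router",
    "access-switch-1": "dist-switch-a",
    "access-switch-2": "dist-switch-a",
    "server-42": "access-switch-1",
}

def root_cause_alerts(unreachable: set[str]) -> list[str]:
    """Alert only on unreachable devices whose upstream neighbor is still reachable."""
    alerts = []
    for device in sorted(unreachable):
        parent = UPSTREAM.get(device)
        if parent is None or parent not in unreachable:
            alerts.append(f"ROOT CAUSE: {device} is down")
        else:
            alerts.append(f"suppressed: {device} (downstream of {parent})")
    return alerts

if __name__ == "__main__":
    down = {"dist-switch-a", "access-switch-1", "access-switch-2", "server-42"}
    print("\n".join(root_cause_alerts(down)))  # one root cause, three suppressed
```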

3. Application-Driven Root Cause Analysis

Next up is Application-Driven Network Performance Management, which includes two monitoring technologies: network flow analysis and end-to-end application delivery analysis.

The first mechanism lets you see which applications are running on your network segments and how much bandwidth each is using. When users complain that an application service is slow, this can tell you whether a bandwidth-monopolizing application is the root cause. Visualization includes stacked protocol charts, top hosts, top talkers, etc.
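
As a rough illustration of this first mechanism, the sketch below aggregates hypothetical flow records into bandwidth share per application and lists the top talkers. In practice the records would come from NetFlow, IPFIX or sFlow exports rather than a hard-coded list.

```python
# A minimal sketch of flow analysis: bandwidth per application and top talkers.
# The flow records below are hypothetical sample data.
from collections import defaultdict

flows = [  # (application, source host, bytes transferred in the interval)
    ("backup",  "10.0.0.12", 4_200_000_000),
    ("video",   "10.0.0.55",   900_000_000),
    ("crm-app", "10.0.0.23",    60_000_000),
    ("crm-app", "10.0.0.24",    45_000_000),
]

by_app, by_host = defaultdict(int), defaultdict(int)
for app, host, nbytes in flows:
    by_app[app] += nbytes
    by_host[host] += nbytes

total = sum(by_app.values())
print("Bandwidth share by application:")
for app, nbytes in sorted(by_app.items(), key=lambda kv: kv[1], reverse=True):
    print(f"  {app:8s} {100 * nbytes / total:5.1f}%")

print("Top talkers:")
for host, nbytes in sorted(by_host.items(), key=lambda kv: kv[1], reverse=True)[:3]:
    print(f"  {host:12s} {nbytes / 1e9:.2f} GB")
```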

The second mechanism in this layer shows you end-to-end application response timing: network round trip, retransmission, data transfer and server response. Presented together in a stacked graph, these reveal whether the network, the server or the application itself is impacting response. To see the detailed root cause in the offending domain, you drill down into a lower layer (e.g., into a network flow analysis, device monitoring or an application forensic tool).
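
The sketch below illustrates the idea behind this second mechanism: sum the network-side components, compare them with server response time, and point triage at whichever domain dominates. The millisecond values and the simple largest-share rule are assumptions for illustration; real tools compare each component against its own historical baseline.

```python
# A minimal sketch of end-to-end response decomposition.
# Timing values and the "largest share wins" rule are hypothetical.
def dominant_domain(timings_ms: dict) -> str:
    domains = {
        "network": timings_ms["network_round_trip"]
                   + timings_ms["retransmission"]
                   + timings_ms["data_transfer"],
        "server/application": timings_ms["server_response"],
    }
    return max(domains, key=domains.get)

sample = {
    "network_round_trip": 40,
    "retransmission": 5,
    "data_transfer": 60,
    "server_response": 480,   # this transaction spends most of its time in the server
}
total = sum(sample.values())
print(f"Total response: {total} ms; investigate the {dominant_domain(sample)} domain")
```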

4. Business Service Root Cause Analysis

The best practice is to unify the three layers into a single infrastructure management dashboard, so you can visually correlate all three levels of analytics in an efficient workflow. This is ideal for technical Level 2 Operations specialists and administrators.

But there's one more level at the top of the stack: Business Service Root Cause Analysis. This gives IT Operations Level 1 staff the greatest insight into how infrastructure is impacting business processes.

Examples of business processes include: Concept To Product, Product To Launch, Opportunity To Order, Order To Cash, Request To Service, Design To Build, Manufacturing To Distribution, Build To Order, Build To Stock, Requisition To Payables and so on.

At this layer of the stack, you monitor application and infrastructure components in groups that support each business process. This allows you to monitor each business process as you would an IT infrastructure service, and a mechanism called service impact analysis rates the relative impact each component has on the service performance. From there you can drill down into a lower layer in the stack to see the technical root cause details of the service impact (network outage, not enough bandwidth, server memory degradation, packet loss, not enough host resources for a virtual server, application logic error, etc.).
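
Here is a minimal sketch of what service impact analysis might look like: components are grouped under a business process, and each component's degradation is weighted by how heavily the service depends on it. The component names, weights and health scores are hypothetical; real products derive the service model and weightings from discovery and configuration.

```python
# A minimal sketch of service impact analysis for one business process.
# Component names, weights and health scores are hypothetical.
ORDER_TO_CASH = {               # component -> weight (relative dependence)
    "order-web-frontend": 0.4,
    "payment-gateway":    0.3,
    "erp-database":       0.2,
    "wan-link-dc1-dc2":   0.1,
}

def service_impact(health: dict) -> list[tuple[str, float]]:
    """Rank components by weighted degradation (health 1.0 = healthy, 0.0 = down)."""
    impact = {c: w * (1.0 - health.get(c, 1.0)) for c, w in ORDER_TO_CASH.items()}
    return sorted(impact.items(), key=lambda kv: kv[1], reverse=True)

current_health = {
    "order-web-frontend": 0.95,
    "payment-gateway":    0.40,   # degraded component
    "erp-database":       1.00,
    "wan-link-dc1-dc2":   0.70,
}
for component, score in service_impact(current_health):
    print(f"{component:20s} impact on Order To Cash: {score:.2f}")
```

From the top-ranked component you would then drill down into the device, network or application layer for the technical details.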

Once you have a clear understanding of this architecture, and a way to unify the information into a smooth workflow for triage, you can put the human processes in place to realize its business value.

ABOUT David Hayward

David Hayward is Senior Principal Manager, Solutions Marketing at CA Technologies. Hayward specializes in integrated network, systems and application performance management, and his research, writing and speaking engagements focus on IT operations maturity challenges, best practices and IT management software return on investment. He began his career in 1979 as an editor at the groundbreaking BYTE computer magazine and has since held senior marketing positions at tier-one and startup computer system, networking, data warehousing, VoIP and security solution vendors.

