The 4 Building Blocks of Root Cause Analysis

With every minute you can shave off root cause analysis, you get a minute closer to restoring the performance or availability of a process that's important to your business. But the plethora of monitoring tools used throughout your organization, each with its own root cause perspective about the IT environment, can lead to confusion, dysfunction and drawn-out debate when things go wrong. To get the most business value from these diverse views, you need to understand how they can work together.

Think of root cause analysis as a software stack: the higher a layer sits in the stack, the more meaningful it is from a business perspective. For example, in the Open Systems Interconnection (OSI) stack, understanding layer 1, the physical layer, is vital, but layer 7, the application, is more meaningful to the business.

Each layer in the root cause analysis stack is supported by its own monitoring functions, analytics and visualization. Here they are, top down:

- Business Service Root Cause Analysis

- Application-Driven Root Cause Analysis

- Network Fault Root Cause Analysis

- Device Root Cause Analysis

Think of adding each layer in terms of a geometrical analogy of human awareness cleverly explained by the Russian philosopher P.D. Ouspensky in his book Tertium Organum. As he explained, if you were a dimensionless point, you couldn't conceive of a line. If you were a line, you couldn't perceive two dimensions: a square. If you were a square, you couldn't understand a cube. And if you were a cube, you couldn't understand motion.

Let's see how each layer performs legitimate root cause analysis and how each successive layer up the stack adds awareness and greater business value.

1. Device Root Cause Analysis

The device layer is the foundation, letting you know whether a server, storage device, switch, router, load balancer, etc. is simply up or down, fast or slow. If it's pingable, you know it has a power source, and diagnostics can tell you which subcomponent has the fault causing the outage. For the root cause of performance issues, you'll rely on your monitoring tools' visual correlation of time series data and threshold alerts to see if the CPU, memory, disk, ports, etc. are degraded and why.
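
As a rough illustration, here is a minimal Python sketch of device-layer checks: a reachability test plus static threshold alerts on the latest metric samples. The host name, thresholds and sample values are all hypothetical, and real monitoring tools collect these metrics continuously via SNMP or agents rather than with one-off pings.

```python
# A minimal sketch of device-layer checks; host, thresholds and samples
# are hypothetical. Assumes a Linux-style ping with -c (count) and -W (timeout).
import subprocess

# Hypothetical per-metric alert thresholds (percent utilization).
THRESHOLDS = {"cpu": 90.0, "memory": 85.0, "disk": 80.0}

def is_pingable(host: str) -> bool:
    """Return True if the host answers a single ICMP echo request."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def threshold_alerts(samples: dict[str, float]) -> list[str]:
    """Compare the latest metric samples against static thresholds."""
    return [
        f"{metric} at {value:.0f}% exceeds {THRESHOLDS[metric]:.0f}%"
        for metric, value in samples.items()
        if metric in THRESHOLDS and value > THRESHOLDS[metric]
    ]

if __name__ == "__main__":
    host = "app-server-01.example.com"  # hypothetical device
    if not is_pingable(host):
        print(f"{host} unreachable: fault diagnostics or upstream analysis needed")
    else:
        # Hypothetical latest samples from the device's time series.
        for alert in threshold_alerts({"cpu": 96.0, "memory": 62.0, "disk": 88.0}):
            print(f"{host}: {alert}")
```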

But if servers or network devices aren't reachable, how do you know for sure if they are down or if there's an upstream network root cause? To see this, you need to add a higher layer of monitoring and analytics.

2. Network Fault Root Cause Analysis

The next layer is Network Fault Root Cause Analysis. It is based in part on a mechanism called inductive modeling, which maps relationships between networked devices by discovering the port connections and the routing and configuration tables in each device.

When an outage occurs, inference, a related Network Root Cause Analysis mechanism, uses known network relationships to determine which devices are downstream from the one that is down. So instead of drowning in a sea of red alerts for all the unreachable devices, you get one upstream network root cause alert. This can also be applied to virtual servers and their underlying physical hosts, as well as network configuration issues.
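
To make the inference idea concrete, here is a minimal Python sketch over a hypothetical topology model: the UPSTREAM map stands in for the relationships that inductive modeling discovers. A device is flagged as a root cause only if its upstream neighbor is still reachable; every other unreachable device is downstream collateral whose alerts can be suppressed.

```python
# A minimal sketch of fault inference; the topology is hypothetical.
# Each device maps to its upstream neighbor (toward the monitoring station).
UPSTREAM = {
    "core-router": None,
    "dist-switch-1": "core-router",
    "access-switch-7": "dist-switch-1",
    "server-42": "access-switch-7",
    "server-43": "access-switch-7",
}

def root_causes(unreachable: set[str]) -> set[str]:
    """Return only the unreachable devices whose upstream neighbor is reachable.

    Everything else in the unreachable set is downstream of another failure,
    so its alerts are suppressed in favor of the single root cause alert.
    """
    return {
        device
        for device in unreachable
        if UPSTREAM.get(device) not in unreachable
    }

if __name__ == "__main__":
    down = {"dist-switch-1", "access-switch-7", "server-42", "server-43"}
    print(root_causes(down))  # {'dist-switch-1'}: one alert instead of four
```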

3. Application-Driven Root Cause Analysis

Next up is Application-Driven Network Performance Management, which includes two monitoring technologies: network flow analysis and end-to-end application delivery analysis.

The first mechanism lets you see which applications are running on your network segments and how much bandwidth each is using. When users complain that an application service is slow, this can tell you whether a bandwidth-monopolizing application is the root cause. Visualizations include stacked protocol charts, top hosts, top talkers, etc.
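
As a simplified illustration of flow analysis, the Python sketch below aggregates hypothetical flow records into top applications and top talkers by bytes transferred. The record format and values are invented; real collectors ingest NetFlow, IPFIX or sFlow records at scale.

```python
# A minimal sketch of flow summarization; the records are hypothetical.
from collections import Counter

# Hypothetical flow records: (source host, application, bytes transferred).
flows = [
    ("10.0.1.15", "video-streaming", 840_000_000),
    ("10.0.1.22", "crm-app", 12_000_000),
    ("10.0.1.15", "video-streaming", 910_000_000),
    ("10.0.1.31", "backup", 450_000_000),
]

bytes_by_app = Counter()
bytes_by_host = Counter()
for host, app, nbytes in flows:
    bytes_by_app[app] += nbytes
    bytes_by_host[host] += nbytes

# The biggest consumers are the first suspects for bandwidth monopolization.
print("Top applications:", bytes_by_app.most_common(2))
print("Top talkers:", bytes_by_host.most_common(2))
```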

The second mechanism in this layer shows you end-to-end application response timing: network round trip, retransmission, data transfer and server response. Presented together in a stacked graph, these reveal whether the network, the server or the application itself is impacting response. To see the detailed root cause in the offending domain, you drill down into a lower layer (e.g., into a network flow analysis, device monitoring or application forensic tool).
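
Here is a minimal Python sketch of that triage logic, using hypothetical timing components: it attributes a slow transaction to the domain behind the largest slice of the stacked graph, which tells you where to drill down next.

```python
# A minimal sketch of response-time triage; timings (ms) are hypothetical.

def offending_domain(timing: dict[str, float]) -> str:
    """Map the largest timing component to the domain to investigate."""
    domain_of = {
        "network_round_trip": "network",
        "retransmission": "network",
        "data_transfer": "network",
        "server_response": "server or application",
    }
    worst = max(timing, key=timing.get)
    return (f"{worst} dominates ({timing[worst]:.0f} ms): "
            f"drill down into the {domain_of[worst]}")

if __name__ == "__main__":
    # Hypothetical breakdown of a 1.2-second transaction.
    print(offending_domain({
        "network_round_trip": 80.0,
        "retransmission": 40.0,
        "data_transfer": 120.0,
        "server_response": 960.0,
    }))
```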

4. Business Service Root Cause Analysis

The best practice is to unify the three layers into a single infrastructure management dashboard, so you can visually correlate all three levels of analytics in an efficient workflow. This is ideal for technical Level 2 Operations specialists and administrators.

But there's one more level at the top of the stack: Business Service Root Cause Analysis. This gives IT Operations Level 1 staff the greatest insight into how infrastructure is impacting business processes.

Examples of business processes include: Concept To Product, Product To Launch, Opportunity To Order, Order To Cash, Request To Service, Design To Build, Manufacturing To Distribution, Build To Order, Build To Stock, Requisition To Payables and so on.

At this layer of the stack, you monitor application and infrastructure components in groups that support each business process. This allows you to monitor each business process as you would an IT infrastructure service, and a mechanism called service impact analysis rates the relative impact each component has on the service performance. From there you can drill down into a lower layer in the stack to see the technical root cause details of the service impact (network outage, not enough bandwidth, server memory degradation, packet loss, not enough host resources for a virtual server, application logic error, etc.).
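
As a toy illustration of service impact analysis, the Python sketch below rates the components of a hypothetical Order To Cash service by their weight in the service model and their current health. Both numbers are invented for the example; a real tool derives them from its discovered service model and live metrics.

```python
# A minimal sketch of service impact scoring; weights and health scores
# are hypothetical. Components supporting an "Order To Cash" process:
# (component, weight of its role in the service, health score 0.0-1.0).
COMPONENTS = [
    ("order-db", 0.4, 0.55),
    ("payment-gateway", 0.3, 0.98),
    ("web-frontend", 0.2, 0.90),
    ("wan-link-nyc", 0.1, 0.99),
]

def rank_by_impact(components):
    """Rate each component's relative impact on service performance."""
    impacts = [(name, weight * (1.0 - health)) for name, weight, health in components]
    return sorted(impacts, key=lambda pair: pair[1], reverse=True)

for name, impact in rank_by_impact(COMPONENTS):
    print(f"{name}: impact score {impact:.3f}")
# order-db tops the list, so that's where the drill-down starts.
```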

Once you have a clear understanding of this architecture, and a way to unify the information into a smooth workflow for triage, you can put the human processes in place to realize its business value.

ABOUT David Hayward

David Hayward is Senior Principal Manager, Solutions Marketing at CA Technologies. Hayward specializes in integrated network, systems and application performance management, and his research, writing and speaking engagements focus on IT operations maturity challenges, best practices and IT management software return on investment. He began his career in 1979 as an editor at the groundbreaking BYTE computer magazine and has since held senior marketing positions at tier-one and startup computer system, networking, data warehousing, VoIP and security solution vendors.

