The 4 Building Blocks of Root Cause Analysis
May 06, 2014
David Hayward

With every minute you can shave off root cause analysis, you get a minute closer to restoring the performance or availability of a process that's important to your business. But the plethora of monitoring tools used throughout your organization, each with its own root cause perspective about the IT environment, can lead to confusion, dysfunction and drawn-out debate when things go wrong. To get the most business value from these diverse views, you need to understand how they can work together.

Think of root cause analysis as a software stack: the higher a layer sits in the stack, the more meaningful it is from a business perspective. For example, in the Open Systems Interconnection (OSI) stack, understanding layer 1, the physical layer, is vital, but layer 7, the application layer, is more meaningful to the business.

Each layer in the root cause analysis stack is supported by its own monitoring functions, analytics and visualization. Here they are, top down:

- Business Service Root Cause Analysis

- Application-Driven Root Cause Analysis

- Network Fault Root Cause Analysis

- Device Root Cause Analysis

Think of adding each layer in terms of a geometrical analogy of human awareness cleverly explained by the Russian philosopher P.D. Ouspensky in his book Tertium Organum. As he explained, if you were one-dimensional, a point, you couldn't conceive of a line. If you were a line, you couldn't perceive two dimensions: a square. If you were a square, you couldn't understand a cube. And if you were a cube, you couldn't understand motion.

Let's see how each layer performs legitimate root cause analysis in its own right, and how each successive layer up the stack adds awareness and greater business value.

1. Device Root Cause Analysis

The device layer is the foundation, letting you know whether a server, storage device, switch, router, load balancer or other component is simply up or down, fast or slow. If it's pingable, you know it has a power source, and diagnostics can tell you which subcomponent has the fault causing the outage. For the root cause of performance issues, you'll rely on your monitoring tools' visual correlation of time series data and threshold alerts to see whether the CPU, memory, disk, ports and so on are degraded and why.
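To make the mechanics concrete, here is a minimal sketch of device-layer checks: one ping for reachability and a simple threshold comparison over the latest metric samples. The device name, address, thresholds and sample values are all illustrative assumptions, not taken from any particular monitoring product.

```python
import subprocess
from dataclasses import dataclass

# Illustrative static thresholds; real tools let you tune these per device.
THRESHOLDS = {"cpu_pct": 90.0, "mem_pct": 85.0, "disk_pct": 95.0}

@dataclass
class Device:
    name: str
    address: str

def is_reachable(device: Device) -> bool:
    """Ping once; a reply at least tells you the device is powered and on the network."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", device.address],
        capture_output=True,
    )
    return result.returncode == 0

def threshold_alerts(device: Device, metrics: dict[str, float]) -> list[str]:
    """Compare the latest metric samples against the static thresholds."""
    return [
        f"{device.name}: {metric} at {value:.1f}% exceeds {limit:.1f}%"
        for metric, value in metrics.items()
        if (limit := THRESHOLDS.get(metric)) is not None and value > limit
    ]

router = Device("edge-router-1", "192.0.2.10")  # hypothetical device
if not is_reachable(router):
    print(f"{router.name} is unreachable")  # down, or an upstream fault? See layer 2.
else:
    for alert in threshold_alerts(router, {"cpu_pct": 97.2, "mem_pct": 61.0}):
        print(alert)
```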

But if servers or network devices aren't reachable, how do you know for sure whether they are down or whether the root cause lies upstream in the network? To see this, you need to add a higher layer of monitoring and analytics.

2. Network Fault Root Cause Analysis

The next layer is Network Fault Root Cause Analysis. This is partly based on a mechanism called inductive modeling, which maps relationships between networked devices by discovering port connections and reading the routing and configuration tables in each device.

When an outage occurs, a related mechanism called inference uses the known network relationships to determine which devices are downstream from the one that is down. So instead of drowning in a sea of red alerts for all the unreachable devices, you get a single upstream network root cause alert. The same approach applies to virtual servers and their underlying physical hosts, as well as to network configuration issues.
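As a rough illustration of the inference step, the sketch below takes a discovered topology (upstream-to-downstream links) and collapses a flood of unreachable-device alerts into one root cause alert plus a list of suppressed downstream symptoms. The topology and device names are invented for the example.

```python
topology = {  # upstream -> directly connected downstream devices (illustrative)
    "core-switch": ["dist-router-a", "dist-router-b"],
    "dist-router-a": ["access-sw-1", "access-sw-2"],
    "access-sw-1": ["server-1", "server-2"],
}

def downstream_of(device: str) -> set[str]:
    """Everything reachable only through `device`, found by walking the model."""
    seen: set[str] = set()
    stack = list(topology.get(device, []))
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(topology.get(node, []))
    return seen

def root_cause(unreachable: set[str]) -> tuple[str, set[str]]:
    """The root cause is the unreachable device that sits downstream of no other."""
    for device in unreachable:
        if not any(device in downstream_of(other) for other in unreachable - {device}):
            return device, unreachable & downstream_of(device)
    raise ValueError("no single upstream root cause found")

cause, suppressed = root_cause({"dist-router-a", "access-sw-1", "server-1", "server-2"})
print(f"root cause: {cause}; suppressed downstream alerts: {sorted(suppressed)}")
```

Instead of four red alerts, the operator sees one: dist-router-a is the upstream root cause, and the three devices behind it are symptoms.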

3. Application-Driven Root Cause Analysis

Next up is Application-Driven Network Performance Management, which includes two monitoring technologies: network flow analysis and end-to-end application delivery analysis.

The first mechanism lets you see which applications are running on your network segments and how much bandwidth each is using. When users complain that an application service is slow, this view can tell you whether a bandwidth-monopolizing application is the root cause. Visualization includes stacked protocol charts, top hosts, top talkers and so on.
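A hedged sketch of this first mechanism: aggregating flow records, in the spirit of NetFlow or IPFIX, into per-application bandwidth totals and top talkers. The record layout and sample numbers are assumptions made for illustration.

```python
from collections import Counter

flows = [  # (src_host, application, bytes) -- hypothetical sample flow records
    ("10.0.0.5", "backup", 9_400_000_000),
    ("10.0.0.7", "crm", 120_000_000),
    ("10.0.0.5", "backup", 8_100_000_000),
    ("10.0.0.9", "video", 2_300_000_000),
]

bytes_per_app: Counter[str] = Counter()
bytes_per_host: Counter[str] = Counter()
for host, app, nbytes in flows:
    bytes_per_app[app] += nbytes
    bytes_per_host[host] += nbytes

# The backup job and host 10.0.0.5 dominate the segment's bandwidth.
print("top applications:", bytes_per_app.most_common(3))
print("top talkers:     ", bytes_per_host.most_common(3))
```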

The second mechanism in this layer shows you end-to-end application response timing: network round trip, retransmission, data transfer and server response. Plotted together in a stacked graph, these reveal whether the network, the server or the application itself is slowing response. To see the detailed root cause in the offending domain, you drill down into a lower layer (e.g., into a network flow analysis, device monitoring or application forensic tool).
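The triage logic behind that drill-down can be sketched as follows: find the timing component that dominates response time, then investigate the matching domain. The component names, the crude component-to-domain mapping and the sample numbers are all simplifying assumptions.

```python
DOMAIN = {  # crude mapping from timing component to the domain to investigate
    "network_round_trip": "network",
    "retransmission_delay": "network",
    "data_transfer": "network or application (large payloads)",
    "server_response": "server or application",
}

def dominant_component(timings_ms: dict[str, float]) -> str:
    """Return the timing component contributing the most latency."""
    return max(timings_ms, key=timings_ms.get)

sample = {  # hypothetical measurements for one transaction, in milliseconds
    "network_round_trip": 40.0,
    "retransmission_delay": 5.0,
    "data_transfer": 60.0,
    "server_response": 310.0,  # the server takes 310 ms to begin answering
}
worst = dominant_component(sample)
print(f"dominant component: {worst} -> drill down into the {DOMAIN[worst]} domain")
```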

4. Business Service Root Cause Analysis

The best practice is to unify the three layers into a single infrastructure management dashboard, so you can visually correlate all three levels of analytics in an efficient workflow. This is ideal for technical Level 2 Operations specialists and administrators.

But there's one more level at the top of the stack: Business Service Root Cause Analysis. This gives IT Operations Level 1 staff the greatest insight into how infrastructure is impacting business processes.

Examples of business processes include: Concept To Product, Product To Launch, Opportunity To Order, Order To Cash, Request To Service, Design To Build, Manufacturing To Distribution, Build To Order, Build To Stock, Requisition To Payables and so on.

At this layer of the stack, you monitor application and infrastructure components in groups, each group supporting a business process. This lets you monitor each business process as you would an IT infrastructure service, and a mechanism called service impact analysis rates the relative impact each component has on service performance. From there you can drill down into a lower layer of the stack to see the technical root cause details of the service impact (network outage, insufficient bandwidth, server memory degradation, packet loss, insufficient host resources for a virtual server, application logic error, etc.).
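Here is a hedged sketch of service impact analysis for one such grouping: components supporting a hypothetical Order To Cash service each carry a relative weight, and their current states roll up into ranked impact scores. The weights, component names and states are invented for the example.

```python
ORDER_TO_CASH = {  # component -> relative weight within this business service
    "payment-gateway": 0.4,
    "order-db": 0.3,
    "web-frontend": 0.2,
    "edge-router-1": 0.1,
}

SEVERITY = {"ok": 0.0, "degraded": 0.5, "down": 1.0}

def service_impact(states: dict[str, str]) -> list[tuple[str, float]]:
    """Rank components by their weighted impact on the business service."""
    impacts = [
        (component, weight * SEVERITY[states.get(component, "ok")])
        for component, weight in ORDER_TO_CASH.items()
    ]
    return sorted(impacts, key=lambda pair: pair[1], reverse=True)

ranked = service_impact({"order-db": "down", "edge-router-1": "degraded"})
for component, impact in ranked:
    print(f"{component:16s} impact={impact:.2f}")
# order-db tops the list, so that is where Level 1 staff drill down first.
```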

Once you have a clear understanding of this architecture, and a way to unify the information into a smooth workflow for triage, you can put the human processes in place to realize its business value.

ABOUT David Hayward

David Hayward is Senior Principal Manager, Solutions Marketing at CA Technologies. Hayward specializes in integrated network, systems and application performance management, and his research, writing and speaking engagements focus on IT operations maturity challenges, best practices and IT management software return on investment. He began his career in 1979 as an editor at the groundbreaking BYTE computer magazine and has since held senior marketing positions at tier-one and startup computer system, networking, data warehousing, VoIP and security solution vendors.
