Fault Domain Isolation Key to Avoiding Network Blame Game - Part 2

Jeff Brown

Start with Part 1 of this Blog

What’s the Hold Up?

Root cause analysis always reduces costs and shortens time-to-resolution when it is done in earnest, with confidence (and perhaps a bit of guilt) that the problem simply cannot lie elsewhere. RCA works best when the people working on the problem have the expertise to properly evaluate the cause and resolve it.

In Part 1 of this Blog, I explained how a packet-driven FDI process is an effective way to accelerate incident investigations and reduce the number of people involved. Further, it doesn’t take a lot of taps and equipment to isolate the major technology tiers, which is all that is needed to achieve FDI’s primary goal of getting only the right people involved in the investigation. So why do team-of-experts meetings still persist in so many major incident investigations?

The problem might be that some simply do not believe that complex incidents can be fully resolved with just a few taps and some network recorders. And you know what, they’re right! But that isn’t the goal of the FDI stage of the incident investigation process. The goal is fault isolation, and that can be done simply and reliably. All you need is the underlying packets and a process to analyze them.

Divide and Conquer

The primary or first-layer FDI process isolates the incident to a single technology tier as defined by the organization’s internal structure and outsourcing arrangement.

Primary FDI is best achieved by:

1. Using network recording tools to monitor and store the network traffic occurring between technology tiers, and

2. Applying application transaction analysis to perform fault isolation.

Packet storage (rather than just averages or summaries) is key to enabling the back-in-time analysis upon which efficient FDI depends.
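
To make step 2 concrete, here is a minimal sketch of that kind of application transaction analysis, assuming the stored packets have already been decoded into per-transaction timing records. The field names, thresholds, and attribution rule are illustrative assumptions, not any particular product's output.

```python
from dataclasses import dataclass
from statistics import median

# Hypothetical transaction records reconstructed from packets stored at one
# tap point between two technology tiers. Field names are illustrative.
@dataclass
class Transaction:
    request_sent: float    # timestamp of the request leaving the client-side tier (s)
    first_byte: float      # timestamp of the first byte of the response (s)
    last_byte: float       # timestamp of the last byte of the response (s)
    net_round_trip: float  # TCP-level round trip measured at the tap (s)

def primary_fdi_verdict(transactions, slo_seconds=2.0):
    """Attribute SLO-violating transactions to the dominant delay component."""
    slow = [t for t in transactions if t.last_byte - t.request_sent > slo_seconds]
    if not slow:
        return "no SLO violations observed at this tap point"
    components = {
        "tiers behind this tap (server/application side)":
            median(t.first_byte - t.request_sent for t in slow),
        "response delivery (payload transfer back to the client)":
            median(t.last_byte - t.first_byte for t in slow),
        "network path at this boundary":
            median(t.net_round_trip for t in slow),
    }
    return "fault domain: " + max(components, key=components.get)
```

Because the verdict is computed from stored packets, the same analysis can be rerun back in time against earlier traffic to show whether the fault was already present before the incident was reported.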

As you’ve probably guessed, FDI is a divide and conquer process that can be deployed in layers. FDI can also be used within each tier to further isolate the problem until highly efficient RCA can be done. This can be called intra-tier FDI, or perhaps secondary FDI.

Not surprisingly, network incident investigations are particularly amenable to a secondary FDI workflow, and once again, this is best achieved by monitoring and storing the actual packet flows between key network components for efficient back-in-time analysis.

It is valid to ask where the network tap points and network recording tools should be deployed when intra-network FDI is the goal. The main difference between primary FDI and intra-network FDI is that the location of the observation points is less an organizational issue, and more about physical location, technology, staff expertise, and of course, level of outsourcing and external suppliers. But the FDI process is similar: use packet-based analysis to provide irrefutable evidence as to which technology or service provider is at fault, and which are not.
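
As a rough illustration of that intra-network workflow, the sketch below compares the same flows captured on the ingress and egress tap points of a single network component and renders a per-hop verdict. The record format (a flow key plus TCP sequence number mapped to a capture timestamp) and the thresholds are assumptions made for this example, not a specific recorder's export format.

```python
# Hypothetical secondary-FDI check across one network component (for example,
# a firewall or load balancer) captured on both sides via taps.
def per_hop_verdict(ingress, egress, max_latency_ms=5.0, max_loss_pct=0.1):
    """ingress/egress: dicts mapping (flow_key, tcp_seq) -> capture timestamp (s)."""
    matched = ingress.keys() & egress.keys()
    if not matched:
        return "no common traffic seen on both sides; check tap placement"
    lost_pct = 100.0 * (len(ingress) - len(matched)) / len(ingress)
    worst_ms = max((egress[k] - ingress[k]) * 1000.0 for k in matched)
    if lost_pct > max_loss_pct:
        return f"fault domain: this device (dropping {lost_pct:.2f}% of packets)"
    if worst_ms > max_latency_ms:
        return f"fault domain: this device (transit delay up to {worst_ms:.1f} ms)"
    return "this device is exonerated; move the FDI boundary to the next hop"
```

Repeating this check hop by hop walks the fault domain down to a single device or link, at which point the relevant specialist can take over for RCA.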

Always-On or Always-Available?

You do not want to wait for a major incident to occur before you start deploying the tap points and monitoring tools needed for FDI; that would defeat its purpose. So it seems pretty clear that the tap points and network recording tools needed for primary or first-level FDI should be deployed and running all the time. Those are your always-on appliances.

But what about secondary or intra-technology FDI? What about remote sites, regional data centers, and non-critical applications? You can’t tap everywhere, nor can you store everything.

Fortunately, many network recording tools have been built to satisfy both needs: the always-on recording required between primary technology tiers, and “always-available” recording connected via Network Packet Brokers to a plethora of secondary tap points. Always-available appliances do not necessarily give you long-term back-in-time visibility, but they can be quickly configured to begin monitoring where needed, on demand, tuned to the specific visibility needs of the incident investigation underway.
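
For illustration only, an on-demand request to such an always-available appliance might look like the sketch below; the job fields and the commented-out submit call are hypothetical placeholders, since the real interface depends on the recorder and packet broker in use.

```python
from dataclasses import dataclass

# Hypothetical on-demand capture job for an "always-available" recorder that is
# fed secondary tap points through a Network Packet Broker. Not a real product API.
@dataclass
class CaptureJob:
    description: str
    broker_ports: list            # which secondary tap points to aggregate
    bpf_filter: str               # narrow the capture to the traffic under suspicion
    duration_minutes: int = 60    # short-lived: just long enough for the incident
    retain_gigabytes: int = 200   # bounded storage, unlike the always-on recorders

job = CaptureJob(
    description="Incident 4711: slow checkout from branch offices",
    broker_ports=["wan-edge-3", "dc2-dist-1"],
    bpf_filter="tcp port 443 and net 10.42.0.0/16",
)
# submit(job)  # hand off to whatever management interface your recorder/broker exposes
```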

How Simple Is It?

So, is FDI truly as simple as we’ve described? Well, yes and no. Obviously there are plenty of unusual, complex, and just plain bizarre problems that can appear in a system as complex and dynamic as a modern organization’s networked business application infrastructure. And these types of problems will always require deep investigation, and the skills and knowledge of specialists and experts to resolve. But that doesn’t render FDI irrelevant or ineffective for these complex issues. Indeed it makes the need for a rigorous, repeatable, data-driven FDI process all the more important. Put another way, for complex problems why wouldn’t you use a proven divide and conquer approach like FDI?

Jeff Brown is Global Director of Training, NVP at Emulex.
