
Battling Network Zombies This Halloween?

Megan Assarrane

On Halloween, there's no shortage of horror movies to scare and entertain you. Among the usual cast of creepy characters, zombies are among the most popular underdogs. They're (often) embarrassingly slow and brainless. They have terrible personal hygiene. They can't operate machinery of any kind, they can't drive and they don't know how to use a computer or a smartphone.

Speaking of technology, network zombies, on the other hand, are an all too real menace for the modern-day IT administrator. They are smarter than the average zombie, unpredictable because they appear without warning, and dangerous because they cause downtime and lost productivity. Without the right approach, they are nearly impossible to locate and kill.

Network Zombies Are Real

The process required to detect and eliminate network zombies is far more challenging than the swift headshot that eradicates their human counterparts. Network zombies are much harder to track down and kill because they often appear, wreak havoc and disappear. There's no trail of abandoned vehicles and half-eaten bodies to follow.

The only trace evidence is captured in event logs that are often buried in large volumes of hard-to-connect data. The root cause can be hidden almost anywhere because most business applications are complex entities that interact with multiple resources, such as databases, web servers, directory services and the network itself. That complexity forces the administrator through a slow, labor-intensive investigative process that can delay other daily tasks and projects.

Without a clear view of the zombie, the system administrator is forced to review event logs from every part of the application environment, analyzing long lists of events in multiple logs item by item to find an outstanding event, error condition, or combination of conditions that correlate to the timeframe in which users began to complain. The process can take many hours, if not weeks.
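The manual correlation described above amounts to filtering many logs down to the events that fall inside the complaint window. A minimal sketch of that step follows; the log records, field names and time window here are illustrative assumptions, not the format of any particular product.

```python
from datetime import datetime, timedelta

# Hypothetical event records pulled from several logs; in practice these
# would be parsed from Windows Event Logs, syslog, application logs, etc.
events = [
    {"source": "web", "time": datetime(2015, 10, 31, 9, 2), "level": "ERROR", "msg": "upstream timeout"},
    {"source": "db",  "time": datetime(2015, 10, 31, 9, 1), "level": "WARN",  "msg": "lock wait exceeded"},
    {"source": "app", "time": datetime(2015, 10, 30, 14, 0), "level": "INFO", "msg": "nightly job done"},
]

def correlate(events, complaint_time, window_minutes=15):
    """Return WARN/ERROR events inside the window before users complained."""
    start = complaint_time - timedelta(minutes=window_minutes)
    return sorted(
        (e for e in events
         if start <= e["time"] <= complaint_time
         and e["level"] in ("WARN", "ERROR")),
        key=lambda e: e["time"],
    )

suspects = correlate(events, datetime(2015, 10, 31, 9, 5))
for e in suspects:
    print(e["source"], e["level"], e["msg"])
```

Even this toy version shows why the manual process is slow: the administrator must first decide which logs to pull, guess the right window, and judge which of the surviving events actually matter.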

Hunting for Zombies Doesn't Have to Be Hard – Using the Tools You Have

The greatest challenge in hunting zombies is where to begin. Is the zombie in an application, database or web server? Or is it a network issue? Without a valid starting point, there is no way to select the right diagnostic path and conduct an efficient hunt.

Effective Application Performance Monitoring (APM) can overcome this impasse by linking all application dependencies. Most organizations have a tool already in place to do this, but it is often underused or even overlooked as a tool for battling zombies. If used well, targeted, real-time monitoring puts administrators on the right diagnostic path, while clear graphic displays make it easy to follow that path to find the zombies causing the problems.

APM uses application profiles to locate and identify zombies. Application profiles define how an application is monitored and what actions should be taken when an application or one of its components fails. The most useful APM tools also define complex relationships and dependencies – from simple n-tier applications to large server farms to complete IT services.

In a SQL server farm, an application profile can be created to monitor each SQL server instance for zombies. Individual profiles can then be embedded into a higher-level profile to monitor the entire SQL server farm. Once the server farm profile is created, it can be embedded into an even higher-level profile that encompasses the entire service it is part of, such as CRM.
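The nesting described above can be sketched as a small data structure. This is a hypothetical illustration of the idea, assuming a simple "worst status wins" rollup; the profile names and status values are made up, not a real APM product's API.

```python
from dataclasses import dataclass, field

STATUS_ORDER = {"up": 0, "degraded": 1, "down": 2}

@dataclass
class Profile:
    name: str
    status: str = "up"              # leaf status set by monitoring checks
    children: list = field(default_factory=list)

    def rollup(self) -> str:
        """A profile is only as healthy as its worst component."""
        statuses = [self.status] + [c.rollup() for c in self.children]
        return max(statuses, key=lambda s: STATUS_ORDER[s])

# Instance profiles embedded in a farm profile, embedded in a service profile
sql1 = Profile("SQL-Instance-1")
sql2 = Profile("SQL-Instance-2", status="down")
farm = Profile("SQL-Farm", children=[sql1, sql2])
crm = Profile("CRM-Service", children=[farm])

print(crm.rollup())  # the failed instance surfaces at the service level
```

The point of the hierarchy is exactly this rollup: a failure anywhere in the tree is visible at the service level, and the administrator can drill down from CRM-Service to SQL-Farm to the failing instance.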

Replicating this process for each IT service component creates a comprehensive service profile to hunt and trap network zombies. The profile ensures the administrator can view the status of the entire service or drill down to any component within that service, to a specific instance or component of an application.

The resulting comprehensive service monitoring profile is the foundation for fast, accurate zombie eradication. Completing a service profile generally takes less than two hours, but after that small investment of time, zombie hunting collapses from hours, days or even weeks into a straightforward process that takes just minutes. Multiply that by the number of zombie complaints an administrator receives, and the time saved is considerable.

Expanding APM capabilities to the network can also help an administrator easily identify the root cause of a network zombie attack.

Greater Protection Against the Zombie Menace

Once zombies have been caught, system administrators can use APM to create multi-step zombie traps to address future invasions more quickly. Traps can include event logging, real-time alerts and PowerShell self-healing scripts such as reboot and service restart. Zombie trap policies can be assigned at the service, application and component level. Dependency-aware application profiles enable coordinated multi-tier zombie traps to ensure optimal performance of complex applications and IT services.
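A multi-step trap of the kind described above might look like the following sketch: on failure, log the event, raise an alert, then attempt a self-healing restart. The function names are illustrative assumptions; the PowerShell Restart-Service command is shown only as the kind of self-healing script a Windows shop might wire in, and dry_run keeps this sketch from touching the host.

```python
import subprocess

def restart_service(service: str, dry_run: bool = True) -> str:
    """Self-healing step. A real trap might invoke PowerShell's
    Restart-Service; dry_run=True only builds the command here."""
    cmd = f"Restart-Service -Name {service}"
    if not dry_run:
        subprocess.run(["powershell", "-Command", cmd], check=True)
    return cmd

def on_component_failure(component: str, service: str) -> list:
    """Run the trap's steps in order and return what was done."""
    return [
        ("log",   f"component {component} failed"),               # step 1
        ("alert", f"{component} down; restarting {service}"),     # step 2
        ("heal",  restart_service(service)),                      # step 3
    ]

for step, detail in on_component_failure("SQL-Instance-2", "MSSQLSERVER"):
    print(step, "->", detail)
```

Because policies attach at the service, application and component level, a real implementation would select the trap to fire based on where in the profile hierarchy the failure surfaced.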

An APM tool can streamline the hunt for zombies, whether they reside in a device or in the network itself, collapsing many hours of exhausting work into a few highly productive minutes.

Now there's a weapon people confronted with shuffling zombies in a horror film might wish they had at their disposal.

Megan Assarrane is Product Marketing Manager at Ipswitch.

