
Remote IT Network Monitoring and Automated Management Reduce Troubleshooting by 75%

Pete Goldin
APMdigest

Remote monitoring and automated management reduce the time to troubleshoot faulty networking devices by 75%, according to Dimension Data’s annual Network Barometer Report. Devices managed this way also take 32% less time to repair than those that are not. Furthermore, this year’s research again shows a strong correlation between device failures and lifecycle stage.
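To illustrate the kind of automated management the report credits, here is a minimal sketch of a remote monitoring pass that polls device health and opens an incident ticket automatically when a device stops responding. All names and behavior here (`poll_device`, `open_ticket`, the sample inventory) are hypothetical stand-ins, not details from the report.

```python
# Hypothetical inventory of managed network devices.
DEVICES = ["core-sw-01", "edge-rtr-02", "access-sw-07"]

def poll_device(name):
    """Stand-in for an SNMP/API health check; returns True if healthy."""
    return name != "edge-rtr-02"  # simulate one unreachable device

def open_ticket(name):
    """Stand-in for automated incident creation in an ITSM tool."""
    return f"TICKET: {name} unreachable, auto-diagnostics attached"

def monitoring_pass(devices):
    # Automation shortens troubleshooting by attaching diagnostic
    # context at detection time, rather than waiting for a human
    # to notice the fault and gather that context manually.
    return [open_ticket(d) for d in devices if not poll_device(d)]

print(monitoring_pass(DEVICES))
```

In practice the polling step would be an SNMP query or vendor API call on a schedule; the point of the sketch is only that detection and ticket creation happen without human intervention.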

According to the report, networks have continued to age for the fifth consecutive year, making 53% of the over 70,000 technology devices that were analyzed either aging or obsolete – up by two percentage points since last year.

Within that total, the percentage of obsolete devices dropped slightly – to 9% from last year’s 11% – while the percentage of aging devices rose by four points. The share of current devices is at its lowest in three years.

Andre van Schalkwyk, Consulting Practice Manager for Dimension Data’s Networking Business Unit, explains: “During the seven-year history of the Network Barometer Report, organizations’ average tolerance for obsolete devices in their networks has been around 10%. Rarely do organizations allow this to rise beyond 11% before they refresh the relevant devices. The conventional assumption was that an overall technology refresh was imminent, but our data shows that organizations are refreshing mostly obsolete devices, and are clearly willing to sweat their aging devices for longer than expected. Organizations therefore focus their refresh initiatives mostly on technology that has reached critical lifecycle stages, where vendor support is no longer available.”

Based on its experience in evaluating organizations’ operational support maturity, Dimension Data says that on a five-level maturity scale, some 90% of organizations are still at the first or second level. These levels are characterized by a lack of standard processes, ad hoc troubleshooting tools, and ambiguous roles and responsibilities for IT staff, resulting in extended network downtime and increased operational costs. This also explains why 30% of all service incidents are still attributed to human error.

Van Schalkwyk points out that mature monitoring, support, and maintenance processes allow a higher tolerance for aging devices, making it viable to run an older network overall. “That’s provided there’s sufficient visibility of the lifecycle status of all devices, an understanding of their risk profile depending on their criticality to the infrastructure as a whole, and the proactive management of that risk. Overall, we’re seeing a growing need for more effective day-to-day network management across all corporate networks.”
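The risk profile van Schalkwyk describes – lifecycle status weighed against a device’s criticality to the infrastructure – can be sketched as a simple scoring rule. The stage weights, criticality scale, and refresh threshold below are illustrative assumptions, not figures from the report.

```python
# Illustrative lifecycle weights: later stages carry more failure risk.
LIFECYCLE_RISK = {"current": 1, "aging": 2, "obsolete": 4}

def risk_score(stage, criticality):
    """Combine lifecycle stage with criticality (1 = peripheral,
    5 = core) into a single risk score."""
    return LIFECYCLE_RISK[stage] * criticality

def refresh_candidates(devices, threshold=10):
    # Refresh only devices whose combined risk crosses the threshold;
    # low-criticality aging devices can be "sweated" safely, which
    # matches the refresh behavior the report observes.
    return sorted(
        (d for d in devices
         if risk_score(d["stage"], d["criticality"]) >= threshold),
        key=lambda d: -risk_score(d["stage"], d["criticality"]),
    )

fleet = [
    {"name": "core-sw-01", "stage": "obsolete", "criticality": 5},
    {"name": "lab-sw-09",  "stage": "aging",    "criticality": 1},
    {"name": "edge-rtr-02", "stage": "aging",   "criticality": 5},
]
print([d["name"] for d in refresh_candidates(fleet)])
```

Under this rule the obsolete core switch and the business-critical aging router are flagged for refresh, while the low-criticality aging lab switch is left in place – the proactive, risk-driven prioritization the quote describes.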

Dimension Data’s Network Barometer Report analyzes, compares, and interprets the readiness of today’s networks worldwide to accelerate business. The 2015 Report was compiled from technology data gathered from over 350 Technology Lifecycle Management Assessments (up from 288 last year), covering 70,000 technology devices in organizations of all sizes and industry sectors across 28 countries. It also contains data on over 175,000 service incidents logged for supported client networks at Dimension Data’s Global Contact Centres. The result is a multidimensional view of today’s networks.

Pete Goldin is Editor and Publisher of APMdigest

