Not All Networks Are Built for the Edge

Julio Petrovitch
NetAlly

From smart factories and autonomous vehicles to real-time analytics and intelligent building systems, the demand for instant, local data processing is exploding. To meet these needs, organizations are leaning into edge computing. The promise? Faster performance, reduced latency and less strain on centralized infrastructure.

But there's a catch: Not every network is ready to support edge deployments. The shift from cloud to edge isn't a silver bullet; it comes with its own set of performance, connectivity and security challenges that can derail return on investment if IT teams aren't prepared. Before rushing into edge, it's worth asking: Is your network actually built for it?

Recent research from IDC shows that global spending on edge computing is expected to reach around $261 billion in 2025. Yet despite its advantages, edge computing introduces a new layer of complexity. Moving workloads closer to the source doesn't inherently solve latency. Local bottlenecks like Wi-Fi congestion, inefficient routing, and oversubscribed nodes can still drag down performance. For example, a retail store using edge-based video analytics might run into delays, not because the analytics system is slow, but because the Wi-Fi is overloaded. With numerous devices fighting for bandwidth or a single access point stretched too thin, performance can take a hit. Measuring round-trip latency at the point of deployment is essential to validate that the edge network is delivering on its promise.
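As a rough illustration, a few lines of Python can sample TCP connect round-trip times from the deployment location itself. The endpoint below is a hypothetical placeholder, and handshake time is only an approximation of network RTT:

# Minimal sketch: sample round-trip latency from the deployment location.
# The host and port are hypothetical; substitute the real edge endpoint.
import socket
import statistics
import time

HOST, PORT = "analytics.example.local", 443  # hypothetical endpoint
SAMPLES = 20

rtts = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    try:
        with socket.create_connection((HOST, PORT), timeout=2):
            pass  # completing the TCP handshake approximates one RTT
    except OSError:
        continue  # a production test would count and report failures
    rtts.append((time.perf_counter() - start) * 1000)  # milliseconds
    time.sleep(0.25)

if rtts:
    print(f"min {min(rtts):.1f} / median {statistics.median(rtts):.1f} / "
          f"max {max(rtts):.1f} ms over {len(rtts)} samples")

Run from the actual deployment spot, the spread between median and max is often more telling than the average: congestion shows up as tail latency long before it shows up in a mean.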

Coverage gaps and internal bandwidth limitations also pose risks. Many edge and IoT devices are deployed in low-signal environments (ceilings, walls, utility spaces) where connectivity can be unreliable without precise, location-based testing. Imagine a building automation system where sensors are installed behind ceiling tiles or inside utility closets. On paper, the network coverage might look sufficient, but in practice those materials can block or degrade the signal. Without testing connectivity at the exact device location, these sensors could drop offline or send delayed data, undermining the reliability of the entire system.

Meanwhile, increased east-west traffic from localized processing can strain internal links that weren't designed for high-volume lateral communication.

The surge in east-west traffic at the edge doesn't just strain network capacity; it also complicates security monitoring. Traditional perimeter defenses and cloud-based firewalls may not see lateral communications between devices. Without continuous visibility and anomaly detection, malicious activity can blend in with normal machine-to-machine chatter.
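One way to get basic visibility is to baseline each device's lateral traffic and flag large deviations. The sketch below assumes per-device byte counts arrive at a regular polling interval; in practice those samples would come from NetFlow/IPFIX or switch telemetry rather than the hypothetical values shown:

# Minimal sketch: flag devices whose east-west byte counts deviate
# sharply from their own rolling baseline. Inputs are hypothetical.
from collections import defaultdict, deque
import statistics

WINDOW = 48          # baseline samples kept per device
MIN_HISTORY = 12     # wait for some history before judging
THRESHOLD_SIGMA = 4  # deviations beyond this many std devs are flagged

baselines = defaultdict(lambda: deque(maxlen=WINDOW))

def check_sample(device_id, lateral_bytes):
    history = baselines[device_id]
    anomalous = False
    if len(history) >= MIN_HISTORY:
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0  # avoid zero division
        anomalous = abs(lateral_bytes - mean) > THRESHOLD_SIGMA * stdev
    history.append(lateral_bytes)
    return anomalous

# Usage: feed one byte-count sample per device per polling interval.
if check_sample("plc-07", 9_800_000):
    print("plc-07: lateral traffic outside its baseline; investigate")

A per-device baseline matters here because machine-to-machine traffic is highly regular; a fixed global threshold would either drown in false positives from chatty devices or miss anomalies on quiet ones.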

Beyond performance and reliability, security must be front and center. Every new sensor, kiosk, or edge server adds another potential entry point for attackers. Unlike data centers and company HQs with hardened perimeters, edge devices are often deployed in uncontrolled environments like retail floors, factory lines, or remote offices, where they are more exposed to physical tampering. Centralized monitoring technologies like Endpoint Detection and Response are less effective at the network edge, so the risk of rogue access points and unsecured ports is higher, and malicious activity or unusual network behavior is harder to detect. Finally, edge devices themselves often run outdated operating systems and stripped-down software with known security flaws.

Maximizing the value of edge computing starts with proactive planning and rigorous validation. That begins by measuring latency before and after deployment — not just at the network level, but for each specific application and service. Round-trip testing and packet analysis can confirm whether devices are reliably connecting with intended endpoints and performing within acceptable thresholds.
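Extending the earlier latency sample, a sketch like the following turns per-service thresholds into a simple pass/fail check after deployment. The service names, hosts, and limits are all hypothetical examples:

# Minimal sketch: validate per-service round-trip thresholds.
# Every entry below is a hypothetical placeholder.
import socket, time

SERVICES = {  # service -> (host, port, max acceptable round trip in ms)
    "mqtt-broker":    ("broker.example.local", 8883, 20),
    "video-ingest":   ("ingest.example.local", 443, 50),
    "cloud-backhaul": ("api.example.com", 443, 150),
}

def connect_rtt_ms(host, port, timeout=2.0):
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        return (time.perf_counter() - start) * 1000

for name, (host, port, limit) in SERVICES.items():
    try:
        rtt = connect_rtt_ms(host, port)
        status = "PASS" if rtt <= limit else "FAIL"
        print(f"{name}: {rtt:.1f} ms (limit {limit} ms) {status}")
    except OSError as err:
        print(f"{name}: unreachable ({err})")

Running the same check before and after cutover gives a concrete, per-application answer to whether the move to edge actually improved anything.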

When it comes to wireless coverage, general proximity is not enough; signal quality must be assessed at the physical device location. Research from 2024 confirms that signal strength can deteriorate dramatically with just a few meters of distance or light obstruction. The study, which measured Wi-Fi signal quality from 1 meter to 15 meters from a router, found a significant drop in signal strength and data speed as distance increased, with performance further degraded by walls, furniture, and other obstructions, as would be expected. For instance, imagine a smart sensor mounted in a warehouse ceiling. On a map, it's well within range of the nearest access point, but thick steel rafters and high shelving panels obstruct the Wi-Fi path. At that exact location, signal strength can fall below usable thresholds, causing intermittent dropouts or delayed transmissions that wouldn't be caught unless measured at the sensor itself.
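On a Linux-based survey device, something as simple as the sketch below can sample signal strength at the exact mounting point. It assumes the standard iw tool, an interface named wlan0, and a commonly cited -67 dBm planning floor, any of which may differ in your environment:

# Minimal sketch: sample Wi-Fi signal strength at the device location
# using Linux's iw tool. Interface name and threshold are assumptions.
import re, subprocess, time

INTERFACE = "wlan0"   # assumed interface name; adjust for your hardware
USABLE_DBM = -67      # commonly cited planning floor for reliable links

def signal_dbm(iface):
    out = subprocess.run(["iw", "dev", iface, "link"],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"signal:\s*(-?\d+)\s*dBm", out)
    return int(match.group(1)) if match else None

for _ in range(10):  # sample while holding the tester at the mount point
    dbm = signal_dbm(INTERFACE)
    if dbm is None:
        print("not associated")
    else:
        print(f"{dbm} dBm {'OK' if dbm >= USABLE_DBM else 'BELOW FLOOR'}")
    time.sleep(1)

The point is less the tooling than the method: the reading has to be taken where the device will actually live, ceiling rafters and all, not from the nearest convenient desk.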

It's also important that signal quality and load testing simulate real-world conditions to ensure infrastructure can handle demand as deployments scale. With east-west (internal, device-to-device) traffic increasing, IT teams should test throughput across switch-to-switch and access-layer connections. At the same time, north-south (external, device-to-cloud) traffic should be validated to confirm critical applications can reliably reach data center and cloud services. Together, these tests ensure both internal and external paths can support elevated loads without introducing bottlenecks.
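As one way to exercise both paths, the sketch below drives the widely used iperf3 tool against an internal and an external target and reports received throughput. The hostnames are hypothetical, and each target must be running an iperf3 server:

# Minimal sketch: measure east-west and north-south throughput with
# iperf3. Hostnames are hypothetical; servers must be running there.
import json, subprocess

TARGETS = {
    "east-west (access switch)": "iperf.edge.example.local",
    "north-south (data center)": "iperf.dc.example.com",
}

for label, host in TARGETS.items():
    result = subprocess.run(["iperf3", "-c", host, "-t", "10", "-J"],
                            capture_output=True, text=True)
    if result.returncode != 0:
        print(f"{label}: test failed ({result.stderr.strip()})")
        continue
    report = json.loads(result.stdout)
    mbps = report["end"]["sum_received"]["bits_per_second"] / 1e6
    print(f"{label}: {mbps:.0f} Mbit/s received")

Comparing the two numbers side by side makes it obvious whether the constraint is the internal fabric or the backhaul, which determines where capacity upgrades should go first.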

Edge computing can unlock significant performance gains, reduce latency, and shift compute load from centralized infrastructure — but only when the underlying network is both performance-ready and secure. Success depends on more than shifting workloads closer to devices. It requires deliberate testing, full visibility, and cross-functional coordination. By validating latency, assessing wireless coverage, stress-testing both east-west and north-south links, and securing every endpoint, IT leaders can avoid common pitfalls and deliver the reliability, responsiveness, and protection their users expect.

Julio Petrovitch is a Product Manager at NetAlly.

