Stop Using the Network as an Excuse for Poor App Performance

Dave Berg

Recent data on the world's fastest 4G LTE network speeds places the USA 8th. Some will use this ranking as an excuse for why their mobile applications perform poorly and fail to meet user expectations. For the majority, however, this information confirms what they have always known: the network never was, and never will be, the answer to better performance.

If the network itself can't be counted on to guarantee better app performance, what can a developer or tester do to ensure an app will perform once it is released across less-than-stellar mobile networks? Be realistic with expectations, and virtualize networks and services so you can test under the conditions your end users actually experience.

First, understand that compromises will have to be made. With each new device or faster network option, users will expect to see a corresponding increase in application speed and content. Operations, marketing, and executives will also expect apps to be feature rich with big logos, plenty of functions, and connections with myriad third-party services.

Not all of this is possible. Rich features quickly erode performance. Put yourself on a "performance budget". Even with management and users clamoring for every rich feature made possible by faster network speeds, choose only those that are most business critical or that customers demand.
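
The performance-budget idea can be sketched in code. This is a minimal illustration: the feature names, payload costs, priorities, and the 600 KB budget are all hypothetical values, not figures from any real application.

```python
BUDGET_KB = 600  # assumed total payload budget per screen

# Candidate features with estimated payload cost and business priority
# (1 = most business critical). All values are illustrative.
features = [
    {"name": "product catalog", "cost_kb": 180, "priority": 1},
    {"name": "checkout",        "cost_kb": 120, "priority": 1},
    {"name": "hero video",      "cost_kb": 400, "priority": 3},
    {"name": "social feed",     "cost_kb": 250, "priority": 2},
]

def select_features(features, budget_kb):
    """Greedily keep the most business-critical features that fit the budget."""
    chosen, spent = [], 0
    for f in sorted(features, key=lambda f: f["priority"]):
        if spent + f["cost_kb"] <= budget_kb:
            chosen.append(f["name"])
            spent += f["cost_kb"]
    return chosen, spent

kept, spent = select_features(features, BUDGET_KB)
print(kept, spent)  # the low-priority hero video does not make the cut
```

Not every feature makes the cut, but the budget forces the trade-off to be made explicitly rather than discovered in production.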

Not every feature will make the cut. Those that do will work and perform well. This is preferable to a feature-rich, poor-performing app that leaves users frustrated with your company.

Testing Apps Over Realistic Network Conditions

For your apps to perform at their peak over the network, you have to test them over realistic network conditions. Testing your features and services to know how different network conditions and geographies impact performance can help you manage your performance budget.

You can do this by capturing and virtualizing production network conditions and using them in your test environment. This provides test results that are more accurate and predictive of what your performance will be in the real world.
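
One way to picture replaying captured conditions is the sketch below. The per-geography latency and jitter values are hypothetical stand-ins; in practice they would come from recordings of real production traffic, and the emulation would typically happen at the network layer rather than with a sleep.

```python
import random
import time

# Conditions as they might be captured from production, per geography.
# These numbers are assumed for illustration only.
CAPTURED_PROFILES = {
    "us_lte": {"latency_ms": 70,  "jitter_ms": 20},
    "eu_3g":  {"latency_ms": 200, "jitter_ms": 80},
}

def with_network_profile(profile, call, seed=42):
    """Run `call` after injecting the profile's latency +/- jitter.

    Seeding the random generator keeps the injected delay repeatable
    across test runs."""
    rng = random.Random(seed)
    delay_ms = profile["latency_ms"] + rng.uniform(
        -profile["jitter_ms"], profile["jitter_ms"])
    time.sleep(max(delay_ms, 0) / 1000.0)
    return call()

# Example: time a trivial "request" under each captured profile
for name, profile in CAPTURED_PROFILES.items():
    start = time.monotonic()
    with_network_profile(profile, lambda: "ok")
    elapsed_ms = (time.monotonic() - start) * 1000
    print(f"{name}: ~{elapsed_ms:.0f} ms")
```

The same test run against different captured profiles immediately shows how geography changes the user's experience.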

Mobile network conditions fluctuate far more than broadband connections do, which makes app testing over a pristine Wi-Fi connection inaccurate, if not obsolete.

A Wi-Fi connected test experience does not represent how end users experience the app. However, if you can capture and virtualize the conditions end users do experience, your testing will reliably reflect how the app will perform in production.

Performance testing your application in an accurate test environment means accounting for application dependencies and the network conditions affecting them. This goes beyond native features. Third-party services are affected by variable network conditions. Worse, since you do not have direct control over them, they are often harder to account for.

Service virtualization testing will provide insight into how these services interact with your application. But without testing them over a virtualized network, you once again lose sight of how they will perform in the hands of real-world users.
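
The difference can be sketched as follows: a service stub alone answers instantly, hiding the network entirely, while the same stub behind an emulated link exposes the delay to the test. The payment-service API below is hypothetical, invented purely for illustration.

```python
import time

class VirtualPaymentService:
    """Stub standing in for a hypothetical third-party payment API."""

    def __init__(self, network_latency_ms=0):
        self.network_latency_ms = network_latency_ms

    def authorize(self, amount_cents):
        # Simulated network cost of reaching the third-party service
        time.sleep(self.network_latency_ms / 1000.0)
        if amount_cents <= 0:
            return {"status": "declined"}
        return {"status": "approved", "amount": amount_cents}

# Service virtualization alone: the network is invisible to the test
svc = VirtualPaymentService(network_latency_ms=0)
assert svc.authorize(500)["status"] == "approved"

# The same stub behind an emulated 300 ms mobile link: the network
# cost is now visible and can be asserted on
slow_svc = VirtualPaymentService(network_latency_ms=300)
start = time.monotonic()
slow_svc.authorize(500)
assert (time.monotonic() - start) >= 0.3
```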

Virtualizing real-world mobile network conditions also allows you to alter the parameters of the network (available bandwidth, latency, jitter, packet loss) to see how an app performs under varying conditions. These conditions can represent typical network scenarios or edge cases. Since the conditions are configurable and repeatable, unlike in-the-wild testing, issues can be fixed and retested under the exact same virtual network constraints.
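
A configurable, repeatable profile might look like the sketch below. The "typical LTE" and "edge case" numbers are illustrative defaults, not measurements; the point is that because the profile is seeded, a failing run can be replayed under exactly the same conditions.

```python
import random
from dataclasses import dataclass

@dataclass
class NetworkProfile:
    bandwidth_kbps: float
    latency_ms: float
    jitter_ms: float
    loss_rate: float  # probability a transfer must be retried
    seed: int = 0     # same seed -> same simulated conditions

    def transfer_ms(self, payload_kb: float) -> float:
        """Estimated wall-clock time to move payload_kb over this profile."""
        rng = random.Random(self.seed)
        attempts = 1
        while rng.random() < self.loss_rate:  # each loss forces a retry
            attempts += 1
        per_attempt = (self.latency_ms
                       + rng.uniform(-self.jitter_ms, self.jitter_ms)
                       + payload_kb * 8 / self.bandwidth_kbps * 1000)
        return attempts * per_attempt

# Illustrative profiles: a typical scenario and an edge case
typical_lte = NetworkProfile(bandwidth_kbps=12000, latency_ms=60,
                             jitter_ms=15, loss_rate=0.01)
edge_case   = NetworkProfile(bandwidth_kbps=500,   latency_ms=400,
                             jitter_ms=150, loss_rate=0.10)

print(f"typical LTE: ~{typical_lte.transfer_ms(300):.0f} ms")
print(f"edge case:   ~{edge_case.transfer_ms(300):.0f} ms")
```

Varying one parameter at a time against the same seeded profile isolates which network condition is actually hurting the app.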

This testing should be done at the earliest possible phase of development. Developers should have access to performance criteria when writing code. If they only have functional criteria, you will only know that "Action A = Outcome"; you won't know whether that transaction occurs within your stated SLAs.

Also, by the time the app passes to the system build and user acceptance levels, it is too late to fix many performance issues without a major reworking of the app. At that point you face a choice: release an app that doesn't perform, angering your user base and costing revenue and productivity, or start from scratch and lose time and money. Incorporating network-informed performance testing as early as possible can prevent this from happening.

This structured approach to performance management is important because once your app is released to the public or delivered to a client, you have little control over its performance environment. But, you can control how you incorporate performance throughout development and testing. If you have done that successfully, you won't have reason to blame the network, or anything else, for poor performance.

Dave Berg is VP of Product Strategy for Shunra Software.

Related Links:

www.shunra.com
