
Stop Using the Network as an Excuse for Poor App Performance

Dave Berg

Recent data on the world's fastest 4G LTE network speeds ranks the USA 8th. Some will use this as an excuse for why their mobile applications perform poorly and fail to meet user expectations. For most, however, this information confirms what they have always known: the network never was, and never will be, the answer to better performance.

If the network itself can't be counted on to guarantee app performance, what can a developer or tester do to ensure an app will perform once it is released across less-than-stellar mobile networks? Be realistic about expectations, and virtualize networks and services so you can test under the conditions your end users actually experience.

First, understand that compromises will have to be made. With each new device or faster network option, users will expect a corresponding increase in application speed and content. Operations, marketing, and executives will also expect apps to be feature-rich, with big logos, plenty of functions, and connections to myriad third-party services.

Not all of this is possible. Rich features quickly erode performance. Put yourself on a "performance budget": even with management and users clamoring for every rich feature made possible by faster network speeds, choose only those that are most business critical or that customers demand.

Not every feature will make the cut, but those that do will work and perform well. This is preferable to a feature-rich, poor-performing app that leaves users frustrated with your company.
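To make the idea concrete, here is a minimal sketch of how a performance budget might be encoded as an automated check. The metric names and thresholds are hypothetical illustrations, not figures from this article:

```python
# Hypothetical sketch: a "performance budget" expressed as hard limits
# that an automated check enforces. Metrics and thresholds are illustrative.

PERFORMANCE_BUDGET = {
    "total_payload_kb": 500,      # max bytes shipped per screen
    "http_requests": 30,          # max round trips per screen
    "time_to_interactive_s": 3.0, # max seconds on a reference 4G profile
}

def check_budget(measured: dict) -> list[str]:
    """Return a list of budget violations for a set of measured metrics."""
    violations = []
    for metric, limit in PERFORMANCE_BUDGET.items():
        value = measured.get(metric)
        if value is not None and value > limit:
            violations.append(f"{metric}: {value} exceeds budget of {limit}")
    return violations

if __name__ == "__main__":
    # Example measurements for a proposed feature build
    measured = {"total_payload_kb": 640, "http_requests": 24,
                "time_to_interactive_s": 3.4}
    for v in check_budget(measured):
        print("BUDGET VIOLATION:", v)
```

Wired into a build pipeline, a check like this makes the cost of each proposed feature visible before it ships.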

Testing Apps Over Realistic Network Conditions

For your apps to perform at their peak over the network, you have to test them under realistic network conditions. Testing your features and services to learn how different network conditions and geographies affect performance will help you manage your performance budget.

You can do this by capturing and virtualizing production network conditions and using them in your test environment. This provides test results that are more accurate and predictive of what your performance will be in the real world.

Mobile network conditions fluctuate far more than those of broadband connections. That volatility makes testing an app over a pristine Wi-Fi connection inaccurate, if not obsolete.

A Wi-Fi-connected test experience does not represent how end users experience the app. However, if you can capture and virtualize the conditions end users do experience, your testing will reliably reflect how the app will perform in production.

Performance testing your application in an accurate test environment means accounting for application dependencies and the network conditions affecting them. This goes beyond native features. Third-party services are affected by variable network conditions. Worse, since you do not have direct control over them, they are often harder to account for.

Service virtualization testing provides insight into how these services will interact with your application. But without also testing those services over a virtualized network, you once again lose sight of how they will perform in the hands of real-world users.
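One lightweight way to approximate this, sketched below, is to stand in for a third-party dependency with a local stub that injects response delay, so tests exercise the app against a slow service. This is a simplified illustration, not Shunra's product or a full record-and-replay virtualization; the endpoint, delay, and payload are hypothetical:

```python
# Minimal sketch of a "virtualized" third-party service: a local HTTP stub
# that returns a canned response after an injected delay, simulating a slow
# dependency. Endpoint path, delay, and payload are hypothetical.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

INJECTED_DELAY_S = 1.5  # stand-in for a slow third-party API

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(INJECTED_DELAY_S)  # simulate service/network latency
        body = json.dumps({"status": "ok", "quote": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Point the app under test at http://localhost:8080 instead of the
    # real service, then observe how the app copes with the delay.
    HTTPServer(("localhost", 8080), StubHandler).serve_forever()
```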

Virtualizing real-world mobile network conditions also allows you to alter the parameters of the network (available bandwidth, latency, jitter, packet loss) to see how an app performs under varying conditions. These conditions can represent typical network scenarios or edge cases. And because the conditions are configurable and repeatable, unlike in-the-wild testing, issues can be fixed and retested under the exact same virtual network constraints.
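On a Linux test host, for example, those same parameters can be emulated with the kernel's netem queuing discipline. The sketch below is one possible stand-in for commercial network-virtualization tooling, not the tooling discussed in this article; the interface name and values are assumptions about your environment, and root privileges are required:

```python
# Sketch: apply a repeatable network profile on a Linux test host using the
# kernel's netem qdisc via `tc` (requires root). Interface and values are
# hypothetical; commercial tools offer richer, captured profiles.
import subprocess

IFACE = "eth0"  # assumed test interface

def apply_profile(delay_ms=120, jitter_ms=30, loss_pct=1.0, rate_kbit=1500):
    """Shape IFACE to approximate a congested 4G link: latency plus jitter,
    random packet loss, and a bandwidth cap."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", IFACE, "root", "netem",
         "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
         "loss", f"{loss_pct}%",
         "rate", f"{rate_kbit}kbit"],
        check=True)

def clear_profile():
    """Remove the emulated conditions, restoring the default qdisc."""
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)

if __name__ == "__main__":
    apply_profile()   # apply the same profile before every test pass
    try:
        pass          # ... run the app's test suite here ...
    finally:
        clear_profile()  # always restore the interface
```

Because the profile is just data, the identical conditions can be reapplied for every retest, which is what makes fix-and-verify cycles meaningful.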

This testing should be done at the earliest possible phase of development. Developers should have access to performance criteria when writing code. If they only have functional criteria, you will only know that "Action A = Outcome"; you won't know whether that transaction completes within your stated SLAs.
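As an illustration of the difference (a sketch with a hypothetical transaction and SLA value, not code from this article), a functional check confirms the outcome, while a performance assertion also times it:

```python
# Sketch: a functional check alone confirms "Action A = Outcome"; adding a
# timing assertion also verifies the transaction meets a stated SLA.
# The function under test and the SLA threshold are hypothetical.
import time

CHECKOUT_SLA_S = 2.0  # stated SLA for the checkout transaction

def checkout(cart):   # stand-in for the real transaction under test
    time.sleep(0.3)
    return {"status": "confirmed", "items": len(cart)}

def test_checkout_meets_sla():
    start = time.perf_counter()
    result = checkout(["sku-1", "sku-2"])
    elapsed = time.perf_counter() - start

    assert result["status"] == "confirmed"   # functional criterion
    assert elapsed <= CHECKOUT_SLA_S, (      # performance criterion
        f"checkout took {elapsed:.2f}s, SLA is {CHECKOUT_SLA_S}s")

if __name__ == "__main__":
    test_checkout_meets_sla()
    print("checkout within SLA")
```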

Also, by the time the app reaches system build and user acceptance testing, it is too late to fix many performance issues without major rework. You are left either releasing an app that doesn't perform, angering your user base and losing revenue and productivity, or spending time and money starting from scratch. Incorporating network-informed performance testing as early as possible helps prevent this.

This structured approach to performance management matters because once your app is released to the public or delivered to a client, you have little control over its performance environment. But you can control how you incorporate performance throughout development and testing. If you have done that successfully, you won't have reason to blame the network, or anything else, for poor performance.

Dave Berg is VP of Product Strategy for Shunra Software.

Related Links:

www.shunra.com
