Stop Using the Network as an Excuse for Poor App Performance

Dave Berg

Recent data on the world's fastest 4G LTE network speeds places the USA 8th. Some will use this as an excuse for why their mobile applications perform poorly and fail to meet user expectations. For a majority, however, this information confirms what they have always known: the network never was, and never will be, the answer to better performance.

If the network itself can't be used to guarantee increased app performance, what can a developer or tester do to ensure an app will perform once it is released across less-than-stellar mobile networks? Be realistic with expectations, and virtualize networks and services so you can test under the conditions your end users actually experience.

First, understand that compromises will have to be made. With each new device or faster network option, users will expect to see a corresponding increase in application speed and content. Operations, marketing, and executives will also expect apps to be feature-rich, with big logos, plenty of functions, and connections to myriad third-party services.

Not all of this is possible; these rich features will quickly erode performance. Put yourself on a "performance budget". Even with management and users clamoring for every rich feature made possible by faster network speeds, choose only those that are most business critical or that customers demand.

Not every feature will make the cut, but those that do will work and perform well. This is preferable to a feature-rich, poorly performing app that leaves users frustrated with your company.
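
To make the budget idea concrete, here is a minimal sketch in Python of what enforcing one could look like. The metrics and thresholds are hypothetical illustrations, not figures from this article; set them from your own business-critical targets.

# A minimal sketch of a "performance budget" check. The thresholds here
# (payload size, request count, load time) are hypothetical examples.

BUDGET = {
    "total_payload_kb": 500,      # max bytes shipped per screen, in KB
    "request_count": 25,          # max network requests per screen
    "load_time_ms_on_3g": 3000,   # max load time under an emulated 3G profile
}

def check_budget(measured: dict) -> list[str]:
    """Return a list of budget violations for one measured screen."""
    violations = []
    for metric, limit in BUDGET.items():
        value = measured.get(metric)
        if value is not None and value > limit:
            violations.append(f"{metric}: {value} exceeds budget of {limit}")
    return violations

if __name__ == "__main__":
    # Example measurement for a single screen of the app.
    sample = {"total_payload_kb": 620, "request_count": 18, "load_time_ms_on_3g": 3400}
    for v in check_budget(sample):
        print("BUDGET VIOLATION:", v)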

Testing Apps Over Realistic Network Conditions

For your apps to perform at their peak over the network, you have to test them over realistic network conditions. Testing your features and services to learn how different network conditions and geographies affect performance helps you manage your performance budget.

You can do this by capturing and virtualizing production network conditions and using them in your test environment. This provides test results that are more accurate and predictive of what your performance will be in the real world.
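
Capturing production conditions normally relies on dedicated recording tooling. Purely as an illustration of the idea, the rough Python sketch below samples one characteristic, round-trip connect latency and failure rate, against a placeholder endpoint ("api.example.com" is not from the article); real capture products record far richer profiles.

# A crude, generic sketch of capturing one network characteristic: round-trip
# latency measured by timing TCP connects to a production endpoint.

import socket
import statistics
import time

def sample_connect_latency(host: str, port: int = 443, samples: int = 20) -> dict:
    rtts_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=5):
                rtts_ms.append((time.perf_counter() - start) * 1000)
        except OSError:
            pass  # count failures as lost samples
        time.sleep(0.5)
    return {
        "samples": samples,
        "successful": len(rtts_ms),
        "latency_ms_median": statistics.median(rtts_ms) if rtts_ms else None,
        "loss_pct": 100 * (samples - len(rtts_ms)) / samples,
    }

if __name__ == "__main__":
    print(sample_connect_latency("api.example.com"))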

Mobile network conditions fluctuate far more than broadband connections do, which makes testing an app over a pristine Wi-Fi connection inaccurate, if not outright misleading.

A Wi-Fi-connected test does not represent how end users experience the app. However, if you can capture and virtualize the conditions end users do experience, your testing will reliably reflect how the app will perform in production.

Performance testing your application in an accurate test environment means accounting for application dependencies and the network conditions affecting them. This goes beyond native features: third-party services are also affected by variable network conditions, and because you do not have direct control over them, they are often harder to account for.

Service virtualization testing will provide insight into how these third-party services interact with your application. But without also testing those services over a virtualized network, you once again lose sight of how they will perform in the hands of real-world users.
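
As a generic illustration of the service virtualization idea (not the author's own tooling), the Python sketch below stands in locally for a third-party API, returning a canned response and injecting artificial latency so the app under test can exercise the dependency without calling the real service. The endpoint path, payload, and 250 ms delay are hypothetical.

# A minimal virtualized third-party service: canned response plus simulated latency.

import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

SIMULATED_LATENCY_S = 0.25  # pretend the third-party backend takes 250 ms

class VirtualServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/v1/quote":
            time.sleep(SIMULATED_LATENCY_S)  # emulate slow upstream processing
            body = json.dumps({"symbol": "EXMPL", "price": 42.0}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Point the app under test at http://localhost:8080 instead of the real service.
    HTTPServer(("localhost", 8080), VirtualServiceHandler).serve_forever()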

Virtualizing real-world mobile network conditions also allows you to alter the parameters of the network (available bandwidth, latency, jitter, packet loss) to see how an app performs under varying conditions. These conditions can represent typical network scenarios or edge cases. And because the conditions are configurable and repeatable, unlike in-the-wild testing, issues can be fixed and retested under exactly the same virtual network constraints.
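
One generic way to impose such parameters on a Linux test host is the tc/netem queueing discipline. The sketch below drives it from Python; it assumes Linux and root privileges, the scenario values are illustrative only, and it is standard OS tooling rather than any specific commercial network virtualization product.

# Apply repeatable network scenarios to a test host's interface via tc/netem.

import subprocess

SCENARIOS = {
    "good_4g":   {"rate": "10mbit",  "delay": "60ms",  "jitter": "10ms", "loss": "0.1%"},
    "edge_case": {"rate": "500kbit", "delay": "400ms", "jitter": "80ms", "loss": "3%"},
}

def apply_scenario(interface: str, name: str) -> None:
    s = SCENARIOS[name]
    subprocess.run(["tc", "qdisc", "replace", "dev", interface, "root", "netem",
                    "rate", s["rate"],
                    "delay", s["delay"], s["jitter"],
                    "loss", s["loss"]],
                   check=True)

def clear(interface: str) -> None:
    subprocess.run(["tc", "qdisc", "del", "dev", interface, "root"], check=False)

if __name__ == "__main__":
    apply_scenario("eth0", "edge_case")   # run your test pass here...
    # clear("eth0")                       # ...then restore the interface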

This testing should be done at the earliest possible phase of development. Developers should have access to performance criteria when writing code. If they only have functional criteria, you will only know that "Action A = Outcome"; you won't know whether that transaction completes within your stated SLAs.
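
As a sketch of what a performance criterion can look like in a developer's hands, the test below asserts both the functional outcome and the SLA. The 2-second threshold and the submit_order() function are hypothetical placeholders for your own code and targets.

# Assert the functional outcome AND that the transaction meets its SLA.

import time
import unittest

SLA_SECONDS = 2.0

def submit_order(order):
    """Placeholder for the real transaction under test."""
    time.sleep(0.3)          # stand-in for network + server work
    return {"status": "ok"}

class OrderPerformanceTest(unittest.TestCase):
    def test_submit_order_meets_sla(self):
        start = time.perf_counter()
        result = submit_order({"sku": "ABC-123", "qty": 1})
        elapsed = time.perf_counter() - start

        self.assertEqual(result["status"], "ok")           # functional criterion
        self.assertLessEqual(elapsed, SLA_SECONDS,          # performance criterion
                             f"Transaction took {elapsed:.2f}s, SLA is {SLA_SECONDS}s")

if __name__ == "__main__":
    unittest.main()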

Also, by the time the app passes to the system build and user acceptance levels, it is too late to fix many performance issues without a major reworking of the app. You are then stuck either releasing an app that doesn't perform, angering your user base and losing revenue and productivity, or losing time and money starting over from scratch. Incorporating network-informed performance testing as early as possible can help prevent this from happening.

This structured approach to performance management is important because once your app is released to the public or delivered to a client, you have little control over its performance environment. But you can control how you incorporate performance throughout development and testing. If you have done that successfully, you won't have reason to blame the network, or anything else, for poor performance.

Dave Berg is VP of Product Strategy for Shunra Software.

Related Links:

www.shunra.com
