Trying To Improve Mobile App Experiences? The New Standard Is "Flawless"

John Reister

There was a time when consumers were so happy to have the power of a computer in their pockets that they’d put up with some usage flaws in exchange for information and entertainment on the go. But with higher costs of owning and using smartphones, and experiences enriched by 4G speeds, consumers have developed much higher performance expectations.

For the past two years, Vasona Networks has surveyed more than 1,000 smartphone owners about their mobile broadband performance expectations. This year, 72% of respondents said that they expect “good mobile data performance all of the time” with no hiccups or flaws. This is up 8% from the year before.

Even more striking is what we’ve learned about the increasing onus consumers place on their service providers to ensure great app experiences. The majority of consumers told us they hold their mobile operator most responsible when apps don’t function properly: 55%, up from last year’s 40%, when app developers and operators were essentially tied for blame. The share of consumers who held the app developer most responsible dropped to 25%. In our most recent survey, the remaining 20% suspected either the device maker or the operating system of causing poor app performance. Considering recent operating system update struggles, perhaps more of the blame will land there in the future.

Regardless of where consumers place responsibility, delivering a great app experience is truly a shared burden across operators, technology providers and the developers of those apps.

On the app side, developers who prioritize performance management work smartly to control the size of their apps, take advantage of the latest compression techniques, and give users control over how content is displayed depending on the type of network they’re connected to. These app developer strategies are well covered by other authors on this site.

From our experience working with service providers, there are exciting new techniques available in mobile networks that drive better app experiences through smarter approaches to the RAN (Radio Access Network). Managing contending traffic that shares the cell air interface is a major area of focus: it is where bandwidth additions are most expensive and, consequently, where congestion is most frequently encountered. Operators are finding better ways to address the diverse mixture of streaming media, web browsing and downloads that can cause severe congestion within cells.

Solutions like edge application controllers assess whether a cell faces congestion at any given moment, identify which sessions are causing it, and determine which experiences are suffering most as a result. Bandwidth is then reallocated based on application type and subscriber needs.
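To make the reallocation step concrete, here is a minimal sketch of weighted sharing of a congested cell's capacity by application class. The class names, weights, and function are illustrative assumptions for this article, not Vasona's actual algorithm.

```python
# Hypothetical sketch: splitting a congested cell's capacity across active
# sessions in proportion to a priority weight per application class.
# Weights below are assumed values, not a vendor's real policy.
CLASS_WEIGHTS = {"video": 3.0, "web": 2.0, "download": 1.0}

def reallocate(cell_capacity_mbps, sessions):
    """sessions: list of (session_id, app_class) tuples.
    Returns {session_id: allocated_mbps}; allocations sum to cell capacity."""
    total_weight = sum(CLASS_WEIGHTS[app] for _, app in sessions)
    return {
        sid: cell_capacity_mbps * CLASS_WEIGHTS[app] / total_weight
        for sid, app in sessions
    }

shares = reallocate(30.0, [("s1", "video"), ("s2", "web"), ("s3", "download")])
# With these weights, the video session receives the largest share,
# and the three allocations together account for the full 30 Mbps.
```

A real controller would of course recompute this continuously as sessions come and go, and weight subscribers as well as application classes; the sketch only shows the proportional-sharing idea.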

This is a leap beyond earlier probe and DPI (Deep Packet Inspection) approaches, which observe traffic patterns and congestion and then communicate through a policy control function to take enforcement action. But congestion and latency are transient phenomena that may last seconds or less. These small incidents can destroy app experiences, with repercussions that outlast the initial periods of congestion. In such cases, the probe reveals the information too late: the service experience is already compromised before the DPI takes action.
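The timing gap described above can be illustrated with a toy simulation (purely illustrative, not any vendor's implementation): a detector that samples a latency trace only every few hundred milliseconds can miss a sub-second congestion burst entirely, while a per-packet (inline) view catches it.

```python
# Toy illustration: coarse polling vs. inline observation of a short
# congestion burst. One list entry = one millisecond of observed latency.

def detect(latencies_ms, sample_every, threshold_ms=100):
    """Return the tick indices where congestion (latency above threshold)
    is observed, checking only every `sample_every`-th tick."""
    return [i for i, latency in enumerate(latencies_ms)
            if i % sample_every == 0 and latency > threshold_ms]

# Baseline latency of 20 ms, with a 300 ms burst of 250 ms latency
# starting at t = 100 ms.
trace = [20] * 1000
for t in range(100, 400):
    trace[t] = 250

inline = detect(trace, sample_every=1)    # per-packet view sees every burst tick
polled = detect(trace, sample_every=500)  # poller samples only t=0 and t=500
# The poller checks before and after the burst and reports nothing,
# even though the burst degraded 300 ms of traffic.
```

The numbers are arbitrary, but the effect is the point: an enforcement loop driven by periodic reports can only react to congestion it happened to sample, which is why acting at the edge, in-line with the traffic, matters for sub-second incidents.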

The results of better approaches to RAN management speak for themselves. For instance, a US service provider using an edge application controller to manage the impact of congestion has achieved more than 30% better bitrate performance for video and web browsing, and more than 35% lower service latency during congestion. These numbers mark the difference between a great app experience and a frustrating one: between a finger tapping happily on a screen and one pointing angrily at the offending party.

As consumers harden their expectation that mobile operators deliver flawless app experiences, the industry continues to move closer to that promise.


John Reister is VP of Marketing and Product Management for Vasona Networks.

