
Apps That Crash? How App Stability Impacts User Experience and Affects a Business's Bottom Line

James Smith
SmartBear

Mobile apps play an increasingly central role in the interactions between customers and brands. We know that users spent about $34 billion on apps in Q2 of 2021, breaking the previous year's Q2 record by a whopping $7 billion. With nearly 1.8 million apps on the Apple App Store and more than 1,000 new apps released every day, the modern smartphone user has endless choice and variety, which translates to higher user experience standards.

B2C apps allow customers to engage with both new and staple brands alike, driving revenue growth opportunities for businesses. B2B apps, on the other hand, give organizations the opportunity to modernize things like training, employee engagement, workflow management, logistics and planning, and more, without the need for technical experts on staff.

One of the strongest indicators we have of smooth, error-free user experiences is app stability. As a vital business metric, an app's stability score translates directly to customer conversion, engagement and retention. The importance of app stability cannot be overstated.

Bugsnag recently released the results of its second app stability report: Application Stability Index (ASI): Characteristics of Leading Mobile Apps. The report analyzed app stability scores in industry verticals such as B2B SaaS, eCommerce, consumer goods, finance & banking, gaming, technology and travel & hospitality. The goal of the ASI is to help organizations understand how their app is performing compared to others and what level of stability it needs to achieve leader status in its industry.

The results of the ASI highlight the need for regular, proactive error monitoring and stability management, which comes down to tracking how often an app crashes. Overall, the data showed that of the ten verticals analyzed, travel and hospitality earned the highest median app stability score (99.90%), followed by a three-way tie among B2B SaaS, eCommerce, and finance and banking, each scoring 99.85%. Media and entertainment was at the bottom (99.65%).

Let's explore a few of the most prominent app success indicators and how app engineers can shift their development strategy to better meet the needs of today's app users.

Higher Stability Score = Higher App Store Ratings

The median stability score across all of the apps analyzed in the ASI was 99.8%. The ASI found that a stability score just 1% lower can lead to a drop of almost a full star in the app stores. That fact alone has huge implications for developers and app engineers. More stable apps drive more exceptional user experiences, maximize retention and build competitive advantage, which is critical to an app's long-term growth and success. To secure higher app store ratings, an app must deliver on usefulness, design, engagement and stability. Balancing all four of those elements is key to growing the app's reputation and rating, and thus to gaining users and boosting an app's profitability.

Higher Stability Score = Higher Interaction Volume and Value

While we typically define app stability as a calculation of crash-free sessions, it is also impacted by business decisions. Organizations must analyze the value and volume of interactions in order to get an accurate representation of their app stability. In terms of value, that means the interest shown by a customer in a certain product or service and their overall experience with the app, which can translate into brand loyalty, referrals to friends and family, and more in-app purchases. Volume, on the other hand, simply counts the number of interactions with a specific app. While value and volume are both indicators of higher app stability, interaction value may be the strongest predictor of app stability because it can help circle resources back into the improvement of an app. Since interaction value determines product and pricing models, it carries over to an engineering team's incentives to roll out bug fixes and feature releases.
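Since the article defines app stability as a calculation of crash-free sessions, the metric itself is simple to express. The sketch below is a minimal illustration, not Bugsnag's implementation; the session data shape and field names are assumptions for the example.

```python
# Illustrative sketch: stability score as the percentage of
# crash-free sessions out of all recorded sessions.

def stability_score(sessions):
    """Return crash-free sessions as a percentage of all sessions."""
    if not sessions:
        return 100.0  # no sessions recorded, nothing crashed
    crash_free = sum(1 for s in sessions if not s["crashed"])
    return 100.0 * crash_free / len(sessions)

# Example: 2 crashed sessions out of 1,000 gives 99.8%,
# the median score reported in the ASI.
sessions = [{"crashed": i < 2} for i in range(1000)]
print(f"{stability_score(sessions):.2f}%")  # 99.80%
```

At this scale the metric is unforgiving: dropping from 99.8% to 98.8% means ten times as many crashed sessions, which is why a 1% difference can move app store ratings.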

Weekly Release Cadence Will Become the Norm

Software engineers are adopting a weekly release cadence to replace the bi-weekly norm. Data across industries indicates that apps are being updated with a new version on average four times within a thirty-day span. This is important because it tells us there is a greater push by developers to regularly deliver features and, most importantly, address software bugs that decrease stability scores. The direct correlation between app store ratings and accelerated release cadences tells us that, with the right tools, developers can increase release frequency without sacrificing quality. It's worth noting here that some bugs are inevitable in any app release. Software engineers need only worry about fixing the bugs that matter — that is, the ones that tangibly impact the user experience. To identify which bugs matter, comprehensive diagnostic tools are essential, enabling engineers to prioritize errors and make data-driven decisions.
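One common way to "fix the bugs that matter" is to triage error groups by how many users they affect rather than by raw event count, since a noisy but low-impact error should not outrank a crash hitting many users. The sketch below is a hypothetical example of that triage; the error names and field names are illustrative assumptions, not data from any monitoring tool.

```python
# Hypothetical triage sketch: rank error groups by users affected,
# so the bugs that tangibly hurt the user experience are fixed first.

from operator import itemgetter

errors = [
    {"error": "NullPointerException in checkout", "users_affected": 1200, "events": 4800},
    {"error": "Timeout fetching avatar", "users_affected": 30, "events": 9000},
    {"error": "Crash on payment confirm", "users_affected": 2500, "events": 2600},
]

# Sort by user impact, descending. Note the avatar timeout has the
# most raw events but the least impact, so it lands at the bottom.
triaged = sorted(errors, key=itemgetter("users_affected"), reverse=True)

for e in triaged:
    print(f'{e["users_affected"]:>5} users  {e["error"]}')
```

Real error-monitoring tools add further signals (new vs. regressed errors, release version, handled vs. unhandled), but impact-first ordering is the core idea.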

The ASI also indicated that higher frequency of release helps app developers improve the dynamic with their customers and ultimately build more confidence in their development strategy. Adopting a progressive delivery strategy, defined by phased rollouts, feature flags and A/B testing, is a key part of enabling these quicker app release cycles.
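The phased rollouts and feature flags mentioned above are often implemented by deterministically bucketing users, so a feature can ramp from a small percentage to everyone without users flickering in and out. This is a minimal sketch of that pattern, assuming a hypothetical flag name and rollout percentages; production flag systems add targeting rules and kill switches on top.

```python
# Minimal sketch of a percentage-based feature flag for phased
# rollouts: hash each user id into a stable 0-99 bucket and enable
# the feature only for buckets below the rollout percentage.

import hashlib

def is_enabled(flag: str, user_id: str, rollout_pct: int) -> bool:
    """Deterministically enable `flag` for ~rollout_pct% of users."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct

# Ramping from 5% to 50%: buckets are stable, so every user enabled
# at 5% remains enabled as the rollout widens.
users = [f"user{i}" for i in range(1000)]
enabled_5 = {u for u in users if is_enabled("new-checkout", u, 5)}
enabled_50 = {u for u in users if is_enabled("new-checkout", u, 50)}
assert enabled_5 <= enabled_50  # monotone ramp-up
```

Including the flag name in the hash keeps rollouts of different features independent, so the same 5% of users is not always the guinea pig group.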

Conclusion

Apps play an increasingly prominent role in our personal and professional lives, and users are coming to expect smoother and more dynamic app experiences. Even though app stability is a KPI owned by engineers and developers, its impact is felt throughout the larger organization through brand reputation and the ability to compete with similar apps. Having a stronger focus on app stability will enable engineering teams to build healthier apps that deliver superior customer experiences.

James Smith is SVP of the Bugsnag Product Group at SmartBear
