
How Insufficient API Testing Can Impact Your End Users

Sven Hammar

We have seen Application Programming Interfaces (APIs) emerge as a new engine for digital business transformation, tying into many parts of the organization, including new product innovation and the prospect of new partnership integrations. In creating effective API-centric architectures, companies are breaking free from the tedium of monolithic design so that different teams can work independently and quickly produce new offerings to keep pace with market innovation. And, as APIs are increasingly published so that applications can easily make processes available to other applications, there's a world of new opportunity open to every business with the appetite and ability to capitalize on the API economy.

According to Gartner VP and distinguished analyst Kristin R. Moyer, "The API economy is an enabler for turning a business or organization into a platform." It's a positive message for platform-based businesses that are experiencing accelerated digital transformation fueled by successful API management, but it's one that should also come with a warning: given APIs' essential role in driving innovation, there should be more focus than ever on ensuring APIs are tested thoroughly so that they deploy and function properly.

This blog takes a look at end users – the people waiting for your websites or applications to load – and the effects insufficient testing may have in terms of site abandonment, loss of brand loyalty, and defection to a competitor's solution.

Digital Desertion: Consumer Survey Results

Now more than ever, every second counts in the online world. Slow rendering time and digital disappointment will often drive consumers to a competitor's offerings. Apica recently conducted a survey with research agency 3Gem in an effort to better understand consumers and how they are interacting with websites and apps. We surveyed 2,500 web and app users in the US, UK, and Sweden to explore their expectations regarding digital experiences and to uncover exactly what would impact their opinion of brands.

Start with Poor Website and App Performance Results in Digital Desertion to learn more about the survey results.

On average, 83 percent of consumers in all markets are affected negatively by poor website or app performance. While around half of respondents say they lose patience and are somewhat negatively affected by page loading delays, over 35 percent of respondents say they abandon poor-performing sites and apps quickly – often abandoning a site in 10 seconds or less. The survey also identified that 75 percent of users expect webpages and apps to load faster than they did three years ago.

These alarming statistics are often born of high expectations for web functionality gone wrong. With APIs driving much of the new functionality on today's e-commerce sites, thorough testing has become more important than ever. API testing should cover both internal and external APIs, because it is the combination of the two that the user experiences when trying to complete a transaction.

Done properly, API testing will minimize deployment issues and catch performance issues earlier in the development cycle.

Validating APIs can be automated to a large extent and should be performed for every release. The secret lies in performing validation continuously for all code and infrastructure changes. Validating third-party APIs is also a constant headache, as those providers do not always inform you about changes. You should not, of course, abandon conventional UI testing, but for performance and uptime, API testing delivers more bang for the buck.
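To make this concrete, the sketch below shows what one such automated validation check might look like using Python with pytest and requests. The endpoint, response fields, and latency budget are illustrative assumptions, not part of any specific product or service.

```python
# A minimal sketch of an automated API validation check.
# The /checkout/quote endpoint, its fields, and the latency budget are hypothetical;
# adapt them to your own internal and external APIs.
import time
import requests

BASE_URL = "https://api.example.com"   # hypothetical service under test
LATENCY_BUDGET_SECONDS = 0.5           # example performance budget


def test_quote_endpoint_contract_and_latency():
    start = time.monotonic()
    response = requests.get(
        f"{BASE_URL}/checkout/quote",
        params={"sku": "ABC-123"},
        timeout=5,
    )
    elapsed = time.monotonic() - start

    # Functional validation: the API answers successfully and returns the expected fields.
    assert response.status_code == 200
    body = response.json()
    assert "price" in body and "currency" in body

    # Performance validation: the call stays within the latency budget.
    assert elapsed <= LATENCY_BUDGET_SECONDS, f"quote took {elapsed:.2f}s"
```

Run as part of the release pipeline, a suite of checks like this exercises each API the transaction depends on, so a slow or broken dependency surfaces long before users feel it.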

Catching performance issues early in the development cycle also makes them much easier to fix than discovering them on the day of release.

Make sure that your test tool or vendor provides support for security tools as well as modularized selection of test data. It is a big plus if the test scripts are reusable for monitoring, ideally updated via an API from the build engine.
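One way to picture that hand-off: after a successful build, a small pipeline step could push the same script used for release validation to the monitoring side. The example below is a sketch only; the monitoring endpoint, payload shape, and token variable are assumptions rather than any particular vendor's API.

```python
# A hedged sketch of a post-build step that reuses a release test script for monitoring
# by uploading it to a monitoring service. The endpoint and payload are hypothetical.
import os
import requests

MONITORING_API = "https://monitoring.example.com/api/v1/scripts"  # assumed endpoint
API_TOKEN = os.environ["MONITORING_API_TOKEN"]                    # injected by the build engine


def publish_test_script(script_path: str, check_name: str) -> None:
    """Upload the latest version of an API test script so monitoring stays in sync with each build."""
    with open(script_path, "r", encoding="utf-8") as handle:
        script_body = handle.read()

    response = requests.put(
        f"{MONITORING_API}/{check_name}",
        json={"source": script_body, "interval_seconds": 300},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()


if __name__ == "__main__":
    # Called from the CI pipeline after the release tests pass.
    publish_test_script("tests/test_checkout_api.py", "checkout-api-check")
```

The design point is simply that validation and monitoring share one source of truth, so production checks never drift behind what the latest release actually does.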


What Does This Mean for Your Business?

Apica's survey results affirm that consumers are more demanding and less forgiving when it comes to website and app performance, and businesses should heed the warning. With three-quarters of users expecting sites and apps to perform faster than they did three years ago, businesses must recognize that they need to manage the peaks and troughs of online traffic and deliver consistently exceptional customer experiences.

The survey also highlights the "Digital Desertion" syndrome: if users are disappointed by their digital experience, they often move over to competitors' websites – leaving your site for one that provides a better digital experience. The revenue and brand impact is further compounded by the new reality that nearly 4 in 10 consumers indicate they would likely share a poor online experience with friends or colleagues.

There is nothing to dispute — negative digital experiences are likely to have an impact on brand reputation and loyalty.

Today, websites and apps are an integral, consumer-facing part of your business. The pressure is on companies to continuously monitor and optimize their online performance to deliver a digital experience that meets today's user expectations. That means taking a proactive approach across the development lifecycle: comprehensive performance testing to safeguard the user experience before release, and strong monitoring capabilities to catch issues in production before they reach users. Don't leave your revenue, brand, and customer experience to chance.
