API Performance in 2016: New Insights for Organizations that Develop and Consume APIs
March 28, 2016

Priyanka Tiwari
SmartBear Software


When it comes to developing, deploying, and maintaining a truly powerful application, performance needs to be a top priority.

But that performance isn't limited to the software your team builds and maintains. The performance of an application also depends on the performance of the APIs that power it.

SmartBear Software recently released the results of a global API survey, which includes responses from more than 2,300 software professionals in over 50 industries, across 104 countries around the globe.

The report included input from both API providers — organizations that develop and deploy APIs — and API consumers — organizations that use APIs to power their applications or internal systems.

When Asked: Why Do You Consume/Use APIs?

■ 50% said they use APIs to provide interoperation between internal systems, tools, and teams

■ 49% said they use APIs to extend functionality in a product or service

■ 42% said they use APIs to reduce development time

■ 38% said they use APIs to reduce development cost

It's easy to understand the impact that poor API performance could have on any of these use cases. That's why it's not surprising that, when asked how they would react to an API quality or performance issue, one-third of consumers said they would consider permanently switching API providers.

Whether you work in an organization that develops APIs or rely on tools and systems that depend on them, performance should matter to you.

How Can You Ensure API Performance?

Just as you use tools to test and monitor your application, you also need to invest in the right tools for testing and monitoring your APIs. Whether you're launching an API of your own or are concerned about the third-party APIs that power your applications, you need to understand how those APIs are performing. You also need to understand their capacity so that you can determine how much traffic your applications can handle and adjust as necessary.

In most cases, ensuring API performance begins with load testing your API to ensure that it functions properly in real-world situations.

With the help of specialized testing software, load testing allows testers to answer questions like:

"Is my system doing what I expect under these conditions?"

"How will my application respond when a failure occurs?"

"Is my application's performance good enough?"

But if your performance strategy ends there, you could still be at risk of costly performance problems. This is where monitoring comes in.

API monitoring allows you to determine how your APIs are performing and compare those results to the performance expectations set for your application. Monitoring will enable you to collect insights that can then be incorporated back into the process. Once you've created your monitors and established your acceptable thresholds, you can set up alerts to be notified if performance degrades or the API goes offline.
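
As a rough sketch of what a single synthetic check involves (the endpoint and latency threshold below are hypothetical), a monitor repeatedly measures availability and response time and raises an alert when either falls outside the acceptable range. A real monitoring product runs such checks on a schedule, from multiple locations, and handles alert routing for you.

```python
# Sketch of one synthetic API check: measure availability and latency,
# then flag an alert condition against a hypothetical threshold.
import time
import urllib.request

API_URL = "https://api.example.com/v1/health"  # placeholder health endpoint
LATENCY_THRESHOLD_S = 1.5                      # example acceptable threshold

def run_check(url: str) -> dict:
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            available = 200 <= resp.status < 300
    except Exception:
        available = False
    return {"available": available, "latency_s": time.perf_counter() - start}

result = run_check(API_URL)
if not result["available"]:
    print("ALERT: API is unreachable or returning errors")
elif result["latency_s"] > LATENCY_THRESHOLD_S:
    print(f"ALERT: latency {result['latency_s']:.2f}s exceeds threshold")
else:
    print(f"OK: responded in {result['latency_s']:.2f}s")
```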

Monitoring is Critical for Identifying and Resolving API Performance Issues

One of the key findings from the State of API 2016 Report is that a majority of API providers still face setbacks when it comes to resolving API performance issues.

Fewer than 10% of API issues are resolved within 24 hours, and nearly one in four API quality issues (23.9%) remain unresolved for a week or more.

The biggest barrier to resolving API quality issues is determining the root cause (45.2%), followed by isolating the API as being the cause of the issue (29%).

A premium synthetic monitoring tool enables you to monitor your internal or third-party APIs proactively, from within your private network or from locations across the globe. A monitoring tool will help you find API and application issues, engage experts in a timely manner, and fix issues before they impact your end users. If you are using external third-party APIs for mission-critical applications, a tool can also help you monitor SLAs and hold your vendors accountable in case of unavailability or performance degradation.
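
As a back-of-the-envelope illustration of that SLA math (all figures below are hypothetical), the results of periodic checks can be rolled up into a measured availability figure and compared against a vendor's commitment:

```python
# Hypothetical SLA roll-up: compare measured availability against a 99.9% commitment.
checks_run = 43_200     # e.g. one check per minute over a 30-day month
failed_checks = 95      # checks that timed out or returned errors

availability = (checks_run - failed_checks) / checks_run
sla_target = 0.999      # vendor's committed availability

print(f"measured availability: {availability:.4%}")  # roughly 99.78%
print(f"SLA target: {sla_target:.4%}")
print("SLA breached" if availability < sla_target else "SLA met")
```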

Priyanka Tiwari is Product Marketing Manager, AlertSite, SmartBear Software.
