API Performance in 2016: New Insights for Organizations that Develop and Consume APIs
March 28, 2016

Priyanka Tiwari
SmartBear Software


When it comes to developing, deploying, and maintaining a truly powerful application, performance needs to be a top priority.

But performance isn't limited to the software your team builds and maintains. Just as importantly, an application's performance depends on the performance of the APIs that power it.

SmartBear Software recently released the results of a global API survey, which includes responses from more than 2,300 software professionals in over 50 industries, across 104 countries around the globe.

The report included input from both API providers — organizations that develop and deploy APIs — and API consumers — organizations that use APIs to power their applications or internal systems.

When Asked: Why Do You Consume/Use APIs?

■ 50% said they use APIs to provide interoperation between internal systems, tools, and teams

■ 49% said they use APIs to extend functionality in a product or service

■ 42% said they use APIs to reduce development time

■ 38% said they use APIs to reduce development cost

It's easy to see the impact that poor API performance could have on any of these use cases. That's why it's not surprising that, when asked how they would react upon encountering an API quality or performance issue, one-third of consumers said they would consider permanently switching API providers.

Whether you work in an organization that develops APIs, or have tools and systems that depend on APIs — performance should matter to you.

How Can You Ensure API Performance?

Just like you use tools to test and monitor your application, you also need to invest in the right tools for testing and monitoring your API. Whether you're launching an API of your own, or are concerned about the third party APIs that power your applications, you need to understand how your APIs are performing. You also need to understand the capacity of these APIs so that you can determine the amount of volume your applications can handle and adjust as necessary.

In most cases, ensuring API performance begins with load testing your API to ensure that it functions properly in real-world situations.

By using specialized load testing software, testers can answer questions like:

"Is my system doing what I expect under these conditions?"

"How will my application respond when a failure occurs?"

"Is my application's performance good enough?"

But if your performance strategy ends there, you could still be at risk of costly performance problems. This is where monitoring comes in.

API monitoring allows you to determine how your APIs are performing and compare those results to the performance expectations set for your application. Monitoring will enable you to collect insights that can then be incorporated back into the process. Once you've created your monitors and established your acceptable thresholds, you can set up alerts to be notified if performance degrades or the API goes offline.
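The monitor-plus-threshold idea described above can be reduced to a single synthetic check: probe the API, time the response, and classify the result against an acceptable-latency budget. This is a minimal sketch, assuming an injectable probe function and an illustrative 500 ms threshold; real monitoring products add scheduling, multi-location probes, and alert delivery on top of this.

```python
import time

LATENCY_THRESHOLD = 0.5  # seconds; an assumed acceptable-response budget

def check_api(probe_fn, threshold=LATENCY_THRESHOLD):
    """Run one synthetic check and classify the result.

    Returns (status, latency) where status is "healthy", "degraded"
    (slow but responding), or "down" (error or failed probe).
    """
    start = time.perf_counter()
    try:
        ok = bool(probe_fn())
    except Exception:
        ok = False
    latency = time.perf_counter() - start

    if not ok:
        return "down", latency       # trigger an availability alert
    if latency > threshold:
        return "degraded", latency   # trigger a performance alert
    return "healthy", latency
```

A scheduler would run this check every few minutes and notify the on-call team whenever the status leaves "healthy", which is exactly the alerting workflow the paragraph above describes.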

Monitoring is Critical for Identifying and Resolving API Performance Issues

One of the key findings from the State of API 2016 Report is that a majority of API providers still face setbacks when it comes to resolving API performance issues.

Less than 10% of API issues are resolved within 24 hours. Nearly 1-in-4 API quality issues (23.9%) will remain unresolved for one week or more.

The biggest barrier to resolving API quality issues is determining the root cause (45.2%), followed by isolating the API as being the cause of the issue (29%).

A premium synthetic monitoring tool enables you to monitor your internal or third-party APIs proactively, from within your private network or from locations across the globe. A monitoring tool will help you find API and application issues, engage experts in a timely manner, and fix issues before they impact your end users. If you are using external third-party APIs for your mission-critical applications, a tool can help you monitor SLAs and hold your vendors accountable in the event of unavailability or performance degradation.
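Holding a vendor to an SLA ultimately comes down to arithmetic over your check history: the fraction of successful checks is the measured availability, and it either meets the contracted target or it doesn't. A minimal sketch, assuming a list of boolean check results and an illustrative 99.9% availability SLA:

```python
def availability(checks):
    """Fraction of successful synthetic checks over a reporting window."""
    return sum(1 for ok in checks if ok) / len(checks)

def sla_breached(checks, sla=0.999):
    """True if measured availability fell below the contracted SLA target."""
    return availability(checks) < sla
```

For example, two failed checks out of 1,000 yields 99.8% availability, which breaches a 99.9% SLA; the same record kept from independent monitoring gives you evidence to bring to the vendor.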

Priyanka Tiwari is Product Marketing Manager, AlertSite, SmartBear Software.
