
API Performance in 2016: New Insights for Organizations that Develop and Consume APIs

Priyanka Tiwari

When it comes to developing, deploying, and maintaining a truly powerful application, performance needs to be a top priority.

But performance isn't limited to the software your team builds and maintains. An application's performance also depends on the performance of the APIs that power it.

SmartBear Software recently released the results of a global API survey, which includes responses from more than 2,300 software professionals in over 50 industries across 104 countries.

The report included input from both API providers — organizations that develop and deploy APIs — and API consumers — organizations that use APIs to power their applications or internal systems.

When Asked: Why Do You Consume/Use APIs?

■ 50% said they use APIs to provide interoperation between internal systems, tools, and teams

■ 49% said they use APIs to extend functionality in a product or service

■ 42% said they use APIs to reduce development time

■ 38% said they use APIs to reduce development cost

It's easy to see the impact that poor API performance could have on any of these use cases. That's why it's not surprising that, when asked how they would react to an API quality or performance issue, one-third of consumers said they would consider permanently switching API providers.

Whether you work in an organization that develops APIs, or have tools and systems that depend on APIs — performance should matter to you.

How Can You Ensure API Performance?

Just like you use tools to test and monitor your application, you also need to invest in the right tools for testing and monitoring your API. Whether you're launching an API of your own, or are concerned about the third party APIs that power your applications, you need to understand how your APIs are performing. You also need to understand the capacity of these APIs so that you can determine the amount of volume your applications can handle and adjust as necessary.

In most cases, ensuring API performance begins with load testing: verifying that your API functions properly under real-world conditions.

Using specialized testing software, load testing lets testers answer questions like:

"Is my system doing what I expect under these conditions?"

"How will my application respond when a failure occurs?"

"Is my application's performance good enough?"

But if your performance strategy ends there, you could still be at risk of costly performance problems. This is where monitoring comes in.

API monitoring allows you to determine how your APIs are performing and compare those results to the performance expectations set for your application. Monitoring will enable you to collect insights that can then be incorporated back into the process. Once you've created your monitors and established your acceptable thresholds, you can set up alerts to be notified if performance degrades or the API goes offline.
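The threshold-and-alert pattern described above can be sketched in a few lines. This is a hypothetical illustration under assumed names (check_api, alert_if_needed, a 0.5-second budget), not any particular monitoring product's API:

```python
import time

LATENCY_THRESHOLD_S = 0.5  # assumed acceptable-response budget

def check_api(call, threshold_s=LATENCY_THRESHOLD_S):
    """Run one synthetic check: call the API, time it, and classify
    the result the way an alerting rule would."""
    start = time.perf_counter()
    try:
        call()
    except Exception as exc:
        return {"status": "down", "error": str(exc)}
    elapsed = time.perf_counter() - start
    status = "ok" if elapsed <= threshold_s else "degraded"
    return {"status": status, "latency_s": elapsed}

def alert_if_needed(result, notify=print):
    """Fire a notification when a check breaches the threshold
    or the API is unreachable."""
    if result["status"] != "ok":
        notify(f"ALERT: API {result['status']}: {result}")

# One synthetic check against a stand-in for a real API call.
result = check_api(lambda: time.sleep(0.01))
alert_if_needed(result)
```

A real monitor would run checks like this on a schedule from multiple locations and feed the results back into the thresholds themselves.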

Monitoring is Critical for Identifying and Resolving API Performance Issues

One of the key findings from the State of API 2016 Report is that a majority of API providers still face setbacks when it comes to resolving API performance issues.

Less than 10% of API issues are resolved within 24 hours, and nearly one in four API quality issues (23.9%) remain unresolved for a week or more.

The biggest barrier to resolving API quality issues is determining the root cause (45.2%), followed by isolating the API as being the cause of the issue (29%).

A premium synthetic monitoring tool enables you to monitor your internal or third-party APIs proactively, from within your private network or from locations across the globe. A monitoring tool helps you find API and application issues, engage experts in a timely manner, and fix issues before they impact your end users. If you rely on external third-party APIs for mission-critical applications, a monitoring tool can also help you track SLAs and hold your vendors accountable for unavailability or performance degradation.
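Holding a vendor to an SLA is ultimately an arithmetic question: what fraction of synthetic checks succeeded, and does that meet the contractual target? A minimal sketch, with an illustrative "three nines" (99.9%) default:

```python
def availability(check_results):
    """Measured availability: the fraction of synthetic checks
    that passed."""
    ok = sum(1 for r in check_results if r == "ok")
    return ok / len(check_results)

def meets_sla(check_results, target=0.999):
    """Compare measured availability against a contractual target;
    the 0.999 ('three nines') default is illustrative."""
    return availability(check_results) >= target

# Example: 1 failed check out of 1,000 -> 99.9% availability,
# which just meets a three-nines target.
results = ["ok"] * 999 + ["down"]
print(availability(results))   # 0.999
print(meets_sla(results))      # True
```

The same tally, computed from an independent monitoring tool rather than the vendor's own dashboard, is what gives you standing in an SLA dispute.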

Priyanka Tiwari is Product Marketing Manager, AlertSite, SmartBear Software.
