
Inefficient Apps Cause Overspending by Millions on Cloud

Enterprises with services operating in the cloud are overspending by millions due to inefficiencies in their apps and runtime environments, according to a poll conducted by Lead to Market and commissioned by Opsani.

Sixty-nine percent of respondents report regularly overspending their cloud budget by 25 percent or more, amounting to millions in unnecessary cloud spend. Respondents came from a mix of 100 companies, each verified as spending more than $5 million annually on the cloud, using the leading public clouds (AWS, Azure, and Google), internal clouds, and "others."

Gartner predicts that by 2022 overall cloud spend will reach more than $330 billion. Current estimates suggest that, even now, billions of this total is needless, wasted outlay. Why? Because resources are over-provisioned to buy peace of mind, and performance tuning happens only when an SLA is missed, rather than continuously as new code is released.

Of the poll respondents, 45 percent are releasing software in weekly, daily, or hourly sprints, and 65 percent of these companies plan to deploy their mainstream production applications on containers within the next 12 months. Yet despite this trend toward DevOps and microservices, only 43 percent of respondents are confident their applications are running efficiently in the cloud, which leads to sub-par user experiences and overpaying for unneeded resources.

Modern enterprises are neglecting the post-release portion of the delivery pipeline — continuous optimization of live cloud apps and their environments.

Survey respondents indicated that:

■ 49 percent cite improving application performance as the most important priority for their organization.

■ 54 percent report that their organization has only optimized their application stack in the event of an emergency.

■ 48 percent point to manual, time-consuming processes as the biggest hurdle to application optimization, due to complexity; even a simple five-container application can have more than 255 trillion resource and basic parameter permutations. It's beyond human scale.
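The permutation claim is easy to sanity-check with a toy accounting. In this purely illustrative sketch (the tuning grids below are assumptions, not the survey's actual model), each container exposes just two knobs, CPU and memory, and the combinations multiply across containers:

```python
# Illustrative only: assumed tuning grids, not the survey's actual model.
cpu_steps = 40   # e.g., 0.1 to 4.0 vCPU in 0.1-vCPU increments
mem_steps = 20   # e.g., 128 MiB to 2.5 GiB in 128-MiB increments
containers = 5

per_container = cpu_steps * mem_steps   # 800 settings per container
total = per_container ** containers     # combinations across all 5 containers

print(f"{total:,}")  # 327,680,000,000,000 -- already past 255 trillion
```

With only two knobs per container the search space blows past 255 trillion; add replica counts, JVM flags, or application parameters and it grows by orders of magnitude more, which is why manual tuning does not scale.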

Polled companies were also asked about their biggest DevOps priorities moving forward, choosing among reducing cloud spend by more than 30 percent, improving application performance by more than 20 percent, and accelerating release cycles by more than 200 percent:

■ Reducing cloud spend by more than 30%: 39 percent of respondents

■ Improving application performance by more than 20%: 32 percent of respondents

■ Accelerating release cycles by more than 200%: 23 percent of respondents

And overspending on cloud apps only grows as services gain traction. Take a company currently spending $50 million a year on the cloud. If that spend grows at 20 percent year over year, the total over the next five years will exceed $372 million. With 20 percent of that $372 million being unnecessary spend, that's more than $60 million in overspend.
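The arithmetic above can be checked in a few lines. The $50 million starting spend, the 20 percent growth rate, and the 20 percent waste figure come from the article; summing years 0 through 4 is one reasonable reading of "the next five years":

```python
# Five-year compound cloud spend for a company at $50M/year growing 20% YoY.
annual_spend = 50_000_000
growth = 0.20
years = 5

# Sum the spend for years 0 through 4.
total = sum(annual_spend * (1 + growth) ** y for y in range(years))
waste = 0.20 * total  # 20% of the total treated as unnecessary spend

print(f"total: ${total:,.0f}")  # ~$372,080,000
print(f"waste: ${waste:,.0f}")  # ~$74,416,000 -- "more than $60 million"
```

Note that 20 percent of the $372 million total is actually about $74 million, so "more than $60 million" is a conservative framing of the same arithmetic.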

"Modern enterprises are using the cloud to reduce the costs of operating data centers, scale exponentially, bring value-added services online faster and more efficiently, and enjoy the flexibility of using resources as needed," said Ross Schibler, co-founder and CEO, Opsani. "But, operating in the cloud comes with costs that, if not managed continuously, can climb fast due to over provisioning and a lack of visibility into how live applications are affected by the CI/CD toolchain. Even small changes to live code disrupt tuned applications that lead to weak performance and higher costs."

