Cost of Poor Software Quality in US Exceeds $2 Trillion

The cost of poor software quality (CPSQ) in the US in 2020 was approximately $2.08 trillion, according to The Cost of Poor Software Quality In the US: A 2020 Report from the Consortium for Information & Software Quality (CISQ), co-sponsored by Synopsys.

This includes poor software quality resulting from software failures, unsuccessful development projects, legacy system problems, technical debt and cybercrime enabled by exploitable weaknesses and vulnerabilities in software.

"As organizations undertake major digital transformations, software-based innovation and development rapidly expands," said report author, Herb Krasner. "The result is a balancing act, trying to deliver value at high speed without sacrificing quality. However, software quality typically lags behind other objectives in most organizations. That lack of primary attention to quality comes at a steep cost."

Key findings from the report include:

Operational software failure

Operational software failure is the leading driver of the total CPSQ, estimated at $1.56 trillion, about 10X the cost of finding and fixing the defects before releasing the software into operation.

This figure represents a 22% increase since 2018, and it may well be an underestimate given the meteoric rise in cybersecurity failures and the fact that many failures go unreported.

Cybercrime enabled by exploitable weaknesses and vulnerabilities in software has been by far the largest growth area over the last two years, and the underlying cause is primarily unmitigated software flaws.

The report's first recommendation is to prevent defects from occurring as early as possible, when they are relatively cheap to find and fix. The second is to isolate, mitigate, and correct any failures that do occur as quickly as possible to limit the damage.
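
As a rough illustration of the economics behind that recommendation, the sketch below applies the report's approximate 10X multiplier for defects found in operation. The intermediate "testing" multiplier and the $500 base cost are hypothetical, chosen only to show the arithmetic, not figures from the report.

```python
# Illustrative arithmetic only: hypothetical per-defect fix costs using the
# report's rough "10X costlier in operation" multiplier. The intermediate
# "testing" multiplier and the $500 base cost are assumptions for the sketch.
STAGE_COST_MULTIPLIER = {
    "development": 1,   # defect caught by the author or a unit test
    "testing": 3,       # assumed intermediate stage, for illustration only
    "operation": 10,    # defect discovered after release (~10X, per the report)
}

def fix_cost(base_cost: float, stage: str) -> float:
    """Estimate the cost of fixing one defect at a given lifecycle stage."""
    return base_cost * STAGE_COST_MULTIPLIER[stage]

if __name__ == "__main__":
    base = 500.0  # hypothetical cost (USD) of fixing a defect during development
    for stage in STAGE_COST_MULTIPLIER:
        print(f"{stage:<12} ${fix_cost(base, stage):,.0f}")
```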

Unsuccessful development projects

Unsuccessful development projects, the next largest growth area of the CPSQ, account for an estimated $260 billion.

This figure has risen by 46% since 2018. There has been a steady project failure rate of ~19% for over a decade.

The underlying causes are varied, but one consistent theme has been the lack of attention to quality.

The report states: "It is amazing how many IT projects just assume that 'quality happens.' The best way to focus a project on quality is to properly define what quality means for that specific project and then focus on achieving measurable results against stated quality objectives."
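
One lightweight way to make "measurable results against stated quality objectives" concrete is to encode the objectives as explicit thresholds and check them automatically. The sketch below is a minimal illustration; the metric names and threshold values are hypothetical and would be replaced by each project's own definition of quality.

```python
# Minimal sketch of a project-specific quality gate. The metric names and
# thresholds are hypothetical examples, not values taken from the report.
QUALITY_OBJECTIVES = {
    "test_coverage_pct": 80.0,        # minimum acceptable test coverage
    "open_critical_defects": 0,       # maximum tolerated critical defects
    "mean_time_to_restore_hours": 4,  # maximum acceptable time to restore service
}

def check_quality(measured: dict) -> list:
    """Return the stated objectives the current measurements fail to meet."""
    failures = []
    if measured["test_coverage_pct"] < QUALITY_OBJECTIVES["test_coverage_pct"]:
        failures.append("test coverage below target")
    if measured["open_critical_defects"] > QUALITY_OBJECTIVES["open_critical_defects"]:
        failures.append("unresolved critical defects")
    if measured["mean_time_to_restore_hours"] > QUALITY_OBJECTIVES["mean_time_to_restore_hours"]:
        failures.append("time to restore service exceeds target")
    return failures

if __name__ == "__main__":
    # Hypothetical current measurements for a project
    current = {
        "test_coverage_pct": 72.5,
        "open_critical_defects": 2,
        "mean_time_to_restore_hours": 3,
    }
    for problem in check_quality(current) or ["all stated quality objectives met"]:
        print(problem)
```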

Research suggests that success rates rise dramatically when Agile and DevOps methodologies are used, in part because they minimize decision latency.

Legacy software

The operation and maintenance of legacy software contributed $520 billion to the CPSQ.

While this is down from $635 billion in 2018, it still represents nearly a third of the US's total IT expenditure in 2020.

The report explains: "CPSQ in legacy systems is harder to address because such systems automate core business functions and modernization is not always straightforward. After decades of operation, they may have become less efficient, less secure, unstable, incompatible with newer technologies and systems, and more difficult to support due to loss of knowledge and/or increased complexity or loss of vendor support. In many cases, they represent a single point of failure risk to the business."

The report recommends strategies focused on overcoming the lack of understanding and knowledge of how the legacy system works internally. Any tool that helps identify weaknesses, vulnerabilities, failure symptoms, defects, and improvement targets will be useful.
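
As a small illustration of that kind of tooling, the sketch below uses Python's standard-library ast module to flag two common red flags in legacy Python code, bare except clauses and eval() calls. It is an illustrative starting point assuming a Python codebase, not a tool referenced in the report.

```python
# Illustrative scan of legacy Python source using the standard library's ast
# module. It flags bare "except:" blocks and eval() calls, two common
# maintainability and security red flags; real tooling would go much further.
import ast
import sys

def scan(path: str) -> list:
    """Return human-readable findings for one Python source file."""
    with open(path, "r", encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"{path}:{node.lineno}: bare 'except:' can hide failure symptoms")
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(f"{path}:{node.lineno}: eval() call is a potential vulnerability")
    return findings

if __name__ == "__main__":
    for source_file in sys.argv[1:]:
        for finding in scan(source_file):
            print(finding)
```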

Conclusion

"As poor software quality persists on an upward trajectory, the solution remains the same: prevention is still the best medicine. It's important to build secure, high-quality software that addresses weaknesses and vulnerabilities as close to the source as possible," said Joe Jarzombek, Director for Government and Critical Infrastructure Programs at Synopsys. "This limits the potential damage and cost to resolve issues. It reduces the cost of ownership and makes software-controlled capabilities more resilient to attempts of cyber exploitation."

Methodologies such as Agile and DevOps have supported the evolution of software development, whereby developers apply enhancements as small, incremental changes that are tested and committed into production daily, hourly, or even moment by moment. This results in higher velocity and more responsive development cycles, but not necessarily better quality.

Just as DevSecOps aims to improve the security mechanisms around high-velocity software development, the emerging practice of DevQualOps encompasses activities that assure an appropriate level of quality across the Agile, DevOps, and DevSecOps lifecycle.
