
Software Development and Testing Faces Endemic Constraints, CA Survey Says

More than half (60 percent) of respondents in the North American ‘Business Benefits of Service Virtualization’ study conducted by CA Technologies say that customer-facing applications are delayed by endemic constraints in the software development and testing environment, including limited access to infrastructure, databases, and undeveloped applications.

To compound the situation, applications are often released with reduced functionality, according to 70 percent of those surveyed.

The vast majority of the 200 in-house software development executives and managers from large (US $1 Billion+) enterprises surveyed are aware of the significant consequences that result from endemic constraints across software development and testing. This includes loss of reputation (96 percent) and customers switching to competitors (93 percent).

“North American businesses are under pressure to deliver increasingly complex applications, and at a much faster rate than ever before to keep pace with customer demands,” said Shridhar Mittal, general manager, Service Virtualization, CA Technologies. “Unfortunately, IT budgets are not increasing at the rate of change inherent in today’s highly distributed composite applications. This causes serious constraints to software development, resulting in delays and failures in delivering new software features to market.”

Delays in application development and testing are negatively impacting businesses, with respondents reporting reduced functionality (74 percent) and late delivery of new customer-facing applications (60 percent). In part, this is due to increased pressure and demand for highly sophisticated applications: 66 percent of respondents say their approach to software development and testing will have to change as a result of massive growth, particularly in mobile.

The pressures highlighted by this independent study point to the need for improved development processes and faster, more effective testing. North American respondents also identified the potential benefits of updated approaches, including increased quality (81 percent), faster time-to-market (76 percent) and reduced costs (71 percent).

Service Virtualization addresses these challenges by letting teams develop and test an application against a virtual service environment configured to imitate the real production environment. The behavior and data of these virtual services can be changed easily to validate different scenarios.
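To make the idea concrete, the sketch below is a minimal, hypothetical virtual service written in Python: a stand-in for a downstream dependency (a fictional payments API) whose canned responses can be switched per test scenario. It illustrates the general technique only, not CA's product, and every endpoint, port, and scenario name in it is assumed for the example.

```python
# Minimal sketch of a "virtual service": a stub that imitates a downstream
# dependency (a fictional payments API) so the application under test never
# needs access to the real production system.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned scenario data, swapped out per test run: happy path, decline, outage.
SCENARIOS = {
    "happy":    {"status": 200, "body": {"payment": "approved"}},
    "declined": {"status": 200, "body": {"payment": "declined"}},
    "outage":   {"status": 503, "body": {"error": "service unavailable"}},
}
ACTIVE_SCENARIO = "happy"  # tests flip this between runs


class VirtualPaymentService(BaseHTTPRequestHandler):
    def do_POST(self):
        # Return whatever the active scenario prescribes, regardless of input.
        scenario = SCENARIOS[ACTIVE_SCENARIO]
        payload = json.dumps(scenario["body"]).encode()
        self.send_response(scenario["status"])
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, fmt, *args):
        pass  # keep test output quiet


def start_virtual_service(port=8099):
    """Run the stub on localhost in a background thread and return the server."""
    server = HTTPServer(("localhost", port), VirtualPaymentService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


if __name__ == "__main__":
    print("Virtual payment service listening on http://localhost:8099")
    HTTPServer(("localhost", 8099), VirtualPaymentService).serve_forever()
```

In practice, the application under test is simply pointed at the stub's address instead of the real dependency, and the active scenario is switched between runs to exercise approvals, declines, and outages without touching production infrastructure.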

“This research follows a European study conducted in July 2012 in which 32 percent of respondents revealed that they were expected to deliver and manage four to seven releases a year compared to 53 percent in North America,” said Ian Parkes, Managing Director, Coleman Parkes Research. “Even more surprising, 75 percent of respondents across North America and Europe reported they were seeking additional budget to pay for more application development man-hours, when we know that additional labor is not in fact the ideal solution.”

According to the study, “These survey results suggest that development managers often bring new applications or services from testing environments into production without complete insight into how their integrated applications might fail. For engineers, understanding failure modes is a critical part of the job, yet according to this study, 69 percent did not have this insight on a consistent basis. This is an alarming prospect for any board giving the green light for new software projects, especially those that impact the customer. It is also concerning that only nine percent have comprehensive insight into how complex integrated applications could break in production.”

About the Study

The independent study was conducted by Coleman Parkes Research in September 2012, underwritten by CA Technologies, and includes feedback from 200 in-house software development executives and managers from large enterprises with revenues of more than US$1 billion in the US and Canada. It is the second phase of a similar July 2012 study conducted in Europe.

