While adoption of continuous integration (CI) is on the rise, software engineering teams remain unable to take a zero-tolerance approach to software failures, a gap that costs enterprise organizations billions of dollars annually, according to a quantitative study conducted by Undo and a Cambridge Judge Business School MBA project.
"Every company is a software company. The ability for engineering teams to deliver high quality software at velocity is the difference between companies that gain a competitive edge versus those that fall behind," said Undo CEO Barry Morris. "The next phase of CI will be about making defect resolution bounded, efficient and less skills-dependent. Organizations that evolve with CI will be able to resolve bugs faster, accelerate software delivery and reduce engineering costs."
The research yielded three key findings:
1. Adoption of CI best practices is on the rise
88% of enterprise software companies say they have adopted CI practices, compared to 70% in 2015.
More than 50% of businesses surveyed report deploying new code changes and updates at least daily, with 35% reporting hourly deployments.
2. Reproducing software failures is impeding delivery speed
41% of respondents say getting the bug to reproduce is the biggest barrier to finding and fixing bugs faster; and 56% say they could release software 1-2 days faster if reproducing failures wasn’t an issue.
Software engineers spend an average of 13 hours fixing a single software failure from their backlog.
3. Failing tests cost the enterprise software market $61 billion annually
This equals 620 million developer hours a year wasted on debugging software failures.
Although CI adoption is becoming ubiquitous, test suites are still plagued by a growing backlog of failing tests. Failures in integration and automated tests cause bottlenecks in the development pipeline, and substantially increase engineering costs.
The study further identifies failure reproducibility as a major blocker: when teams cannot reproduce issues, engineering slows down and software changes cannot be released at pace.
Software failure replay offers a way to fully realize the benefits of CI by enabling engineering teams to reproduce and fix software bugs faster. By eliminating the guesswork in defect diagnosis, development teams can reduce Mean-Time-to-Resolution (MTTR), resulting in considerable cost savings.