Metrics-oriented thinking is key to continuous improvement, and a core tenet of any agile or DevOps philosophy. Metrics are factual: once agreed upon, they become the shared facts that drive discussions and methods, and they enable teams to collaborate on decisions that contribute to business outcomes.
DevOps, although now a common job title, is not a role or a person, and there is no playbook or rule set to follow. Instead, DevOps is a philosophy that spans people, process, and technology. The goal is to release better software more rapidly and keep that software up and running by joining development and operational responsibilities together.
DevOps also aims to improve business outcomes, but selecting the right metrics and collecting the underlying data is challenging. Continuous improvement requires continuous change, measurement, and iteration. The agreed-upon metrics drive this cycle while also creating insights for the broader organization.
Data-Driven DevOps
A successful DevOps transformation focuses on a few areas. To start, a culture change is needed between development and operations teams. Measurement is another core tenet: to accomplish a true transformation, it's important to baseline the current situation, regularly review metrics that indicate improvement or degradation, and treat those measurements as facts when driving decisions. These metrics should span several areas that may have been considered disjoint in the past.
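To make this concrete, here is a minimal sketch of treating measurements as facts, computing two commonly tracked DevOps metrics (deployment frequency and change failure rate) from a hypothetical list of release records. The record fields and values are illustrative assumptions, not any specific tool's schema.

```python
from datetime import date

# Hypothetical release log; in practice this would come from a CI/CD system.
releases = [
    {"deployed_on": date(2023, 5, 1), "caused_incident": False},
    {"deployed_on": date(2023, 5, 3), "caused_incident": True},
    {"deployed_on": date(2023, 5, 8), "caused_incident": False},
]

days_observed = (max(r["deployed_on"] for r in releases)
                 - min(r["deployed_on"] for r in releases)).days or 1

# Deployment frequency: releases per day over the observed window.
deployment_frequency = len(releases) / days_observed

# Change failure rate: share of releases that triggered an incident.
change_failure_rate = sum(r["caused_incident"] for r in releases) / len(releases)

print(f"Deployment frequency: {deployment_frequency:.2f}/day")
print(f"Change failure rate: {change_failure_rate:.0%}")
```

Once numbers like these are produced the same way every sprint, they can serve as the agreed-upon facts that discussions and decisions are anchored to.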
To help DevOps teams identify possible metrics and understand how those metrics relate to key initiatives, Gartner recently released a useful metrics pyramid for DevOps:
Many of these metrics span development, operations, and, most importantly, the business. They measure efficiency, quality, and velocity. However, Gartner points out that the hardest part is often defining which metrics we can collect, act upon, audit, and use to drive a lifecycle.
The second challenge (which Gartner does not discuss) is how these metrics should be linked together to offer meaningful insights. If the metrics do not allow linkage between a release and business performance, attribution gaps remain. Unfortunately, many enterprises today analyze metrics that lack any linkage or relationship to one another.
To help with these relationships, context is critical. Without context, metrics can be open to interpretation, especially as you move up the Gartner pyramid. So it's crucial to be able to link metrics together and attribute earnings or cash flow to a release or change that improves the application.
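As a hedged sketch of what such linkage can look like, the snippet below annotates a business KPI series with the release that was live at each point, so a KPI movement can be attributed to a specific change. The field names, versions, and conversion figures are all illustrative assumptions.

```python
from datetime import datetime

releases = [
    {"version": "2.3.0", "released_at": datetime(2023, 5, 1)},
    {"version": "2.4.0", "released_at": datetime(2023, 5, 8)},
]

kpi_samples = [  # e.g., a daily checkout conversion rate
    {"at": datetime(2023, 5, 5), "conversion": 0.031},
    {"at": datetime(2023, 5, 10), "conversion": 0.036},
]

def live_release(ts):
    """Return the most recent release deployed at or before ts."""
    candidates = [r for r in releases if r["released_at"] <= ts]
    return max(candidates, key=lambda r: r["released_at"]) if candidates else None

# Join each business measurement to the release that was live at the time.
for sample in kpi_samples:
    release = live_release(sample["at"])
    version = release["version"] if release else "unknown"
    print(f'{sample["at"]:%Y-%m-%d}: conversion={sample["conversion"]:.1%} '
          f'under release {version}')
```

The join key here is simply time, but the same idea applies to richer context such as version tags or deployment markers carried on every metric.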
Additionally, metrics should drive visibility inside the application without creating an additional burden for developers. With automated instrumentation, metric data can be produced consistently and comprehensively across all teams. This is extremely beneficial because teams often have different ways of collecting data, which traditionally leads to inconsistencies. Measurements should be collected consistently from both the application's components and the business outcomes the application is meant to deliver.
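One common way to standardize this kind of instrumentation is the OpenTelemetry metrics API; the minimal sketch below shows the idea in Python. The meter name, metric name, attribute key, and handler function are illustrative assumptions, not prescribed conventions.

```python
from opentelemetry import metrics  # opentelemetry-api package

# Teams share one instrumentation convention instead of ad-hoc collectors.
meter = metrics.get_meter("checkout-service")

orders_processed = meter.create_counter(
    "orders_processed",
    unit="1",
    description="Orders handled by the checkout service",
)

def handle_order(order_id: str, release_version: str) -> None:
    # ... business logic would run here ...
    # Tagging the release version lets the measurement be linked back
    # to a specific change, as discussed above.
    orders_processed.add(1, {"deployment.version": release_version})
```

In a real deployment, a MeterProvider and exporter would be configured once (or supplied by an auto-instrumentation agent), so individual teams emit data the same way without extra developer effort.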