
Change Management Part 2: Metrics, Best Practices and Pitfalls

Dennis Drogseth

This is Part 2 of a three-part series on change management. In Part 1, I addressed the question, “What is change management?” and examined change management from the perspectives of both process and use case. In this blog, I’ll look at what it takes to make change management initiatives succeed — including metrics and requirements, best practice concerns, and some of the more common pitfalls. Much of the content is derived from past EMA consulting experience as reflected in our book, CMDB Systems: Making Change Work in the Age of Cloud and Agile.

Start with Change Management Part 1

Metrics and Requirements

Whether you’re targeting lifecycle endpoint management, data center consolidation, or the move to cloud, it’s important to have some way to measure your progress. These measurements might address operational efficiencies, impacts on the infrastructure and its supported applications, and even impacts on your service consumers and business outcomes. Some of the high-level metrics EMA analysts recommend include:

■ Reduction in number of change collisions

■ Reduction in number of failed changes and re-dos

■ Reduced cycle time to review, approve, and implement changes

■ Reduced time to validate that implemented changes are not service disruptive

■ Number of changes that do not deliver expected results

In one consulting engagement in particular, we also saw the following:

■ Degree of conformance to current software licensing agreements

■ Exceptions detected during configuration audits (i.e., when actual state does not match the authorized state)

■ Cost savings for acquisition and retirement of assets

■ Faster ability to provide services

Of course, these are just a few examples, and these metrics are primarily starting points. In other words, they are not yet the fully fleshed-out requirements from which you can create the very specific, and hence more measurable, objectives you will need to move forward.
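As an illustrative sketch only, a few of the high-level metrics above (failed-change rate, change collisions, review-to-implementation cycle time) can be rolled up from change records. The record fields below are hypothetical, not a standard ITSM schema:

```python
from datetime import datetime

# Hypothetical change records; field names are illustrative only.
changes = [
    {"id": "CHG-1", "opened": "2024-03-01", "implemented": "2024-03-04",
     "status": "success", "collided": False},
    {"id": "CHG-2", "opened": "2024-03-02", "implemented": "2024-03-09",
     "status": "failed", "collided": True},
    {"id": "CHG-3", "opened": "2024-03-05", "implemented": "2024-03-06",
     "status": "success", "collided": False},
]

def change_metrics(records):
    """Summarize a few of the high-level change metrics discussed above."""
    total = len(records)
    failed = sum(1 for r in records if r["status"] == "failed")
    collisions = sum(1 for r in records if r["collided"])
    # Cycle time: days from change being opened to being implemented.
    cycle_days = [
        (datetime.fromisoformat(r["implemented"])
         - datetime.fromisoformat(r["opened"])).days
        for r in records
    ]
    return {
        "failed_change_rate": failed / total,
        "change_collisions": collisions,
        "avg_cycle_time_days": sum(cycle_days) / total,
    }

print(change_metrics(changes))
```

Tracking these same roll-ups over successive reporting periods is what turns the starting-point metrics into trend lines you can set concrete targets against.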

Going from high-level metrics, such as those above, to more detailed requirements typically means understanding ownership, process, and impact specifics. One example cited in our book involved documented costs in terms of phone time spent in the service desk trying to find the right individual in operations to handle incident-related issues, or what they called “mean time to find someone (MTTFS).” In this case, a CMDB-related initiative saved them nearly $100,000 per year in personnel costs of time spent on the phone alone. The same MTTFS metric might apply to requests involving changes, such as those made in response to service requests or onboarding new end users — where a mixture of IT and non-IT stakeholders is often required for approval and review. Knowing who owns a specific problem for a specific configuration item (CI) is worth its weight in gold.
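To make the underlying arithmetic concrete: MTTFS is simply an average of per-ticket search times, and the annualized cost follows from ticket volume and a loaded labor rate. The figures below are illustrative assumptions, not the numbers from the engagement:

```python
def mttfs_minutes(call_durations_min):
    """Mean time to find someone (MTTFS): average minutes per ticket
    spent on the phone locating the right owner."""
    return sum(call_durations_min) / len(call_durations_min)

def annual_search_cost(avg_minutes, tickets_per_year, loaded_rate_per_hour):
    """Annualized personnel cost of that search time."""
    return (avg_minutes / 60) * tickets_per_year * loaded_rate_per_hour

# Illustrative inputs (hypothetical, not the engagement's actual data):
durations = [12, 8, 20, 15, 10]           # minutes spent searching, per ticket
avg = mttfs_minutes(durations)            # 13.0 minutes
cost = annual_search_cost(avg, 6000, 75)  # 13/60 * 6000 * 75 = $97,500
print(f"MTTFS: {avg:.1f} min; annual search cost: ${cost:,.0f}")
```

Even at these modest assumed rates, the annualized figure lands near the ~$100,000 order of magnitude cited above, which is why shaving minutes off MTTFS is worth measuring.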

Some Common Change Management Issues

Developing an appropriate set of metrics and requirements typically involves dialog with relevant stakeholders and executives. While it might be nice to simply legislate your change management initiative with a few emails, EMA consulting experience consistently underscores the need for two-way dialog in which stakeholders are both informed and listened to. These dialogs or interviews not only help to pave the way for new and better ways of managing change, they will usually shed light on other issues that, once documented, can help your IT organization move forward in any number of (sometimes surprising) ways.

Scope Creep: While you want enthusiasm for going forward, and in fact you’ll probably want to target your more enthusiastic stakeholders, many change management initiatives can get bogged down by trying to do too much at once. Two of my favorite quotes from our consulting reports along these lines are:

“The biggest issue now is scope creep. Trying to make everyone happy at this point is like trying to rebuild the Titanic from the bottom up.”

Another change management initiative was more prescriptive: “We’re managing scope creep by being incremental in how we’re driving our deployment—going forward with small steps on a regular schedule.”

Toolset Ownership: Managing changes well requires attention to technologies, both those already in use and new technology investments, as I’ll discuss in my next blog. But making the right technology choices can often become a political as well as a technology challenge. EMA consulting has seen literally hundreds of tools addressing monitoring, inventory, configuration, and change management in larger enterprises, each affiliated with its own determined set of owners. This can create problems when you’re trying to promote more cross-domain capabilities for discovery, automation, and configuration updates. So once again, dialog, leadership, and attention to consistent processes are key. Two quotes from EMA consulting serve to underscore this point:

“We are territorial and don’t want to replace our tools.”

“We have issues with toolset ownership. There is no confidence that others will do the work. So, you do it yourself.”

Issues Surrounding Standards and Best Practices: Whether you’re seeking to leverage processes defined in the IT Infrastructure Library (ITIL) or other formalized best practices or you’re simply documenting your own, trying to establish good change management processes across a heterogeneous and often siloed set of stakeholders may well be your biggest single challenge. Even when good technology is in place, trying to get the necessary mix of players to use it well and consistently is not often easy, especially without some level of executive sponsorship. Here are a few additional quotes from EMA consulting reports to provide you with some process-related examples:

“There are over 5000 change requests per year, and all of them are marked ‘high priority.’”

“Change control needs to hold people accountable if it is to be effective. No one questions why.”

“I believe in standards, as long as they’re mine.”

And finally, something positive: “We had an opportunity to reinvent change management in our organization and go from a project management approach that was very ambivalent when it came to execution to a much more enforceable approach that supported clear ownership and led to increased levels of automation.”

Read Change Management Part 3

Dennis Drogseth is VP at Enterprise Management Associates (EMA).
