
Measuring the Payoff of Your ITIL Investment

While organizations generally agree that ITIL-based process improvements are a "good" thing, executives often struggle to measure quantifiable benefits of investment in the framework. We have found that an effective approach to ITIL is characterized by a manageable yet scalable implementation, a focus on people and skills issues, and ongoing measurement that takes a comprehensive view of the enterprise-wide impact of process maturity.

ITIL initiatives are often initially confined to individual business units or infrastructure towers. The idea is to conduct a "pilot study" that can then be rolled out on a broader basis. In practice, however, such initiatives often lack a well-defined long-term plan for extending process improvement, as well as specific goals or milestones to track.

Another problem is that ITIL, by definition, requires an integrated, enterprise-wide approach – the underlying philosophy is to understand how problems in one area impact other areas and permeate the organization, and then to use that analysis to take corrective action. A narrow pilot study is therefore a poor choice for a proof of concept.

At the other extreme, some organizations get overly ambitious and endeavor to roll out all ITIL processes concurrently across the entire enterprise without prior expectation setting and buy-in. When reality sets in and the complexity of implementing changes and gauging cause-and-effect impacts across a global organization becomes apparent, the initiative quickly loses momentum. This all-at-once approach not only fails to deliver benefits, it can actually result in a decline in overall process maturity. As a result, the viability of the ITIL framework itself is often called into question.

Start Small, Think Big

An effective ITIL implementation aims to create a seamless organization, unrestricted by pockets, units or departments, each of which has its own non-standard way of doing things.

As such, an ITIL initiative typically requires addressing four types of organizational silos:

- Geographic silos – still a reality for most IT organizations – that can hinder process consistency and communications.

- Group-based silos, comprising multiple development and/or support teams, that provide redundant services leading to higher costs, conflicts, and project delays.

- Technology silos that add complexity and inhibit change, as applications running on multiple distributed platforms require managing dependencies between the platforms and synchronizing work on each platform.

- Functional silos, put in place to address complexities related to project management, architecture, database administration, testing, and other areas, that can impede coordination and communication.

Top-performing businesses address the silo challenge by balancing focus and detail with a holistic perspective and a long-term plan for extending ITIL across the enterprise. Specifically, an effective approach is characterized by a phased, process-by-process implementation that begins with one ITIL process – Change Management, for example – and extends from the IT organization across the business.

Once the initial process is rolled out, integrated, and assessed, a second ITIL process (Incident Management, perhaps) can be similarly rolled out, followed by a third (Release Management) and so forth.

Through this approach, repeatable leading practice processes and a supporting knowledge base can be established to facilitate communication between group members and across groups.

From this perspective, a pilot-based approach can be effective, if implemented within the context of a long-term process improvement plan that includes ongoing measurement and communication of benefits. This approach makes it possible to combine a phased or step-by-step implementation, while at the same time gauging the enterprise-wide impact of the changes.

Measuring Results

A benchmark analysis of the operational environment can identify the downstream changes that result from specific process enhancements as ITIL maturity grows. Ideally, efficiency, productivity, and service availability are baselined before the ITIL initiative begins. This baseline can then be used to more accurately quantify improvements resulting from process changes.
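The baseline-then-measure idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration – the metric names and figures are invented for the example, not ISG benchmark data – showing how a pre-initiative baseline lets you express post-rollout measurements as percentage improvements:

```python
# Hypothetical sketch: quantifying improvement against a pre-ITIL baseline.
# All metric names and values are illustrative only.

baseline = {  # measured before the ITIL initiative begins
    "mean_time_to_resolve_hrs": 8.0,
    "changes_causing_incidents_pct": 12.0,
    "service_availability_pct": 99.0,
}

current = {  # re-measured after a process (e.g., Change Management) is rolled out
    "mean_time_to_resolve_hrs": 5.0,
    "changes_causing_incidents_pct": 7.0,
    "service_availability_pct": 99.5,
}

# For "lower is better" metrics, improvement is the relative reduction;
# for availability, it is the relative gain.
LOWER_IS_BETTER = {"mean_time_to_resolve_hrs", "changes_causing_incidents_pct"}

def improvement_pct(metric: str) -> float:
    """Percentage improvement of `current` over `baseline` for one metric."""
    b, c = baseline[metric], current[metric]
    delta = (b - c) if metric in LOWER_IS_BETTER else (c - b)
    return round(100.0 * delta / b, 1)

for metric in baseline:
    print(f"{metric}: {improvement_pct(metric):+.1f}%")
```

The point is not the arithmetic but the discipline: without the `baseline` snapshot taken before the initiative, the `current` numbers cannot be attributed to the process change at all.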

That said, no data exists to show a clear correlation between ITIL compliance and cost savings, and there's no "ITIL maturity equals x% savings" formula. The ITIL framework is designed to improve quality and efficiency by enhancing an organization's ability to manage activity within the IT function and the IT function's interface with the business. So, while total costs might not change, or savings might not be measurable in concrete dollar terms, ITIL process improvement can allow IT to spend less time fighting fires and more time providing value to the business by developing new applications and deploying new technologies.

In this context, a longer-term, big-picture view of ITIL is most effective – one that recognizes that, ultimately, implementing rigor and discipline will deliver benefits to the business.

ABOUT Chris Pfauser and Cindy LaChapelle

Chris Pfauser is an ISG Principal Consultant with more than 20 years of experience in management consulting and operational improvement. He specializes in service management and process optimization and works with global organizations in a variety of industry sectors.

ISG Principal Consultant Cindy LaChapelle has over 25 years of industry experience. Her areas of expertise include sourcing strategy development, data and storage assessment and lifecycle management, and backup, recovery, and data protection strategies. Both Pfauser and LaChapelle hold ITIL v3 Foundation certifications.

Related Links:

www.isg-one.com

ISG White Paper: ITIL Benefits - Where's the Beef?
