4 Tips to Increase Velocity in Your ITIL-Based Change Process
September 10, 2012
Russ Miller

For those of us who handle IT change requests on a daily basis, the process can seem as onerous as a Sisyphean task. In this case, rather than a boulder rolling back down the hill, it is the backlog of RFCs that seem to grow every time you complete a request. Let’s face it - the requests are not going to stop coming. The business is trying to change faster, so they need IT to implement change faster. In the end, it is all about doing as much as you can with what you have. Following ITIL best practices and using a solid change management tool goes a long way, but how can the whole task be made less Sisyphean?

The answer may come from borrowing some of the lessons learned from application development. It turns out that many of the same agile practices adopted by programmers over the last decade can greatly improve the continuity, speed and predictability of your organization's IT Change Management – leveraging, not replacing, your existing ITIL best practices and tools.

When you view the never-ending flow of RFCs as roughly equivalent to the flow of feature requests that an application development team receives, and recognize that the goals are the same - to implement the changes as quickly as possible, as close to the user’s expectations as possible, and in a predictable amount of time - then it becomes clear there is much overlap between processing change requests and turning around new features in a software application.

Of course, some of those change requests are for application development, and those clearly lend themselves to agile practices. But as it turns out, agile methodology can help IT manage all types of requests.

It is possible to apply agile practices to the flow of IT change requests and see these same benefits. Here are some specific ideas on how to implement agile methodology without abandoning ITIL best practices or your current change management tool:

1. Use CAB Meetings to Prioritize Backlog

Bring an agile mindset to the CAB. Instead of holding weekly or monthly CABs, have daily scrums to prioritize change requests. Place the less urgent and non-routine requests on a backlog. Let the responsible teams decide how much of the backlog they can achieve in a short period of time, say two weeks.

In your Change Management tool, create views that clearly indicate which requests are on the backlog, and which are in process.
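To make the idea concrete, here is a minimal sketch of those two views, assuming a hypothetical export of RFC records (the field names and priority scheme are illustrative, not from any particular tool):

```python
# Hypothetical RFC records exported from a change management tool.
# Lower "priority" number means more urgent.
rfcs = [
    {"id": "RFC-101", "status": "in_process", "priority": 1},
    {"id": "RFC-102", "status": "backlog", "priority": 3},
    {"id": "RFC-103", "status": "backlog", "priority": 2},
]

# Two simple views: work in process, and the backlog ordered by priority.
in_process = [r for r in rfcs if r["status"] == "in_process"]
backlog = sorted(
    (r for r in rfcs if r["status"] == "backlog"),
    key=lambda r: r["priority"],
)

print([r["id"] for r in backlog])  # most urgent backlog item first
```

In practice these would be saved views or filters inside the change management tool itself; the point is simply that "backlog" and "in process" are distinct, clearly ordered queues.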

2. User Stories with Story Points Help Document RFCs

Scrutinize the change requests to ensure they are properly broken down, the same way agile software developers break down feature requests into User Stories with Story Points assigned and weighted.

Leveraging the customizability of your change management tool, re-label the Description field as User Story and add a field to contain the Story Points. For most IT organizations, the change requests are relatively short-lived and independent of each other. But if your organization has enough large, complex requests related to a given system or project, following the agile technique of grouping them into a sprint may make sense. You might even form two teams: one that manages the shorter-lived requests as a constant stream, perhaps using a Kanban approach, while the larger requests flow to a team using a scrum approach.

3. Change Manager becomes Scrum Master

The Change Manager acts as the Scrum Master, overseeing quick daily standup meetings to talk about progress on the tickets in process. The Scrum Master's mantra: plan a little, do a little, test a little - create a tight feedback loop.

In a larger organization, where the Change Manager cannot possibly have the bandwidth to be the Scrum Master for all of the supported services, the technical owner of a business service may need to act as the Scrum Master, with the Change Manager overseeing the various business Scrum Masters.

It is important that someone fills the role of Scrum Master in order to ensure that the amount of work in process is always minimal (agile methodology is all about finishing what you start and delivering each item before you bite off another). As such, it is critical for the Scrum Master to monitor the batch of changes in progress on a day-to-day basis and be available to help clear roadblocks for those implementing the changes.

4. Continuous Improvement to Optimize Agility

By assigning a size in the form of Story Points and by minimizing the amount of work in progress at a given time, you can judge the velocity of successful changes and provide more transparency into what the bottlenecks are.

As requests are completed, the Story Points assigned to the requests represent a quantity of work completed. The average rate of delivering Story Points over time is the velocity. By charting changes in velocity over time, you can better manage your team’s bandwidth and better predict the time required to work down the backlog of requests.
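The arithmetic behind this is simple. As a sketch, with hypothetical point totals for four two-week periods:

```python
# Story Points completed in each two-week period (hypothetical figures).
completed_points = [21, 18, 24, 27]

# Velocity = average Story Points delivered per period.
velocity = sum(completed_points) / len(completed_points)

# Given the current backlog's total Story Points, velocity yields a
# rough forecast of how many periods it will take to work it down.
backlog_points = 90
periods_to_clear = backlog_points / velocity

print(velocity)          # 22.5 points per period
print(periods_to_clear)  # 4.0 periods, i.e. about eight weeks
```

The forecast is only as good as the consistency of your Story Point estimates, which is why the estimates should be weighted and reviewed the same way a development team calibrates theirs.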

Increasing the velocity becomes the challenge. Agile prescribes various approaches to expose and eliminate the bottlenecks holding velocity back. For a continuous stream of requests that are not batched into sprints, the best visualization is likely a cumulative flow diagram. If you are batching your requests into sprints, then a scrum approach using a burn-down chart may work better.

Either of these approaches can be diagrammed in Excel by exporting the data you collect in your change management tool.
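As a sketch of that export step, the raw data behind a cumulative flow diagram is just a count of requests in each state per day. Assuming a hypothetical CSV export with one row per request per day (the column names are illustrative):

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical export from a change management tool: one row per
# request per day, recording the request's state on that date.
export = """date,state
2012-09-03,backlog
2012-09-03,in_process
2012-09-03,done
2012-09-04,backlog
2012-09-04,done
2012-09-04,done
"""

# Count requests in each state per day -- these (date, state) totals
# are the series you would chart as stacked areas in Excel.
counts = Counter()
for row in csv.DictReader(StringIO(export)):
    counts[(row["date"], row["state"])] += 1

print(counts[("2012-09-04", "done")])  # completed requests on that date
```

If the "done" band grows steadily while "in process" stays thin, velocity is healthy; a widening "in process" band points at the bottleneck.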

In summary, attacking that seemingly ever-growing backlog of RFCs requires exposing the bottlenecks that limit the speed and predictability with which requests can be implemented. One way is to borrow agile software development techniques and overlay them on your existing ITIL processes and tools.

ABOUT Russ Miller

Russ Miller is the CTO of SunView Software, Inc., an ITSM tools vendor he co-founded. He has more than 20 years of experience leading software product development from concept to delivery for companies like IBM, Compuware, Quark, and Intuit. He was an early advocate and adopter of agile software practices and has been immersed in ITIL over the last decade as part of developing the ChangeGear ITSM solution.
