Managing Data Center Costs with BSM
January 11, 2011
Rich Ptak

IT budgets often appear inflated because costs that should be shared, or that belong elsewhere, are arbitrarily allocated and loaded onto them. Unfortunately, old habits die hard, and the misallocation of costs continues to the detriment of both the organization and mainframe computing. The result is business decisions that produce more expensive, less efficient computing operations. This can be remedied by using today’s automated cost tracking and reporting tools in conjunction with a program of cost optimization.
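
To see how arbitrary loading distorts the picture, consider a minimal sketch with hypothetical numbers (not drawn from any specific tool or customer) comparing an even split of shared overhead against a usage-weighted allocation:

```python
# Hypothetical example: allocating $300,000 of shared overhead (facilities,
# network, support staff) across three platforms.
platform_usage = {          # measured share of shared-resource consumption
    "mainframe": 0.20,
    "x86_virtualization": 0.55,
    "storage": 0.25,
}
shared_overhead = 300_000

# An arbitrary even split loads each platform with the same charge,
# regardless of what it actually consumes.
flat_allocation = {p: shared_overhead / len(platform_usage) for p in platform_usage}

# A usage-weighted allocation charges each platform in proportion to measured use.
usage_allocation = {p: shared_overhead * share for p, share in platform_usage.items()}

for platform in platform_usage:
    print(f"{platform:20s} flat: ${flat_allocation[platform]:>10,.0f}   "
          f"usage-based: ${usage_allocation[platform]:>10,.0f}")
```

In this illustration, the flat split charges the mainframe $100,000 of overhead against a measured $60,000 share, making it look costlier than it is and skewing platform decisions against it.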

The Cost Reduction Trap

The success of IT operations is all about the cost-effective, performance-compliant delivery of the specific services that the enterprise, business or customer operations need and demand. A critical, and all too frequently poorly implemented, element of that delivery is managing the costs of data center operations. This is especially true during tough economic times, when the pressure is on to review and reduce costs to improve bottom-line performance as quickly as possible. All too frequently, the effort becomes an operational trap: attention almost immediately degrades into highly visible, sweeping budget cuts, with little real analysis of their impact on operations and service delivery. The history of IT cost reporting has been notable more for casualness than accuracy.

An aggressive cost reduction strategy attempts a fast return by identifying and cutting big-ticket budget items and removing staff. Such cuts are usually compounded by poor decision-making based on inaccurate cost data. These efforts fail to deliver lasting savings because the rush to cut low-hanging fruit leaves less obvious inefficiencies and process problems unaddressed, continuing to undercut performance unnoticed and uncorrected.

Misapplied infrastructure and staffing cuts degrade service performance and delivery levels. The result is alienated customers (users), reduced revenues and increased costs due to operational inefficiencies.

Most cost reduction programs fail for one or more of these reasons:

1. Lack of accurate data on costs and returns, resulting partly from bad cost data and partly from a poor understanding of operational processes.

2. The tactical nature of programs focused on high-profile items, which chase symptoms rather than identifying and eliminating systemic operational and utilization problems.

3. Operating with a limited vision that focuses on silos of operational costs rather than identifying cross-functional problems caused by outmoded policies, processes and procedures.

Cost Optimization with BSM

Successful Business Service Management (BSM) includes a cost management program in which cost optimization, not simply cost cutting, is the major focus and guiding principle. The object of the exercise is to make sure the best return is obtained from every dollar spent, while ensuring optimal utilization of data center infrastructure. The ‘return’ being measured is the contribution to and support of IT’s successful delivery of business-critical services.

Cost optimization requires an accurate understanding of the cost of service delivery. Bad data risks cutting critical IT services or making damaging infrastructure changes that undermine IT’s contribution to business success. Today’s cost tracking tools can accurately relate IT infrastructure costs to service delivery, including customizable reports on who uses which infrastructure, for how long, and to what purpose and effect. Such data, combined with automated performance management tools, yields longer-lasting, more effective results than any aggressive but misfocused cost reduction effort.
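
As a rough illustration of the kind of showback reporting involved (a minimal sketch with hypothetical services, rates and usage records, not the API or output of any particular BSM product), the idea is to roll measured resource usage up into a cost view per service and per consumer:

```python
from collections import defaultdict

# Hypothetical usage records: (service, business unit, resource, quantity used)
usage_records = [
    ("order_processing", "sales",   "mainframe_cpu_hours", 120),
    ("order_processing", "sales",   "storage_tb_months",    40),
    ("payroll",          "finance", "mainframe_cpu_hours",  30),
    ("reporting",        "finance", "storage_tb_months",    60),
]

# Hypothetical unit rates derived from tracked infrastructure costs.
unit_rates = {"mainframe_cpu_hours": 85.0, "storage_tb_months": 22.0}

# Roll measured usage up into cost per service and per business unit.
cost_by_service = defaultdict(float)
cost_by_business_unit = defaultdict(float)
for service, business_unit, resource, quantity in usage_records:
    cost = quantity * unit_rates[resource]
    cost_by_service[service] += cost
    cost_by_business_unit[business_unit] += cost

print("Cost by service:", dict(cost_by_service))
print("Cost by business unit:", dict(cost_by_business_unit))
```

Commercial tools add metering, rate management and report customization on top of this basic roll-up, but the principle is the same: costs are attributed to the services and consumers that actually drive them.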

Accurate data relating costs to services allows you to build a comprehensive view of operations and identify savings opportunities. An end-to-end view of enterprise IT operations provides an understanding of the processes, policies and interactions involved in the delivery of business services. This enables the identification of outmoded, unnecessary or conflict-causing operational inefficiencies for change or elimination.

The Final Word: Success in business is not a matter of simply reducing costs and eliminating expenses. For today’s service-oriented enterprises, unreflective pursuit of the cheapest solutions and shortcut business practices can cause irreparable damage to customer relationships. Making the best business decisions requires an accurate understanding and allocation of the costs of doing business in order to optimize resource utilization and performance.

About Rich Ptak

Rich Ptak, Managing Partner at Ptak, Noel & Associates LLC, has over 30 years of experience in systems product management, working closely with Fortune 50 companies to develop product direction and strategies at a global level. Previously, Ptak held positions as Senior Vice President at Hurwitz Group and D.H. Brown Associates. Earlier in his career, he held engineering and marketing management positions with Western Electric’s Electronic Switch Manufacturing Division and Digital Equipment Corporation. He is frequently quoted in the major business and trade press. Ptak holds a master’s in business administration from the University of Chicago and a master of science in engineering from Kansas State University.
