Managing Data Center Costs with BSM

IT budgets often appear more expensive than they should because they are inflated by the arbitrary allocation and loading of costs that ought to be shared or are otherwise improperly assigned. Unfortunately, old habits die hard, and the misallocation of costs continues to the detriment of both the organization and mainframe computing. The result is business decisions that lead to more expensive, less efficient computing operations. This can be remedied using today’s automated cost tracking and reporting tools in conjunction with a program of cost optimization.
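To see how arbitrary loading distorts the picture, consider a minimal sketch in Python. It allocates a hypothetical shared mainframe cost pool two ways: a flat per-department split, and a split based on measured usage. The department names, dollar figures and usage shares are invented for illustration only.

```python
# Hypothetical illustration: how arbitrary allocation inflates apparent costs.
# The department names, shared cost pool and usage shares are all assumed.

SHARED_POOL = 900_000  # assumed annual shared mainframe cost, in dollars

# Measured share of actual consumption (e.g., CPU-seconds), summing to 1.0
usage_share = {"billing": 0.60, "reporting": 0.30, "archive": 0.10}

# Arbitrary allocation: split the pool evenly across departments
flat = {dept: SHARED_POOL / len(usage_share) for dept in usage_share}

# Usage-based allocation: charge each department for what it consumed
by_usage = {dept: SHARED_POOL * share for dept, share in usage_share.items()}

for dept in usage_share:
    print(f"{dept:<10} flat: ${flat[dept]:>9,.0f}   usage: ${by_usage[dept]:>9,.0f}")
```

Under the flat split, the lightly used archive workload appears to cost $300,000 a year; priced by actual usage it costs $90,000. Decisions made on the first number, such as moving work off the mainframe, would be based on inflated costs.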

The Cost Reduction Trap

The success of IT operations is all about the cost-effective, performance-compliant delivery of the specific services that the enterprise, business or customer operations need and demand. A critical, and all too frequently poorly implemented, element of that delivery is managing the costs of data center operations. This is especially true during tough economic times, when the pressure is on to review and reduce costs to improve bottom-line performance as quickly as possible. All too frequently, the activity becomes an operational trap: attention almost immediately degrades to highly visible, sweeping budget cuts, with little real analysis of, or attention to, their impact on operations and service delivery. The history of IT cost reporting has been notable more for casualness than accuracy.

An aggressive cost reduction strategy attempts a fast return by identifying and cutting big-ticket budget items and removing staff. Such cuts are usually compounded by poor decisions based on inaccurate cost data. These efforts fail to deliver lasting savings because the rush to pick low-hanging fruit leaves less obvious inefficiencies and process problems unaddressed, free to continue undercutting performance unnoticed and uncorrected.

Misapplied infrastructure and staffing cuts degrade service performance and delivery levels. The result is to alienate customers (users), reduce revenues and increase costs through operational inefficiencies.

Most cost reduction programs fail for one or more of these reasons:

1. Lack of accurate data on costs and returns, due partly to flawed cost records and partly to a poor understanding of operational processes.

2. A tactical focus on high-profile budget items that chases symptoms rather than identifying and eliminating systemic operational and utilization problems.

3. A limited vision that examines silos of operational costs rather than seeking out cross-functional problems caused by outmoded policies, processes and procedures.

Cost Optimization with BSM

Successful Business Service Management (BSM) includes a cost management program in which cost optimization – not simply cost cutting – is the major focus and guiding principle. The object of the exercise is to ensure the best return is obtained from every dollar spent, while assuring optimal utilization of data center infrastructure. The ‘return’ being measured is the contribution to, and support of, the successful delivery of business-critical services by IT.

Cost optimization requires an accurate understanding of the cost of service delivery. Bad data risks cutting critical IT services or making damaging infrastructure changes that undermine IT’s contribution to business success. Today’s cost tracking tools can accurately relate IT infrastructure costs to service delivery, with customizable reports on the use of IT infrastructure: by whom, for how long, and to what purpose and effect. Such data, combined with automated performance management tools, yields longer-lasting, more effective results than any aggressive but misfocused cost reduction effort.
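As a rough sketch of the kind of usage-to-cost rollup such tools automate, the Python snippet below prices hypothetical usage records and aggregates them by business service. The field names, rate and records are assumptions for illustration, not any vendor’s actual schema.

```python
# Minimal sketch of usage-based cost reporting: relating infrastructure
# consumption to the business services that drove it. All field names,
# rates and records below are hypothetical.

from collections import defaultdict

RATE_PER_CPU_HOUR = 4.50  # assumed blended infrastructure rate, in dollars

# Each record captures which service used which resource, by whom, for how long
usage_records = [
    {"service": "order-entry", "resource": "db-cluster", "user": "sales",   "cpu_hours": 120.0},
    {"service": "order-entry", "resource": "app-tier",   "user": "sales",   "cpu_hours": 80.0},
    {"service": "month-end",   "resource": "db-cluster", "user": "finance", "cpu_hours": 45.0},
]

# Roll consumption up to the service level and price it at the blended rate
cost_by_service = defaultdict(float)
for rec in usage_records:
    cost_by_service[rec["service"]] += rec["cpu_hours"] * RATE_PER_CPU_HOUR

for service, cost in sorted(cost_by_service.items()):
    print(f"{service:<12} ${cost:,.2f}")
```

In practice the single rate would give way to per-resource cost models and the records would come from automated metering, but the principle is the same: every dollar of infrastructure cost is traceable to the service, and ultimately the business outcome, that consumed it.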

Accurate data relating costs to services allows you to build a comprehensive view of operations and identify savings opportunities. An end-to-end view of enterprise IT operations provides an understanding of the processes, policies and interactions involved in the delivery of business services, enabling outmoded, unnecessary or conflict-causing operational inefficiencies to be identified for change or elimination.

The Final Word: Success in business is not a matter of simply reducing costs and eliminating expenses. For today’s service-oriented enterprises, the unreflective pursuit of the cheapest solutions and short-cut business practices can cause irreparable damage to customer relationships. Making the best business decisions requires an accurate understanding and allocation of the costs of doing business in order to optimize resource utilization and performance.

About Rich Ptak

Rich Ptak, Managing Partner at Ptak, Noel & Associates LLC, has over 30 years of experience in systems product management, working closely with Fortune 50 companies to develop product direction and strategies at a global level. Previously, Ptak held positions as Senior Vice President at Hurwitz Group and at D.H. Brown Associates. Earlier in his career he held engineering and marketing management positions with Western Electric’s Electronic Switch Manufacturing Division and with Digital Equipment Corporation. He is frequently quoted in the major business and trade press. Ptak holds a master’s in business administration from the University of Chicago and a master of science in engineering from Kansas State University.
