Avoiding Cost Traps in Cloud Monitoring

Martin Hirschvogel
Checkmk

Choosing the right approach is critical with cloud monitoring in hybrid environments. Otherwise, you may drive up costs with features you don’t need and risk diminishing the visibility of your on-premises IT.

The complexity of IT infrastructures is constantly growing as organizations continue to combine cloud-based services with on-premises or edge IT infrastructure and adopt Kubernetes or serverless computing services. To ensure that their hybrid IT infrastructure performs optimally, ITOps teams need a monitoring solution that is capable of providing comprehensive visibility while easing their burden.

Different Monitoring Requirements

To avoid blind spots and budget bloat, there are two main questions ITOps needs to consider:

What applications and resources do we run in which part of the infrastructure?

And what monitoring requirements result from this?

This is especially important when considering cloud monitoring solutions. While they provide numerous functions for monitoring applications and computing resources residing in the cloud, they have limitations when it comes to monitoring on-premises environments. An organization that runs all of its business-critical IT assets locally and keeps "only" virtual machines in the cloud therefore risks driving up expenses and impairing IT operations by adopting a cloud monitoring solution.

Non-Transparent Pricing Models

Even if an organization is running mission-critical workloads in the cloud, choosing a cloud monitoring solution can quickly result in costs that are unexpected, but ultimately avoidable. This is due to cloud monitoring providers' sometimes opaque billing models that impose a kind of penalty tax on the benefits of the cloud, such as flexibility and scalability. When you add subscriptions for additional features to the high base fee for the software, the initial cost quickly becomes unmanageable.

A virtual server in a popular configuration costs about $100 per month from a hyperscaler. Basic monitoring for such a host typically starts at $15 to $30 from cloud monitoring providers, and the cost can be many times higher depending on the desired feature set and sizing. Even simple monitoring of the operating system can quickly add up to at least 30 percent of the hosting bill.
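The arithmetic behind that claim can be sketched in a few lines. The figures are illustrative, taken from the ranges mentioned above rather than from any specific vendor's price list:

```python
# Back-of-the-envelope: basic host monitoring as a share of the hosting bill.
# All prices are illustrative, based on the ranges cited in the text.
host_cost = 100.0        # USD/month for a popular VM configuration
monitoring_prices = (15.0, 30.0)  # USD/month, entry-level host monitoring

for monitoring in monitoring_prices:
    share = monitoring / host_cost * 100
    print(f"${monitoring:.0f}/month monitoring = {share:.0f}% of the hosting bill")
```

At the upper end of the entry-level range, operating-system monitoring alone already consumes 30 percent of what the host itself costs, before any advanced features are added.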

Expensive Host-Based Billing

Host-based billing may seem simple at first glance. Yet in a serverless world built on managed services from cloud providers, where hosts no longer play a major role and are difficult even to quantify, the question arises as to whether host-based billing makes sense at all. In the end, this inevitably leads to the gradual introduction of secondary pricing metrics and, from the user's perspective, to costs that are difficult to predict and a lack of price transparency.

The conceptual problems of host-based pricing are particularly evident in the fact that many monitoring providers have introduced limits and additional price dimensions. For example, in some cases only a certain number of containers per host are included in cloud monitoring. However, this limit is usually quickly exceeded and additional fees apply for each additional container.
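The effect of such per-host container limits can be illustrated with a simple cost model. The base price, included allowance, and overage fee below are hypothetical, chosen only to show how quickly the overage term dominates:

```python
def monthly_cost(hosts, containers_per_host,
                 base_per_host=23.0,        # hypothetical host price, USD/month
                 included_containers=10,    # hypothetical free allowance per host
                 per_extra_container=2.0):  # hypothetical overage fee per container
    """Host-based price plus overage fees once the container limit is exceeded."""
    extra = max(0, containers_per_host - included_containers)
    return hosts * (base_per_host + extra * per_extra_container)

# Within the allowance, the bill looks predictable:
print(monthly_cost(hosts=20, containers_per_host=5))    # 460.0
# A modest Kubernetes node easily runs 30+ containers,
# and the overage fees then exceed the base price itself:
print(monthly_cost(hosts=20, containers_per_host=30))   # 1260.0
```

In this sketch, nearly two-thirds of the bill comes from a price dimension that the headline "per host" rate never mentions.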

Artificial Limits and Custom Metrics

Custom metrics, which allow special data to be included in monitoring, can also quickly drive up costs. This is especially the case when custom metrics are essential and useful monitoring is only possible by adding them. Artificial currencies or units with complex conversion formulas, such as those used to bill for retrieving custom metrics, logs, or user-defined events, further obscure the true cost.
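A sketch makes clear why such billing "units" are hard to reason about. Every conversion rate and the unit price below are invented for illustration; real vendors publish their own formulas, which is precisely the problem:

```python
# Illustrative only: an artificial billing "unit" hides the real cost.
# The conversion rates and unit price below are invented for this sketch.
UNIT_PRICE = 0.05                     # USD per billing unit

def units_consumed(custom_metrics, log_gb, custom_events):
    """Convert heterogeneous usage into one opaque 'unit' currency."""
    return (custom_metrics * 0.1      # 10 custom metrics  = 1 unit
            + log_gb * 25             # 1 GB of logs       = 25 units
            + custom_events * 0.02)   # 50 custom events   = 1 unit

monthly = units_consumed(custom_metrics=500, log_gb=40, custom_events=10_000)
print(f"{monthly:.0f} units -> ${monthly * UNIT_PRICE:.2f}/month")
```

To predict next month's bill, a user has to forecast three separate usage dimensions and push each through its own conversion rate, which is exactly the opposite of price transparency.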

Monitoring costs also vary depending on the cloud provider. For example, with a hyperscaler, all of the API calls that are required to monitor the cloud services cost money. With another provider, the API calls may be free, but you may run into rate limits. These are all cost factors that should be taken into account from the outset when choosing a monitoring solution.

Evaluating a cloud monitoring solution also means ensuring that it supports all of the necessary features and services. Essential features, such as SSO based on the SAML standard, should not be reserved for higher-tier products and their more expensive plans.

Wrong Incentives and Exclusive Access

The pricing model of a good monitoring solution should also not create incentives to compromise on infrastructure architecture for cost reasons. For example, if an organization has to pay per monitoring instance, there is a strong temptation to save costs by minimizing the number of instances. However, there is a risk that the monitoring will not scale with the company's infrastructure — negating a key benefit of the cloud.

The goal of IT monitoring is to provide critical insight into IT infrastructure health and performance. Access to monitoring is critical for various teams to gain important insights for their daily work and to ensure smooth IT operations. However, charging on a per-user basis for monitoring could result in this information being made available only to an exclusive group to keep costs down. As a result, responsible individuals and teams would be denied visibility into the IT assets that are important to them, and the monitoring would be of no value to them.

Avoiding Cost Traps

A look at the market shows that the pricing of many monitoring vendors can quickly blow the monitoring budget due to hidden costs or subsequent price drivers — or even encourage the creation of poor IT architectures. If you are not careful, you can quickly end up paying 30 percent of your computing costs for monitoring. For comparison, common benchmarks suggest that ITOps should spend no more than 3 to 15 percent of its IT budget on observability, depending on the industry and the size of the organization.

Organizations should develop clear strategies and understand which business areas are running and will run on which parts of their IT architecture. Only by understanding your cloud and on-premises monitoring needs can you find a tailored solution with a precise and predictable pricing model, rather than paying a lot of money for an oversized solution that may not fit your infrastructure.

Martin Hirschvogel is Chief Product Officer at Checkmk
