
Avoiding Cost Traps in Cloud Monitoring

Martin Hirschvogel
Checkmk

Choosing the right approach is critical with cloud monitoring in hybrid environments. Otherwise, you may drive up costs with features you don’t need and risk diminishing the visibility of your on-premises IT.

The complexity of IT infrastructures is constantly growing as organizations continue to combine cloud-based services with on-premises or edge IT infrastructure and adopt Kubernetes or serverless computing services. To ensure that their hybrid IT infrastructure performs optimally, ITOps teams need a monitoring solution that is capable of providing comprehensive visibility while easing their burden.

Different Monitoring Requirements

To avoid blind spots and budget bloat, there are two main questions ITOps needs to consider:

What applications and resources do we run in which part of the infrastructure?

And what monitoring requirements result from this?

This is especially important when considering cloud monitoring solutions. While they provide numerous functions for monitoring applications and computing resources residing in the cloud, they have limitations when it comes to monitoring on-premises environments. An organization that runs all of its business-critical IT assets locally and "only" virtual machines in the cloud therefore risks driving up expenses and impairing IT operations by adopting a cloud monitoring solution.

Non-Transparent Pricing Models

Even if an organization is running mission-critical workloads in the cloud, choosing a cloud monitoring solution can quickly result in unexpected, yet ultimately avoidable, costs. This is due to cloud monitoring providers' sometimes opaque billing models, which impose a kind of penalty tax on the very benefits of the cloud, such as flexibility and scalability. Once subscriptions for additional features are added to an already high base fee, the total cost quickly becomes unmanageable.

A virtual server in a popular configuration costs about $100 per month from a hyperscaler. Basic monitoring for such a host typically starts at $15 to $30 from cloud monitoring providers, and the cost can be many times higher depending on the desired feature set and sizing. Even simple monitoring of the operating system can quickly add up to at least 30 percent of the hosting bill.
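The arithmetic above can be sketched in a few lines. This is a minimal illustration using the article's round figures ($100/month host, $15 to $30/month basic monitoring); it is not a quote from any vendor's price list.

```python
# Monitoring spend as a percentage of the hosting bill.
# Figures below follow the illustrative numbers in the text.
def monitoring_share(host_cost: float, monitoring_cost: float) -> float:
    """Return monitoring cost as a percentage of the hosting cost."""
    return monitoring_cost / host_cost * 100

# A ~$100/month VM with $15-$30/month basic monitoring:
low = monitoring_share(100, 15)   # 15 percent of the hosting bill
high = monitoring_share(100, 30)  # 30 percent of the hosting bill
```

Even before custom metrics or extra features, the monitoring line item already sits at a double-digit share of the compute it observes.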

Expensive Host-Based Billing

Host-based billing may seem simple at first glance. Yet in a serverless world built on managed services, where hosts no longer play a major role and are difficult even to count, the question arises as to whether host-based billing makes sense at all. This mismatch inevitably leads to the gradual introduction of secondary pricing metrics and, from the user's perspective, to costs that are difficult to predict and a lack of price transparency.

The conceptual problems of host-based pricing are particularly evident in the fact that many monitoring providers have introduced limits and additional price dimensions. For example, in some cases only a certain number of containers per host are included in cloud monitoring. However, this limit is usually quickly exceeded and additional fees apply for each additional container.
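The container-limit mechanism described above can be sketched as follows. The per-host fee, the included-container allowance, and the overage fee are all invented for illustration; they do not reflect any specific vendor's pricing.

```python
# Hypothetical host-based plan: each host fee includes a fixed container
# allowance; every container beyond the pooled allowance is billed extra.
def monthly_cost(hosts: int, containers: int,
                 per_host: float = 23.0,
                 included_per_host: int = 10,
                 per_extra_container: float = 2.0) -> float:
    included = hosts * included_per_host
    extra = max(0, containers - included)
    return hosts * per_host + extra * per_extra_container

# 5 hosts with 10 containers each stay within the allowance:
base = monthly_cost(hosts=5, containers=50)    # 5 * $23 = $115
# Doubling container density leaves the host count unchanged
# but adds 50 overage charges on top:
dense = monthly_cost(hosts=5, containers=100)  # $115 + 50 * $2 = $215
```

The point is structural: the bill nearly doubles even though the "hosts" being billed did not change at all.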

Artificial Limits and Custom Metrics

Custom metrics, which allow special data to be included in monitoring, can also quickly drive up costs. This is especially the case when custom metrics are essential and useful monitoring is impossible without them. Artificial currencies or billing units, such as those used to meter custom metrics, logs, or user-defined events, often come with complex conversion formulas and do little to make costs transparent.
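To make the "artificial unit" problem concrete, here is a sketch of how such a scheme typically works: each data type converts into billing units at its own rate, and units are then priced. Every rate and price below is an assumption for illustration only.

```python
# Hypothetical conversion rates from raw usage into abstract billing units.
RATES = {"custom_metric": 1.0, "log_mb": 0.5, "event": 0.1}
UNIT_PRICE = 0.05  # dollars per billing unit (assumed)

def billed_units(usage: dict) -> float:
    """Convert a usage breakdown into abstract billing units."""
    return sum(RATES[kind] * amount for kind, amount in usage.items())

def estimated_cost(usage: dict) -> float:
    """Price the converted units."""
    return billed_units(usage) * UNIT_PRICE

usage = {"custom_metric": 200, "log_mb": 1000, "event": 5000}
# 200*1.0 + 1000*0.5 + 5000*0.1 = 1200 units, about $60/month
```

Notice that nothing in the invoice line "1200 units" reveals which of the three inputs drove the cost; that opacity is exactly the transparency problem the text describes.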

Monitoring costs also vary depending on the cloud provider. For example, with a hyperscaler, all of the API calls that are required to monitor the cloud services cost money. With another provider, the API calls may be free, but you may run into rate limits. These are all cost factors that should be taken into account from the outset when choosing a monitoring solution.

Evaluating a cloud monitoring solution also includes ensuring that the solution supports all of the necessary features and services. Essential features, such as an SSO solution based on the SAML standard, should not be reserved for the higher-tier product and the associated more expensive plan levels.

Wrong Incentives and Exclusive Access

The pricing model of a good monitoring solution should also not create incentives to compromise on infrastructure architecture for cost reasons. For example, if an organization has to pay per monitoring instance, there is a strong temptation to save costs by minimizing the number of instances. However, there is a risk that the monitoring will not scale with the company's infrastructure — negating a key benefit of the cloud.

The goal of IT monitoring is to provide critical insight into the health and performance of IT infrastructure. Access to monitoring is essential for various teams to gain insights for their daily work and to ensure smooth IT operations. However, charging on a per-user basis could mean that this information is made available only to a small group in order to keep costs down. As a result, the individuals and teams responsible would be denied visibility into the IT assets that matter to them, and the monitoring would be of no value to them.

Avoiding Cost Traps

A look at the market shows that the pricing of many monitoring vendors can quickly blow the monitoring budget due to hidden costs or subsequent price drivers — or even encourage the creation of poor IT architectures. If you are not careful, you can quickly end up paying 30 percent of your computing costs for monitoring. For comparison, common benchmarks suggest that ITOps should spend no more than 3 to 15 percent of its IT budget on observability, depending on the industry and the size of the organization.
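The benchmark comparison above reduces to a simple check. The 3 to 15 percent band is the benchmark range cited in the text; the helper itself is just an illustration.

```python
# Check observability spend against the 3-15 percent benchmark range
# cited above (share of IT budget, varying by industry and size).
def within_benchmark(observability_spend: float, it_budget: float,
                     low: float = 0.03, high: float = 0.15) -> bool:
    """Return True if observability spend falls inside the benchmark band."""
    share = observability_spend / it_budget
    return low <= share <= high

within_benchmark(30, 100)  # 30 percent: well outside the benchmark
within_benchmark(8, 100)   # 8 percent: inside the band
```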

Organizations should develop clear strategies and understand which business areas are running and will run on which parts of their IT architecture. Only by understanding your cloud and on-premises monitoring needs can you find a tailored solution with a precise and predictable pricing model, rather than paying a lot of money for an oversized solution that may not fit your infrastructure.

Martin Hirschvogel is Chief Product Officer at Checkmk
