Monitoring as Code: Worth The Hype?

Hannes Lenke
Checkly

Monitoring as Code (MaC) is the next logical step in modern software development. Today, configuring monitoring is often an overly manual process. It's a bottleneck that DevOps teams are addressing to ship code faster with greater confidence.

Before we explore the relatively new MaC concept, we should step back and discuss the "as Code" movement in general. The most prominent current example is Infrastructure as Code (IaC), which became the gold standard for infrastructure provisioning in recent years. IaC lets developers write files that define how servers should be set up. Building on that concept, IaC tools apply those configurations automatically, often fully integrated into the CI/CD process.
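
For illustration, here is a minimal sketch of what such a definition can look like, written in TypeScript with Pulumi's AWS package. The instance type, AMI id, and names are placeholders for the sake of the example, not a recommended setup.

```typescript
// IaC sketch: declare the desired state of one small server.
// Placeholder values throughout; the IaC tool compares this
// definition with reality and applies the difference.
import * as aws from "@pulumi/aws";

const server = new aws.ec2.Instance("web-server", {
    instanceType: "t3.micro",
    ami: "ami-0123456789abcdef0", // placeholder AMI id
    tags: { Name: "web-server" },
});

// Expose the resulting IP for other tooling (e.g., monitoring).
export const publicIp = server.publicIp;
```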

Bringing key aspects of the software development workflow closer to the application code enables developers to automate and ultimately ship their services faster and more often, continuously. Hence "as code" has become popular in recent years. However, continuous delivery (CD) requires more than infrastructure automation. It also requires automation of other software delivery aspects. Without this additional automation, how would DevOps teams be able to ship code updates dozens of times a day or even more often?

Beyond automation, a key aspect of CD is that cross-functional DevOps teams are now responsible for their services end to end. The motto "You build it; you test it; you run it!" rings true for teams tasked not only to ship often but to simultaneously test and operate those deployed services. It's vital for modern DevOps teams to embrace automation for other functions in their pipeline, including crucial aspects like monitoring. In that context, health and performance monitoring need to be described as code too.

Let's look at some key reasons why monitoring as code is here to stay.

Monitoring shouldn't become the bottleneck for software delivery

Creating checks for larger APIs or websites is often a repetitive, manual task that requires a lot of time. In addition, the demand on DevOps teams to make daily — or even hourly — changes to target applications translates into exploding workloads and testing requirements.
In contrast, defining something as code enables you to replicate the actions you would usually perform manually — in a UI or CLI — and automate them.
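
As an example, the kind of HTTP check you would otherwise click together in a UI can be declared in a few lines of TypeScript. The sketch below is based on the constructs of Checkly's open-source checkly npm package; the URL is a placeholder, and the exact option names may differ between versions.

```typescript
// api.check.ts — an API check declared as code instead of in a UI.
// Construct names follow the `checkly` npm package; treat them as
// assumptions that may vary by version. URL and ids are placeholders.
import { ApiCheck, AssertionBuilder } from "checkly/constructs";

new ApiCheck("products-api-check", {
  name: "GET /products returns 200",
  frequency: 10, // minutes between runs
  locations: ["us-east-1", "eu-west-1"],
  request: {
    method: "GET",
    url: "https://api.example.com/products", // placeholder URL
    assertions: [AssertionBuilder.statusCode().equals(200)],
  },
});
```

A command along the lines of npx checkly deploy, run locally or from a pipeline, would then sync such definitions to the monitoring backend; the exact command depends on the tooling in use.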

Lack of transparency makes cross-team collaboration harder

Traditional monitoring processes require manual provisioning: users either create tickets to have new monitoring resources provisioned for them or request permission to apply the changes themselves. Central IT teams, in turn, are often required to work through different UIs and flows.

This makes it difficult to maintain consistency across an entire infrastructure while simultaneously avoiding duplication of effort across teams. It also complicates the task of auditing changes, making it difficult to review wrongly configured monitoring checks, thereby lengthening an important feedback loop.

Monitoring should be CI/CD integrated

Ultimately, the speed of check provisioning does not match the pace at which the target applications evolve. This results from a mismatch of approaches: on one side, the CI/CD workflow through which websites and APIs are iterated on; on the other, the fully manual approach to monitoring.

Applying lessons learned from IaC, MaC brings check definitions closer to the application's source code by having them written as code.

This method allows check definitions to live in source control, boosting cross-team visibility. Additionally, code is text, which is useful for version control and generating an audit trail of all changes. This makes it easier to roll back changes in case of incidents.

With software taking over the provisioning of monitoring checks, hundreds or thousands of checks can be created or edited in a matter of seconds. This is a game-changer for development, operations, and DevOps teams, allowing them to reallocate time spent on manual configuration toward improving the coverage and robustness of their monitoring setup.
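
A brief sketch of what that looks like in practice: the same construct instantiated in a loop, so provisioning another check becomes a one-line change. The endpoints, ids, and base URL below are again placeholders.

```typescript
// Provision one check per endpoint; adding a path to the list
// creates a new check on the next deploy. Placeholder data throughout.
import { ApiCheck, AssertionBuilder } from "checkly/constructs";

const endpoints = ["/products", "/orders", "/users"]; // placeholder paths

for (const path of endpoints) {
  new ApiCheck(`api${path.replace(/\//g, "-")}`, {
    name: `GET ${path} returns 200`,
    request: {
      method: "GET",
      url: `https://api.example.com${path}`, // placeholder base URL
      assertions: [AssertionBuilder.statusCode().equals(200)],
    },
  });
}
```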

To summarize, MaC is revolutionizing the way monitoring is configured by providing:

1. Better scalability through faster, more efficient provisioning

2. Increased transparency and easier rollbacks via source control

3. Unification of previously fragmented processes in a CI/CD workflow

Hannes Lenke is CEO and Co-Founder of Checkly
