Monitoring as Code: Worth The Hype?

Hannes Lenke
Checkly

Configuring application Monitoring as Code (MaC) is the next logical step in modern software development. Today, configuring monitoring is often an overly manual process. It's a bottleneck that DevOps teams are addressing to ship code faster with greater confidence.

Before we explore the relatively new MaC concept, we should step back and discuss the "as Code" movement in general. The most prominent current example is Infrastructure as Code (IaC), which has become the gold standard for infrastructure provisioning in recent years. IaC lets developers write files that define how servers should be set up. Building on that concept, IaC tools apply those configurations automatically, often fully integrated into the CI/CD process.
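
For illustration, here is a minimal sketch of such a definition, written with Pulumi's TypeScript SDK as one possible IaC tool; the instance type, placeholder AMI ID, and resource names are examples, not something prescribed by this article:

```typescript
import * as aws from "@pulumi/aws";

// Desired state: a single small web server. The IaC tool compares this
// definition with what actually exists and creates or updates accordingly.
const webServer = new aws.ec2.Instance("web-server", {
    instanceType: "t3.micro",
    ami: "ami-0123456789abcdef0", // placeholder AMI ID, replace with a real image
    tags: { Name: "web-server" },
});

// Expose the public IP so later pipeline steps (e.g., smoke tests) can use it.
export const publicIp = webServer.publicIp;
```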

Bringing key aspects of the software development workflow closer to the application code enables developers to automate and ultimately ship their services faster, more often, and continuously. Hence "as code" approaches have become popular in recent years. However, continuous delivery (CD) requires more than infrastructure automation; other aspects of software delivery need to be automated as well. Without that additional automation, how would DevOps teams be able to ship code updates dozens of times a day, or even more often?

Beyond automation, a key aspect of CD is that cross-functional DevOps teams are now responsible for their services end to end. The motto "You build it; you test it; you run it!" rings true for teams tasked not only with shipping often but also with testing and operating the services they deploy. It's vital for modern DevOps teams to embrace automation for the other functions in their pipeline, including crucial aspects like monitoring. In that context, health and performance monitoring need to be described as code, too.

Let's look at some key reasons why monitoring as code is here to stay.

Monitoring shouldn't become the bottleneck for software delivery

Creating checks for larger APIs or websites is often a repetitive manual task that requires a lot of time. In addition, the demand on DevOps teams to make daily — or even hourly — changes to target applications translates into exploding workloads and testing requirements.
In contrast, defining checks as code lets you replicate the actions you would usually perform manually — using a UI or CLI — and automate them.
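
As a minimal sketch of what that can look like in plain TypeScript (the Check shape and the provisioning endpoint are hypothetical stand-ins for whatever monitoring tool you use, not something named in this article), check definitions become data in the repository, and a small script applies them instead of a human clicking through a UI:

```typescript
// check-definitions.ts — monitoring checks described as plain data, kept in the repo.
// The Check type and the provisioning endpoint below are illustrative only.

interface Check {
  name: string;
  url: string;
  method: "GET" | "POST";
  expectedStatus: number;
  frequencyMinutes: number;
}

export const checks: Check[] = [
  { name: "Homepage is up", url: "https://example.com/", method: "GET", expectedStatus: 200, frequencyMinutes: 5 },
  { name: "Login API rejects anonymous calls", url: "https://example.com/api/login", method: "POST", expectedStatus: 401, frequencyMinutes: 1 },
];

// Applying the definitions replaces the clicks you would otherwise do in a UI;
// a CI job can run this script on every deploy.
async function applyChecks(): Promise<void> {
  for (const check of checks) {
    // Hypothetical provisioning endpoint of a monitoring backend.
    const res = await fetch("https://monitoring.example.com/api/checks", {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(check),
    });
    if (!res.ok) throw new Error(`Failed to provision check "${check.name}": ${res.status}`);
  }
}

applyChecks().catch((err) => console.error(err));
```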

Lack of transparency makes cross-team collaboration harder

Traditional monitoring processes require manual provisioning: users either create tickets to have new monitoring resources provisioned for them or request permission to apply the changes themselves. Central IT teams, in turn, often have to work through different UIs and flows to fulfill those requests.

This makes it difficult to maintain consistency across an entire infrastructure while avoiding duplication of effort across teams. It also complicates auditing changes, making it harder to catch wrongly configured monitoring checks and thereby lengthening an important feedback loop.

Monitoring should be integrated into CI/CD

Inevitably, the speed of check provisioning fails to match the pace at which the target applications evolve. This stems from a mismatch of approaches: on one side, the CI/CD workflow through which the websites and APIs are iterated on; on the other, a fully manual monitoring process.

Applying lessons learned from IaC, MaC brings check definitions closer to the application's source code by having them written as code.

This approach allows check definitions to live in source control, boosting cross-team visibility. Because code is plain text, every change is versioned and leaves an audit trail, which makes it easier to roll back changes in case of incidents.

With software taking over the provisioning of monitoring checks, hundreds or thousands of checks can be created or edited in a matter of seconds. This is a game-changer for development, operations, and DevOps teams, allowing them to reallocate time spent on manual configuration toward improving the coverage and robustness of their monitoring setup.
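
To make that scale effect concrete, here is another hypothetical TypeScript sketch (the endpoint list and check shape are invented for illustration): once checks are data, covering every public route is a simple mapping rather than hours of clicking.

```typescript
// generate-checks.ts — derive one uptime check per public endpoint.

interface Check {
  name: string;
  url: string;
  expectedStatus: number;
  frequencyMinutes: number;
}

// In practice this list could be read from an OpenAPI spec or a routes file in the repo.
const endpoints = ["/", "/pricing", "/docs", "/api/v1/users", "/api/v1/orders"];

export const checks: Check[] = endpoints.map((path) => ({
  name: `GET ${path} returns 200`,
  url: `https://example.com${path}`,
  expectedStatus: 200,
  frequencyMinutes: 10,
}));

console.log(`Generated ${checks.length} checks`);
```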

To summarize, MaC is revolutionizing the way monitoring is configured by providing:

1. Better scalability through faster, more efficient provisioning

2. Increased transparency and easier rollbacks via source control

3. Unification of previously fragmented processes in a CI/CD workflow

Hannes Lenke is CEO and Co-Founder of Checkly

