Configuring application Monitoring as Code (MaC) is the next logical step in modern software development. Today, configuring monitoring is often an overly manual process. It's a bottleneck that DevOps teams are addressing to ship code faster with greater confidence.
Before we explore the relatively new MaC concept, we should step back and discuss the "as Code" movement in general. The most prominent current example is Infrastructure as Code (IaC), which became the gold standard for infrastructure provisioning in recent years. IaC lets developers write files that define how servers should be set up. Building on that concept, IaC tools apply those configurations automatically, often fully integrated into the CI/CD process.
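As a minimal illustration, here is what such a definition might look like with Pulumi's TypeScript SDK (one of several IaC tools); the resource name, AMI ID, and instance size are placeholders, and a real setup would also declare networking, security groups, and so on.

```typescript
import * as aws from "@pulumi/aws";

// Declare the desired state of a small web server. The IaC tool compares this
// definition with what is currently running and creates, updates, or deletes
// resources until reality matches the code.
const webServer = new aws.ec2.Instance("web-server", {
    ami: "ami-0123456789abcdef0", // placeholder image ID
    instanceType: "t3.micro",
    tags: { environment: "production", managedBy: "iac" },
});

// Export the public IP so later pipeline steps (smoke tests, DNS updates) can use it.
export const webServerIp = webServer.publicIp;
```

Because this file lives next to the application code, running the tool's apply step (pulumi up in this case) as part of a CI/CD pipeline is enough to roll the change out automatically.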
Bringing key aspects of the software development workflow closer to the application code enables developers to automate their work and ultimately ship their services faster and more often. That is why the 'as code' approach has become so popular in recent years. However, continuous delivery (CD) requires more than infrastructure automation; other aspects of software delivery need to be automated as well. Without that additional automation, how could DevOps teams ship code updates dozens of times a day, or even more often?
Alongside automation, another key aspect of CD is that cross-functional DevOps teams are now responsible for their services from one end to the other. The motto "You build it; you test it; you run it!" rings true for teams tasked not only with shipping often but also with testing and operating the services they deploy. It is vital for modern DevOps teams to embrace automation for the other functions in their pipeline, including crucial aspects like monitoring. In that context, health and performance monitoring need to be described as code, too.
Let's look at some key reasons why monitoring as code is here to stay.
Monitoring shouldn't become the bottleneck for software delivery
Creating checks for larger APIs or websites is often a repetitive manual task that requires a lot of time. In addition, the demand on DevOps teams to make daily, or even hourly, changes to target applications translates into exploding workloads and testing requirements.
In contrast, defining something as code enables you to replicate the actions you would usually perform manually via a UI or CLI, and to automate them.
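As a sketch of what that looks like for monitoring, the snippet below expresses an API check as a plain TypeScript object instead of a set of form fields in a dashboard. The ApiCheck shape and its fields are hypothetical, standing in for whatever constructs your monitoring tool provides.

```typescript
// Hypothetical shape of a check definition; real MaC tools expose similar constructs.
interface ApiCheck {
    name: string;
    url: string;
    method: "GET" | "POST";
    frequencyMinutes: number; // how often the check runs
    expectedStatus: number;   // assertion on the HTTP response
    locations: string[];      // regions the check runs from
}

// The same information you would otherwise click together in a UI,
// now reviewable in a pull request and reproducible in every environment.
export const loginCheck: ApiCheck = {
    name: "Login endpoint",
    url: "https://api.example.com/v1/login",
    method: "POST",
    frequencyMinutes: 5,
    expectedStatus: 200,
    locations: ["eu-west-1", "us-east-1"],
};
```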
Lack of transparency makes cross-team collaboration harder
Traditional monitoring processes require manual provisioning, meaning users need to create tickets to have new monitoring resources provisioned for them or request permission to apply the changes themselves. In turn, central IT teams are often required to work through different UIs and flows.
This makes it difficult to maintain consistency across an entire infrastructure while simultaneously avoiding duplication of effort across teams. It also complicates the task of auditing changes, making it harder to spot wrongly configured monitoring checks and thereby lengthening an important feedback loop.
Monitoring should be integrated into CI/CD
Ultimately, the speed of check provisioning does not match the pace at which the target applications evolve. This stems from a mismatch of approaches: the CI/CD workflow through which the websites and APIs are iterated on, versus the fully manual approach used to monitor them.
Applying lessons learned from IaC, MaC brings check definitions closer to the application's source code by having them written as code.
This method allows check definitions to live in source control, boosting cross-team visibility. Additionally, code is text, which is useful for version control and generating an audit trail of all changes. This makes it easier to roll back changes in case of incidents.
With software taking over the provisioning of monitoring checks, hundreds or thousands of checks can be created or edited in a matter of seconds. This is a game-changer for development, operations, and DevOps teams, allowing them to reallocate time spent on manual configuration toward improving the coverage and robustness of their monitoring setup.
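Continuing the hypothetical sketch above, that scale advantage can be as simple as mapping over an inventory of endpoints; the list and field names below are again illustrative.

```typescript
// An endpoint inventory, e.g. derived from an OpenAPI spec or a simple config file.
const endpoints = [
    { name: "Login", path: "/v1/login" },
    { name: "Search", path: "/v1/search" },
    { name: "Checkout", path: "/v1/checkout" },
    // ...hundreds more
];

// One map() turns the whole inventory into check definitions. Editing the
// template below updates every generated check on the next CI/CD run.
export const apiChecks = endpoints.map((endpoint) => ({
    name: `${endpoint.name} availability`,
    url: `https://api.example.com${endpoint.path}`,
    method: "GET" as const,
    frequencyMinutes: 5,
    expectedStatus: 200,
    locations: ["eu-west-1", "us-east-1"],
}));
```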
To summarize, MaC is revolutionizing the way monitoring is configured by providing:
1. Better scalability through faster, more efficient provisioning
2. Increased transparency and easier rollbacks via source control
3. Unification of previously fragmented processes in a CI/CD workflow