Observability: The Next Frontier for AIOps
September 24, 2020

Will Cappelli
Moogsoft

Enterprise ITOM and ITSM teams have welcomed AIOps, believing it has the potential to deliver great value as their IT environments become more distributed, hybrid and complex. Not so with DevOps teams.

It's safe to say they've kept AIOps at arm's length because they don't think it's relevant or useful for what they do. Instead, to manage the software they develop and deploy, they've focused on observability.

In concrete terms, this means that for your typical DevOps pros, if the app delivered to their production environment is observable, that's all they need. They're skeptical of what, if anything, AIOps can contribute in this scenario.

This blog will explain why AIOps can help DevOps teams manage their environments with unprecedented accuracy and velocity, and outline the benefits of combining AIOps with observability.


AIOps: Room to Grow its Adoption and Functionality

In truth, there isn't one universally effective set of metrics that works for every team to measure the value that AIOps delivers. This is an issue not just for AIOps but for many ITOM and ITSM technologies as well. In fact, many enterprise IT teams who invested in AIOps in recent years are now carefully watching their deployments to assess their value before deciding whether or not to expand on them.

Still, there's a lot of room for AIOps adoption to grow, because many enterprises haven't adopted it at all. That's why so many vendors are trying to position themselves as AIOps players, to claim a share of a growing market. As a result, the AIOps market has become crowded.

So how can AIOps as a practice innovate and evolve at this point? What AIOps innovations can deliver unique capabilities that set it apart from the pack? Clearly, the way forward is to tailor, expand and apply AI functionality to observability data. Such a solution would appeal strongly to the DevOps community and dissolve its historical reluctance and skepticism toward AIOps.

But What is Observability?

However, there's an issue. When you press DevOps pros a little and ask them what observability is, you get three very different answers. The first is that observability is nothing more than traditional monitoring applied to a DevOps environment and toolset. This is flat-out wrong.

Another meaning you'll hear given to observability is its traditional one: That it's a property of the system being monitored. In other words, observability isn't about the technology doing the monitoring or the observing, but rather it's the self-descriptive data a system generates.

According to this definition, people monitoring these systems can obtain an accurate picture of the changes occurring in them and of their causal relationships. However, this view of observability, while it points in the right direction, is a dead end on its own: self-descriptive data is just a stream of raw telemetry and nothing else.

A third definition is that, compared with traditional monitoring, observability is a fundamentally different way of looking at and getting data from the environment being managed. And it needs to be, because the DevOps world is one of continuous integration, continuous delivery and continuous change — a world that's highly componentized and dynamic.

The way traditional monitoring tools take data from an environment, filter it, and generate events isn't appropriate for DevOps. You need to observe changes that happen so quickly that trying to fit the data into any kind of pre-arranged structure just falls short. You won't be able to see what's going on in the environment.

Instead, DevOps teams need to access the raw data generated by their toolset and environment, and perform analytics directly on it. That raw data is made up of metrics, traces, logs and events. So observability is indeed a revolution, a drastic shift away from all the pre-built filters and the pre-packaged models of traditional monitoring systems.

This definition is the one that serves up a potential for technological innovation and for delivering the most value through AIOps, because DevOps teams do need help to make sense of this raw data stream, and act accordingly.

AI analysis and automation applied to observability can deliver this assistance to DevOps teams. Such an approach would take the raw data from the DevOps environment and give DevOps practitioners an understanding of the systems that they're developing and delivering.

With these insights, DevOps teams can more effectively decide on actions to fix problems, or to improve performance.

So what's involved in combining AIOps and observability?

Metrics, traces, logs and events must first be collected and analyzed. Metrics capture the temporal dimension of what's happening through their time-series data. Traces map a path through a topology, providing a spatial dimension: a trace is a chain of execution across different system components, usually microservices. Logs and events provide an unstructured textual record of what occurred.
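To make the three dimensions concrete, here is a minimal sketch of how these telemetry types might be shaped as data records. All of the class and field names below are illustrative assumptions, not the schema of any particular AIOps or observability product:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative shapes for the observability data types described above.
# Names and fields are hypothetical, chosen only to show the three dimensions.

@dataclass
class MetricPoint:
    name: str          # e.g. "checkout.latency_ms"
    timestamp: float   # temporal dimension: when the value was observed
    value: float

@dataclass
class Span:
    trace_id: str             # spatial dimension: spans sharing a trace_id
    span_id: str              # form one chain of execution across components
    parent_id: Optional[str]  # None marks the root of the chain
    service: str              # the microservice that executed this step

@dataclass
class LogEvent:
    timestamp: float
    source: str
    message: str       # unstructured text record of what happened
```

A trace, in this sketch, is simply the set of `Span` records that share a `trace_id`, linked parent-to-child into a path across microservices.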

With AIOps analysis, metrics reveal anomalies, traces show topology-based relationships between microservices, and logs and events provide the textual evidence needed to trigger a meaningful alert.

Machine learning algorithms would then come into play to flag uncommon occurrences, pinpoint unusual metrics, traces, logs and events, and correlate them using temporal, spatial and textual criteria. The next step in the process would be identifying a probable root cause of the problem, based on the history of previously resolved incidents. Then, ideally, automated remedial actions would be carried out.
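The first two steps of that pipeline can be sketched very simply. The code below is a toy illustration under deliberately naive assumptions: z-score thresholding stands in for anomaly detection, and a fixed time window stands in for temporal correlation. Production AIOps platforms use far more sophisticated statistical and ML methods:

```python
from statistics import mean, stdev

def zscore_anomalies(series, threshold=2.0):
    """Flag (index, value) pairs whose z-score exceeds the threshold.

    A stand-in for metric anomaly detection: points far from the mean,
    measured in standard deviations, are treated as uncommon occurrences.
    """
    mu, sigma = mean(series), stdev(series)
    if sigma == 0:
        return []
    return [(i, v) for i, v in enumerate(series)
            if abs(v - mu) / sigma > threshold]

def correlate_by_time(events, window=30.0):
    """Group (timestamp, description) events into clusters where each event
    falls within `window` seconds of the previous one -- a crude temporal
    correlation criterion. Spatial (topology) and textual criteria would
    refine these groups further.
    """
    groups = []
    for ts, desc in sorted(events):
        if groups and ts - groups[-1][-1][0] <= window:
            groups[-1].append((ts, desc))
        else:
            groups.append([(ts, desc)])
    return groups
```

For example, a latency series of `[100, 101, 99, 100, 102, 500]` flags the spike at index 5, and events at timestamps 0, 10 and 100 seconds split into two clusters with a 30-second window, so the deploy and the latency spike are grouped while the later disk alert stands alone.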

Clearly, this combination of AIOps and observability would offer tremendous value to DevOps teams, as it would automate the detection, diagnosis and remediation of problems with the speed and accuracy required in their CI/CD environments. This would represent a breakthrough for AIOps: Earning the appreciation of reticent DevOps teams by giving them deep insights into observability data, and unparalleled visibility into their environments.

Will Cappelli is Field CTO at Moogsoft