Enterprise ITOM and ITSM teams have welcomed AIOps, believing it has the potential to deliver great value as their IT environments become more distributed, hybrid and complex. Not so with DevOps teams.
It's safe to say they've kept AIOps at arm's length because they don't think it's relevant or useful for what they do. Instead, to manage the software they develop and deploy, they've focused on observability.
In concrete terms, this means that for your typical DevOps pros, if the app delivered to their production environment is observable, that's all they need. They're skeptical of what, if anything, AIOps can contribute in this scenario.
This blog will explain why AIOps can help DevOps teams manage their environments with unprecedented accuracy and velocity, and outline the benefits of combining AIOps with observability.
AIOps: Room to Grow Its Adoption and Functionality
In truth, there isn't one universally effective set of metrics for measuring the value AIOps delivers. This is an issue not just for AIOps but for many ITOM and ITSM technologies as well. In fact, many enterprise IT teams that invested in AIOps in recent years are now carefully watching their deployments to assess their value before deciding whether to expand them.
Still, there's plenty of room for AIOps adoption to grow, because many enterprises haven't adopted it at all. That's why so many vendors are positioning themselves as AIOps players to capture a share of a growing market, and the space has become crowded as a result.
So how can AIOps as a practice innovate and evolve at this point? What AIOps innovations can deliver unique capabilities that set it apart from the existing pack? Clearly, the way to do this is to tailor, expand and apply AI functionality to observability data. Such a solution would appeal strongly to the DevOps community and dissolve its historical reluctance and skepticism toward AIOps.
But What Is Observability?
However, there's an issue. When you press DevOps pros a little bit and ask them what observability is, you get three very different answers. The first is that observability is nothing more than traditional monitoring applied to a DevOps environment and toolset. This is flat out wrong.
Another meaning you'll hear given to observability is its traditional one: that it's a property of the system being monitored. In other words, observability isn't about the technology doing the monitoring or the observing; rather, it's the self-descriptive data a system generates.
According to this definition, people monitoring these systems can obtain an accurate picture of the changes occurring in them and of their causal relationships. While this view points toward the third definition below, on its own it's a dead end: it leaves you with a stream of raw data and nothing else.
A third definition is that, compared with traditional monitoring, observability is a fundamentally different way of looking at and getting data from the environment being managed. And it needs to be, because the DevOps world is one of continuous integration, continuous delivery and continuous change — a world that's highly componentized and dynamic.
The way traditional monitoring tools take data from an environment, filter it, and generate events isn't appropriate for DevOps. Changes happen so quickly that trying to fit the data into any kind of pre-arranged structure falls short: you lose sight of what's going on in the environment.
Instead, DevOps teams need to access the raw data generated by their toolset and environment, and perform analytics directly on it. That raw data is made up of metrics, traces, logs and events. So observability is indeed a revolution, a drastic shift away from all the pre-built filters and the pre-packaged models of traditional monitoring systems.
This definition is the one that serves up a potential for technological innovation and for delivering the most value through AIOps, because DevOps teams do need help to make sense of this raw data stream, and act accordingly.
AI analysis and automation applied to observability can deliver this assistance to DevOps teams. Such an approach would take the raw data from the DevOps environment and give DevOps practitioners an understanding of the systems that they're developing and delivering.
With these insights, DevOps teams can more effectively decide on actions to fix problems, or to improve performance.
So what's involved in combining AIOps and observability?
Metrics, traces, logs and events must first be collected and analyzed. Metrics capture the temporal dimension of what's happening through their time-series data. Traces map a path through a topology, providing a spatial dimension: a trace is a chain of execution across different system components, usually microservices. Logs and events provide an unstructured, textual record of what happened.
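To make those three data shapes concrete, here is a minimal Python sketch. The class and field names are illustrative assumptions rather than the schema of any particular observability tool: metric points carry the temporal dimension, spans within a trace carry the spatial one, and log events carry the textual one.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MetricPoint:
    """Temporal dimension: one sample in a time series."""
    name: str          # e.g. "checkout.latency_ms" (hypothetical metric name)
    timestamp: float   # epoch seconds
    value: float

@dataclass
class Span:
    """Spatial dimension: one hop of a trace across microservices."""
    trace_id: str
    span_id: str
    parent_id: Optional[str]   # None for the root span
    service: str               # the component that handled this hop
    start: float               # epoch seconds
    duration_ms: float

@dataclass
class LogEvent:
    """Textual dimension: an unstructured record of something that happened."""
    timestamp: float
    service: str
    message: str
```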
With AIOps analysis, metrics reveal anomalies, traces show topology-based microservice relationships, and unstructured logs and events provide the foundation for triggering a significant alert.
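As a simplified illustration of the anomaly-detection piece, the sketch below flags metric samples that deviate sharply from a rolling baseline. It reuses the hypothetical MetricPoint class from the previous sketch and is only a z-score toy; a real AIOps engine would apply far more sophisticated statistical and machine-learning models.

```python
import statistics

def detect_metric_anomalies(points, window=30, threshold=3.0):
    """Flag points more than `threshold` standard deviations away from the
    mean of the preceding `window` samples (a toy stand-in for AIOps models)."""
    anomalies = []
    for i in range(window, len(points)):
        baseline = [p.value for p in points[i - window:i]]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1e-9   # avoid division by zero
        z = (points[i].value - mean) / stdev
        if abs(z) > threshold:
            anomalies.append((points[i], z))
    return anomalies
```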
Machine learning algorithms would then come into play to flag uncommon occurrences, pinpoint unusual metrics, traces, logs and events, and correlate them using temporal, spatial and textual criteria. The next step would be identifying a probable root cause of the problem, based on the history of previously resolved incidents. Then, ideally, automated remedial actions would be carried out.
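To show the shape of that correlation and root-cause step, here is a deliberately naive sketch that continues the hypothetical classes above. It groups anomalies by time window (temporal), attaches the services whose spans overlap that window (spatial), collects nearby log text (textual), and matches the result against previously resolved incidents by simple word overlap. A production AIOps platform would substitute real machine-learning models for each of these heuristics.

```python
from collections import defaultdict

def correlate(anomalies, spans, logs, window_s=60):
    """Group anomalous metrics that fall in the same time window (temporal),
    then attach the services active in that window (spatial) and the nearby
    log messages (textual)."""
    incidents = defaultdict(lambda: {"metrics": [], "services": set(), "log_text": []})
    for point, z_score in anomalies:                  # output of detect_metric_anomalies()
        bucket = int(point.timestamp // window_s)
        incident = incidents[bucket]
        incident["metrics"].append(point.name)
        incident["services"].update(
            s.service for s in spans if abs(s.start - point.timestamp) <= window_s)
        incident["log_text"].extend(
            log.message for log in logs if abs(log.timestamp - point.timestamp) <= window_s)
    return incidents

def probable_root_cause(incident, past_incidents):
    """Pick the previously resolved incident whose log text shares the most
    words with the new one (a crude stand-in for similarity models trained
    on incident history)."""
    new_words = set(" ".join(incident["log_text"]).lower().split())
    best_match, best_overlap = None, 0
    for past in past_incidents:   # each: {"log_text": [...], "root_cause": "..."}
        overlap = len(new_words & set(" ".join(past["log_text"]).lower().split()))
        if overlap > best_overlap:
            best_match, best_overlap = past, overlap
    return best_match["root_cause"] if best_match else None
```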
Clearly, this combination of AIOps and observability would offer tremendous value to DevOps teams, as it would automate the detection, diagnosis and remediation of problems with the speed and accuracy their CI/CD environments require. This would represent a breakthrough for AIOps: earning the appreciation of reluctant DevOps teams by giving them deep insights into observability data and unparalleled visibility into their environments.