
How Observability Helps Ingest and Normalize Data for DevOps Engineers

Richard Whitehead
Moogsoft

Humans naturally love structure. Just take books, for example. We've been ingesting and normalizing data through bookmaking since ancient times. In bookmaking, we transport, or ingest, data (in the form of text and images) from the spoken word or author's imagination to a physical structure. Covers denote the information's beginning and end, and a table of contents and chapters categorize, or normalize, the data.

The same logic applies to modern computer data. Humans prefer information that is easy to understand, and we make sense of unstructured data — whether it's text or time series data — by ingesting and normalizing it.

DevOps, SRE and other operations teams use observability solutions with AIOps to ingest and normalize data to get visibility into tech stacks from a centralized system, reduce noise and understand the data's context for quicker mean time to recovery (MTTR). With AI using these processes to produce actionable insights, teams are free to spend more time innovating and providing superior service assurance.

Let's explore AI's role in ingestion and normalization, and then dive into correlation and deduplication.

How Is Data Ingested into an Observability Platform?

Solutions that provide observability with AIOps are flexible, incorporating data from a broad range of sources. These monitoring systems ingest event management data, like alerts, log events and time series data. Modern observability solutions also notify teams about system changes, which is critical considering that a change to the environment triggers most system failures. In the end, any data source is fair game, as long as the data tells you something about your real-time operational environment.

The data source dictates how your monitoring tool ingests the information. The preferred method is a continuous data stream pushed from the source. The alternative is a pull mechanism, like the Prometheus pattern, which scrapes data at regular intervals. For older applications, you may have to use a creative plug-in or adapter that converts information into an accessible format and enables teams to query an application or system for data.
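To make the pull pattern concrete, here's a minimal Python sketch that scrapes a Prometheus-style /metrics endpoint on a fixed interval. The URL and interval are placeholder values, and a real collector would forward each sample to the observability platform rather than print it.

```python
import time
import urllib.request

METRICS_URL = "http://localhost:9090/metrics"  # hypothetical endpoint
SCRAPE_INTERVAL_SECONDS = 15                   # illustrative interval

def scrape_once(url: str) -> list[str]:
    """Pull the current metrics snapshot and return its non-comment lines."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read().decode("utf-8")
    return [line for line in body.splitlines() if line and not line.startswith("#")]

while True:
    try:
        for metric_line in scrape_once(METRICS_URL):
            print(metric_line)  # in practice: forward to the observability platform
    except OSError as exc:
        print(f"scrape failed: {exc}")  # a real collector would retry and alert
    time.sleep(SCRAPE_INTERVAL_SECONDS)
```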

So why move all of this data into an observability platform? Transporting information from multiple sources and putting it into a centralized system can reveal the big picture behind the data.

How Is Data Normalized?

Once data is coming into your observability platform, it's helpful to normalize the information according to its common features. AI can extract information from unstructured data and elevate it to a feature, like a source or timestamp. These features allow you to sort or query the data or, in more sophisticated environments, apply AI-based techniques such as natural language processing (NLP).
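As a rough sketch of that feature extraction, the snippet below parses a made-up syslog-style line into named features (timestamp, source, severity) that can then be sorted and queried. The line format and field names are illustrative; each real source needs its own parsing rules.

```python
import re
from datetime import datetime, timezone

# Illustrative syslog-like line; real formats vary by source.
RAW = "2024-05-01T12:03:44Z web-01 app[312]: ERROR checkout service timed out"

PATTERN = re.compile(
    r"(?P<timestamp>\S+)\s+(?P<source>\S+)\s+(?P<process>\S+):\s+"
    r"(?P<severity>[A-Z]+)\s+(?P<message>.*)"
)

def normalize(raw: str) -> dict:
    """Promote pieces of an unstructured line to named features."""
    match = PATTERN.match(raw)
    if match is None:
        return {"message": raw}  # keep unparsed data rather than dropping it
    event = match.groupdict()
    # Normalize the timestamp into one canonical representation.
    event["timestamp"] = datetime.fromisoformat(
        event["timestamp"].replace("Z", "+00:00")
    ).astimezone(timezone.utc).isoformat()
    return event

print(normalize(RAW))
```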

As you normalize data, it helps to understand the incoming format and structure. If you're going to map fields and break down the message into component parts, understand what part of the message is variable and what part is static.
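One common way to separate the variable and static parts is to mask variable tokens (numbers, IP addresses) behind placeholders, leaving the static template of the message. This is a toy sketch with illustrative token patterns, not a production log-template miner.

```python
import re

# Masking common variable tokens leaves the static "skeleton" of a message,
# so different instances of the same event map to one template.
VARIABLE_TOKENS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),
    (re.compile(r"\b\d+(?:\.\d+)?\b"), "<NUM>"),
]

def to_template(message: str) -> str:
    for pattern, placeholder in VARIABLE_TOKENS:
        message = pattern.sub(placeholder, message)
    return message

print(to_template("Connection from 10.0.0.7 dropped after 31 retries"))
# -> "Connection from <IP> dropped after <NUM> retries"
```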

You can use enrichment techniques if data doesn't arrive with a required field, an appropriate feature or other necessary information. Enrichment works around the gap by finding a key in the event that can be cross-referenced with an external data source.
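Here's a minimal enrichment sketch. A hard-coded dictionary stands in for the external reference (a CMDB or inventory API, say), and the field names are hypothetical; the event's source hostname acts as the cross-reference key.

```python
# Hypothetical external data source mapping hostnames to context.
HOST_INVENTORY = {
    "web-01": {"service": "checkout", "region": "us-east-1", "owner": "payments-team"},
}

def enrich(event: dict) -> dict:
    """Fill in missing fields by cross-referencing the event's source."""
    extra = HOST_INVENTORY.get(event.get("source", ""), {})
    return {**event, **extra}

print(enrich({"source": "web-01", "severity": "ERROR"}))
```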

How Does Observability with AIOps Reduce Toil?

When you have normalized data, you can use AI to detect problems quickly through correlation and deduplication. Imagine if your system fails and you have to dig through hundreds of logs to see how the environment changed. That's time-consuming, not to mention boring.

Correlate, or group, data based on common characteristics like service, class or description field. Time is also handy operational information and serves as a practical classifier. Let's go back to our system failure. If you just made an environmental change, understanding the time the alerts came in helps pinpoint the problem.
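A bare-bones version of this idea, sketched below, groups alerts that share a service and arrive within a fixed time window of one another. The 120-second window and the alert fields are illustrative choices, not recommendations.

```python
from collections import defaultdict

WINDOW_SECONDS = 120  # illustrative correlation window

def correlate(alerts: list[dict]) -> list[list[dict]]:
    """alerts: dicts with a 'service' key and an epoch-second 'time' key."""
    by_service = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["time"]):
        by_service[alert["service"]].append(alert)
    groups = []
    for service_alerts in by_service.values():
        current = [service_alerts[0]]
        for alert in service_alerts[1:]:
            # Same service and close in time: treat as one incident.
            if alert["time"] - current[-1]["time"] <= WINDOW_SECONDS:
                current.append(alert)
            else:
                groups.append(current)
                current = [alert]
        groups.append(current)
    return groups

print(correlate([
    {"service": "checkout", "time": 100},
    {"service": "checkout", "time": 150},
    {"service": "search", "time": 160},
    {"service": "checkout", "time": 900},
]))
```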

Correlation can also mimic human behavior, which is a challenge for most computer systems. For example, online checkout processes are complex, with many integrated, interdependent parts. An intelligent observability tool with AIOps can correlate alerts related to a checkout process using NLP. If checkout becomes an issue, your observability platform can group all of the alerts associated with the stem word "check," which accommodates derivations and variations like "checking," "Check," and "check out."
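The toy sketch below shows the stemming idea only: a crude suffix-stripping stemmer maps "Checking," "Check" and "checks" to the stem "check," so alerts containing any of those variants land in the same group. Production NLP would use a proper stemmer or lemmatizer; this is just to make the mechanism visible.

```python
def crude_stem(word: str) -> str:
    """Toy stemmer: lowercase, then strip one common suffix."""
    word = word.lower()
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def group_by_stem(alerts: list[str], stem: str) -> list[str]:
    """Return the alerts whose descriptions contain a word with this stem."""
    return [a for a in alerts if any(crude_stem(w) == stem for w in a.split())]

alerts = ["Checking inventory failed", "Check out service slow", "CPU overloaded"]
print(group_by_stem(alerts, "check"))
# -> ['Checking inventory failed', 'Check out service slow']
```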

Let's move on to the benefits of deduplicating normalized data. You're working and, suddenly, a "CPU overloaded" alert pops up. You start fixing the issue, but another "CPU overloaded" alert hits your inbox. And it's followed by 30 more similar alerts. That's distracting and not particularly useful.

Deduplication reduces noise and minimizes incident volumes by eliminating excessive copies of the data. Instead of the monitoring system telling you that the CPU is overloaded 32 separate times, AI compresses repeated messages into one stateful message. Deduplication can seem trivial, especially compared to techniques like NLP, but the devil is in the details: the system must recognize when a message signals a new issue rather than a repeat of an existing one.
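Here's a minimal stateful deduplication sketch: alerts with the same key that arrive within an expiry window increment a counter on a single record, while an alert arriving after the window starts a fresh record (a new issue). The key choice and the 300-second window are assumptions for illustration.

```python
from dataclasses import dataclass

EXPIRY_SECONDS = 300  # illustrative threshold for "same ongoing issue"

@dataclass
class DedupRecord:
    first_seen: float
    last_seen: float
    count: int = 1

class Deduplicator:
    def __init__(self) -> None:
        self.records: dict[tuple[str, str], DedupRecord] = {}

    def ingest(self, source: str, message: str, when: float) -> DedupRecord:
        key = (source, message)
        record = self.records.get(key)
        if record and when - record.last_seen <= EXPIRY_SECONDS:
            record.count += 1      # same ongoing issue: bump the count
            record.last_seen = when
        else:
            record = DedupRecord(first_seen=when, last_seen=when)  # new issue
            self.records[key] = record
        return record

dedup = Deduplicator()
for t in (0, 30, 60, 1000):  # the last alert falls outside the window
    rec = dedup.ingest("web-01", "CPU overloaded", t)
print(rec.count, rec.first_seen)  # -> 1 1000 (a new record after the gap)
```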

Intelligent observability with AIOps centralizes data and makes it easier for teams to understand. And when these systems detect incidents, AI-enabled correlation and deduplication minimize the impact of this unplanned work. The downstream effects on DevOps practitioners and SRE teams are significant. These teams can spend less time putting out fires and more time keeping up with the constant demand to innovate and delight customers.

Richard Whitehead is Chief Evangelist at Moogsoft
