Mezmo Unveils Observability Pipeline

Mezmo unveiled its Observability Pipeline, which enables teams to control, enrich, and correlate machine data for actionable insights and faster decisions.

Mezmo's Observability Pipeline helps organizations better control their observability data and deliver increasing business value. It centralizes the flow of data from various sources, adds context to make data more valuable, and then routes it to destinations to drive actionability.

“Data provides a competitive advantage, but organizations struggle to extract real value. First-generation observability data pipelines focus primarily on data movement and control, reducing the amount of data collected, but fall short on delivering value. Preprocessing data is a great first step,” said Tucker Callaway, CEO, Mezmo. “We’ve built on that foundation and our success in making log data actionable to create a smart observability data pipeline that enriches and correlates high volumes of data in motion to provide additional context and drive action.”

Mezmo’s Observability Pipeline provides access and control to ensure that the right data is flowing into the right systems in the right format for analysis, minimizing costs and enabling new workflows. This smart pipeline integrates Mezmo’s best-in-class log analysis features, including search, alerting, and visualization capabilities, to augment and analyze data in motion, delivering intelligent, actionable insights to mitigate risk and make decisions faster.
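
Mezmo has not published how these in-stream checks are expressed; as a minimal, purely hypothetical sketch (the check_alert() helper, its field names, and the threshold are invented for illustration), alerting on data in motion amounts to evaluating a rule against each event as it flows through:

    # Purely illustrative sketch; check_alert() and its fields are hypothetical,
    # not Mezmo's actual alerting API.
    def check_alert(event, threshold=0.05):
        """Flag an event in flight when its error_rate field crosses a threshold."""
        if event.get("error_rate", 0) > threshold:
            print(f"ALERT: {event['service']} error rate {event['error_rate']:.0%}")

    check_alert({"service": "api-gateway", "error_rate": 0.12})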

The flexible, easy-to-use solution enriches workflows, streamlines the adoption of best practices, and enables new observability data use cases. Customers can route data from any source, including cloud platforms, Fluentd, Logstash, and Syslog, to many destinations, such as Splunk, S3, and Mezmo’s Log Analysis platform, to support a variety of use cases.
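
To make the routing idea concrete, here is a minimal sketch in Python; the Pipeline class, the deliver() stub, and the source and destination names are all hypothetical and do not reflect Mezmo's actual configuration or API:

    # Illustrative only: class names, sources, and destinations are hypothetical.
    def deliver(event, destination):
        # Stand-in for a real exporter (e.g., a Splunk HEC call or an S3 upload)
        print(f"-> {destination}: {event['message']}")

    class Pipeline:
        def __init__(self):
            self.routes = {}  # source name -> list of destination names

        def add_route(self, source, destinations):
            self.routes.setdefault(source, []).extend(destinations)

        def dispatch(self, event):
            # Fan an incoming event out to every destination configured for its source
            for destination in self.routes.get(event["source"], []):
                deliver(event, destination)

    pipeline = Pipeline()
    pipeline.add_route("syslog", ["splunk", "s3", "mezmo_log_analysis"])
    pipeline.dispatch({"source": "syslog", "message": "disk 91% full on host-7"})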

Support for OpenTelemetry further simplifies data ingestion and makes data more actionable through enrichment of OpenTelemetry attributes.
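
The announcement does not detail the enrichment mechanics; as a rough sketch, adding context could look like attaching OpenTelemetry-style resource attributes to each record. The SERVICE_METADATA table and enrich() helper below are invented for illustration, though the attribute keys follow OpenTelemetry semantic conventions:

    # Hypothetical enrichment step; only the attribute keys come from
    # OpenTelemetry semantic conventions.
    SERVICE_METADATA = {
        "checkout": {"service.name": "checkout", "deployment.environment": "prod"},
    }

    def enrich(record):
        """Attach OTel-style resource attributes based on the record's origin."""
        extra = SERVICE_METADATA.get(record.get("service"), {})
        record.setdefault("attributes", {}).update(extra)
        return record

    print(enrich({"service": "checkout", "body": "payment accepted"}))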

Mezmo also helps transform sensitive data, such as PII, to meet regulatory and compliance requirements. Control features simplify the management of multiple sources and destinations while protecting against runaway data flow.
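
Mezmo's actual transformation rules are not described in the announcement; a generic in-pipeline redaction step of this kind, with purely illustrative patterns and field names, might look like:

    import re

    # Illustrative redaction rules; real compliance policies would cover far more.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    def redact_pii(message):
        """Mask common PII patterns before an event leaves the pipeline."""
        message = EMAIL.sub("[REDACTED_EMAIL]", message)
        return SSN.sub("[REDACTED_SSN]", message)

    print(redact_pii("user jane@example.com (SSN 123-45-6789) logged in"))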

The Latest

Seeing is believing, or in this case, seeing is understanding, according to New Relic's 2025 Observability Forecast for Retail and eCommerce report. Retailers who want to provide exceptional customer experiences while improving IT operations efficiency are leaning on observability ... Here are five key takeaways from the report ...

Technology leaders across the federal landscape are facing, and will continue to face, an uphill battle when it comes to fortifying their digital environments against hostile and persistent threat actors. On one hand, they are being asked to push digital transformation ... On the other hand, they are facing the fiscal uncertainty of continuing resolutions (CRs) and government shutdowns looming near and far. In the face of these challenges, CIOs, CTOs, and CISOs must figure out how to modernize legacy systems and infrastructure while doing more with less and still defending against external and internal threats ...

Reliability is no longer proven by uptime alone, according to The SRE Report 2026 from LogicMonitor. In the AI era, it is experienced through speed, consistency, and user trust, and increasingly judged by business impact. As digital services grow more complex and AI systems move into production, traditional monitoring approaches are struggling to keep pace, increasing the need for AI-first observability that spans applications, infrastructure, and the Internet ...

If AI is the engine of a modern organization, then data engineering is the road system beneath it. You can build the most powerful engine in the world, but without paved roads, traffic signals, and bridges that can support its weight, it will stall. In many enterprises, the engine is ready. The roads are not ...

In the world of digital-first business, there is no tolerance for service outages. Businesses know that outages are the quickest way to lose money and customers. For smaller organizations, unplanned downtime could even force the business to close ... A new study from PagerDuty, The State of AI-First Operations, reveals that companies actively incorporating AI into operations now view operational resilience as a growth driver rather than a cost center. But how are they achieving it? ...

In live financial environments, capital markets software cannot pause for rebuilds. New capabilities are introduced as stacked technology layers to meet evolving demands while systems remain active, data keeps moving, and controls stay intact. AI is no exception, and its opportunities are significant: accelerated decision cycles, compressed manual workflows, and more effective operations across complex environments. The constraint isn't the models themselves, but the architectural environments they enter ...

As with most digital transformation shifts, organizations often prioritize productivity and leave security and observability struggling to keep pace. This usually translates to both the mass implementation of new technology and fragmented monitoring and observability (M&O) tooling. In the era of AI and varied cloud architecture, a disparate observability function can be dangerous. IT teams will lack a complete picture of their IT environment, making it harder to diagnose issues and lengthening mean time to resolve (MTTR). In fact, according to recent data from the SolarWinds State of Monitoring & Observability Report, 77% of IT personnel said the lack of visibility across their on-prem and cloud architecture was an issue ...

In MEAN TIME TO INSIGHT Episode 23, Shamus McGillicuddy, VP of Research, Network Infrastructure and Operations, at EMA discusses the NetOps labor shortage ... 

Technology management is evolving, and in turn, so is the scope of FinOps. The FinOps Foundation recently updated their mission statement from "advancing the people who manage the value of cloud" to "advancing the people who manage the value of technology." This seemingly small change solidifies a larger evolution: FinOps practitioners have organically expanded to be focused on more than just cloud cost optimization. Today, FinOps teams are largely — and quickly — expanding their job descriptions, evolving into a critical function for managing the full value of technology ...

Enterprises are under pressure to scale AI quickly. Yet despite considerable investment, adoption continues to stall. One of the most overlooked reasons is vendor sprawl ... In reality, no organization deliberately sets out to create sprawling vendor ecosystems. More often, complexity accumulates over time through well-intentioned initiatives, such as enterprise-wide digital transformation efforts, point solutions, or decentralized sourcing strategies ...
