
Mezmo unveiled new solutions to optimize observability costs for Datadog users.
Mezmo Telemetry Pipeline now includes insights and optimization workflows for Datadog users, giving SREs and developers the flexibility to profile and reduce large telemetry data volumes, improving cost efficiency and extracting more value from their data.
Mezmo's new optimization workflow is designed to help teams easily understand the data in the stream and decide where to direct it before it is stored in Datadog. With a clear view of which data is most valuable and which is most costly, teams can consolidate common data patterns and adjust storage, reducing log volume by as much as 40%. The simple, self-guided workflow delivers faster time to value: teams can begin reducing data volume and seeing cost savings in as little as 15 minutes.
“Datadog generates massive amounts of telemetry data, and companies are forced to store it all because they cannot easily determine what is important. Then, at the end of the billing cycle, they are stunned by the ever-increasing costs,” said Lauren Nagel, VP of Product for Mezmo. “Mezmo helps them cut through the noise to understand their data; work smarter, not harder; and, ultimately, identify opportunities for cost optimization that align with business goals.”
Keeping all telemetry data in a full-stack observability tool like Datadog is noisy, hard to manage, and expensive. Mezmo's new capabilities make it easier for companies to streamline data management while slashing observability costs. With Mezmo, teams get:
- Dedicated Datadog cost optimization workflow: Users can employ Mezmo Flow, a guided experience for building telemetry pipelines, to profile Datadog logs, metrics, and tags to better understand operational value and estimate billing impact. This solution allows teams to discern which data is valuable, identify repetitive patterns, and apply optimizations that reduce overall data volume, helping to manage costs and avoid overage charges. Streamlining data processing before it reaches Datadog lets companies keep Datadog costs predictable and get the most value from their data without unnecessary spend (the first sketch after this list illustrates the general idea of pre-ingest profiling).
- Responsive pipelines: Responsive pipelines let SREs and developers dynamically adjust telemetry data processing based on triggers such as incidents and deployments, automatically providing high-fidelity data for troubleshooting. At the same time, live tail instantly streams parsed data, allowing teams to quickly spot and resolve issues as they occur, resulting in faster mean time to resolution (MTTR), reduced data costs, and more effective incident response. Teams can also use a four-hour “rewind buffer” to retrieve full-fidelity data from the window immediately before an incident occurred. Available in private beta, this capability ensures that teams have the data needed to answer key questions about what happened pre-incident and to diagnose the root cause more quickly.
- Advanced trace sampling for optimal data insight: Users can now choose how they want to sample their trace data, using either head-based or tail-based sampling, to reduce noise and accelerate insight discovery. SREs and developers can be confident that they have the traces needed for troubleshooting, making them more productive while reducing MTTR. Reducing the mental toil of managing data leads to better developer experiences, greater opportunities for innovation, and better business outcomes. (The second sketch after this list illustrates the difference between the two sampling approaches.)
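Mezmo has not published the internals of its profiling workflow, but the underlying technique of profiling a log stream by normalized pattern and dropping low-value entries before they reach a paid backend is straightforward to sketch. The minimal Python example below is purely illustrative: the sample log lines and the pattern_key, profile, and reduce_volume names are hypothetical stand-ins, not Mezmo or Datadog APIs.

```python
import re
from collections import Counter

# Hypothetical sample of raw log lines; in practice these would come from
# the telemetry stream that feeds Datadog.
RAW_LOGS = [
    "2024-05-01T12:00:01Z DEBUG health-check ok",
    "2024-05-01T12:00:02Z INFO user login id=42",
    "2024-05-01T12:00:03Z DEBUG health-check ok",
    "2024-05-01T12:00:04Z ERROR payment failed id=42",
    "2024-05-01T12:00:05Z DEBUG health-check ok",
]

def pattern_key(line: str) -> str:
    """Collapse timestamps and numbers so repetitive lines share one pattern."""
    line = re.sub(r"\d{4}-\d{2}-\d{2}T[\d:]+Z", "<ts>", line)
    return re.sub(r"\d+", "<n>", line)

def profile(lines: list[str]) -> Counter:
    """Profile the stream: count how often each normalized pattern occurs."""
    return Counter(pattern_key(line) for line in lines)

def reduce_volume(lines: list[str], drop_levels=("DEBUG",)) -> list[str]:
    """Drop low-value severities before they reach the paid backend."""
    return [line for line in lines if not any(lvl in line for lvl in drop_levels)]

if __name__ == "__main__":
    for pattern, count in profile(RAW_LOGS).most_common():
        print(f"{count}x  {pattern}")
    kept = reduce_volume(RAW_LOGS)
    print(f"kept {len(kept)}/{len(RAW_LOGS)} lines "
          f"({1 - len(kept) / len(RAW_LOGS):.0%} volume reduction)")
```

Running the sketch surfaces the repetitive health-check pattern immediately, and dropping DEBUG lines cuts the sample stream by 60% before anything is billed downstream; a real pipeline would apply the same profile-then-filter loop continuously rather than in a batch script.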
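Head-based and tail-based sampling are standard, vendor-neutral techniques, and the trade-off is easy to show in code. In the sketch below (the Span type, function names, rates, and thresholds are illustrative assumptions, not Mezmo's implementation), head-based sampling decides at the start of a trace, so it is cheap but can miss rare failures, while tail-based sampling buffers the full trace and keeps anything that looks interesting.

```python
import random
from dataclasses import dataclass

@dataclass
class Span:
    trace_id: str
    duration_ms: float
    is_error: bool = False

def head_sample(trace_id: str, rate: float = 0.10) -> bool:
    """Head-based: decide when the trace starts, before any spans are seen.
    Seeding with the trace ID keeps the decision consistent across services."""
    return random.Random(trace_id).random() < rate

def tail_sample(spans: list[Span], latency_ms: float = 500.0) -> bool:
    """Tail-based: buffer the whole trace, then keep it only if something
    interesting happened (an error or an unusually slow span)."""
    return any(s.is_error or s.duration_ms > latency_ms for s in spans)

if __name__ == "__main__":
    trace = [
        Span("abc123", duration_ms=12.0),
        Span("abc123", duration_ms=830.0),  # slow span: tail sampling keeps it
    ]
    print("head-based keeps it:", head_sample("abc123"))  # luck of the draw
    print("tail-based keeps it:", tail_sample(trace))     # always True here
```

Seeding the head-based decision with the trace ID is a common trick so that every service in a distributed system makes the same keep-or-drop choice for a given trace; the cost of tail-based sampling is the memory and latency of buffering the whole trace before deciding.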
The Latest
Enterprises are under pressure to scale AI quickly. Yet despite considerable investment, adoption continues to stall. One of the most overlooked reasons is vendor sprawl ... In reality, no organization deliberately sets out to create sprawling vendor ecosystems. More often, complexity accumulates over time through well-intentioned initiatives, such as enterprise-wide digital transformation efforts, point solutions, or decentralized sourcing strategies ...
Nearly every conversation about AI eventually circles back to compute. GPUs dominate the headlines while cloud platforms compete for workloads and model benchmarks drive investment decisions. But underneath that noise, a quieter infrastructure challenge is taking shape. The real bottleneck in enterprise AI is not processing power; it is the ability to store, manage, and retrieve the relentless volumes of data that AI systems generate, consume, and multiply ...
The 2026 Observability Survey from Grafana Labs paints a vivid picture of an industry maturing fast, where AI is welcomed with careful conditions, SaaS economics are reshaping spending decisions, complexity remains a defining challenge, and open standards continue to underpin it all ...
The observability industry has an evolving relationship with AI. We're not skeptics, but it's clear that trust in AI must be earned ... In Grafana Labs' annual Observability Survey, 92% said they see real value in AI surfacing anomalies before they cause downtime. Another 91% endorsed AI for forecasting and root cause analysis. So while the demand is there, customers need it to be trustworthy, as the survey also found that the practitioners most enthusiastic about AI are also the most insistent on explainability ...
In the modern enterprise, the conversation around AI has moved past skepticism toward a stage of active adoption. According to our 2026 State of IT Trends Report: The Human Side of Autonomous AI, nearly 90% of IT professionals view AI as a net positive, and this optimism is well-founded. We are seeing agentic AI move beyond simple automation to actively streamlining complex data insights and eliminating the manual toil that has long hindered innovation. However, as we integrate these autonomous agents into our ecosystems, the fundamental DNA of the IT role is evolving ...
AI workloads require an enormous amount of computing power ... What's also becoming abundantly clear is just how quickly AI's computing needs are leading to enterprise systems failure. According to Cockroach Labs' State of AI Infrastructure 2026 report, enterprise systems are much closer to failure than their organizations realize. The report ... suggests AI scale could cause widespread failures in as little as one year — making it a clear risk for business performance and reliability.
The quietest week your engineering team has ever had might also be its best. No alarms going off. No escalations. No frantic Teams or Slack threads at 2 a.m. Everything humming along exactly as it should. And somewhere in a leadership meeting, someone looks at the metrics dashboard, sees a flat line of incidents and says: "Seems like things are pretty calm over there. Do we really need all those people?" ... I've spent many years in engineering, and this pattern keeps repeating ...
The gap is widening between what teams spend on observability tools and the value they receive amid surging data volumes and budget pressures, according to The Breaking Point for Observability Leaders, a report from Imply ...
Seamless shopping is a basic demand of today's boundaryless consumer — one with little patience for friction, limited tolerance for disconnected experiences and minimal hesitation in switching brands. Customers expect intuitive, highly personalized experiences and the ability to move effortlessly across physical and digital channels within the same journey. Failure to deliver can cost dearly ...
If your best engineers spend their days sorting tickets and resetting access, you are wasting talent. New global data shows that employees in the IT sector rank among the least motivated across industries. They're under pressure from many angles: the push to upskill and uncertainty about what agentic AI means for job security are creating anxiety. Meanwhile, these roles often function like on-call jobs and involve many repetitive tasks ...