
Mezmo Announces Cost Optimization Workflow for Datadog Users

Mezmo unveiled new solutions to optimize observability costs for Datadog users. 

Mezmo Telemetry Pipeline now includes comprehensive insights and optimization workflows for Datadog users, providing SREs and developers with the flexibility needed to profile and reduce large telemetry data volumes, thereby improving cost efficiency and maximizing value from their data.

Mezmo's new optimization workflow is designed to help teams quickly understand the data in the stream and decide where to direct it before it is stored in Datadog. With a clear view of what data is most valuable — and most costly — teams can consolidate common data patterns and adjust what gets stored, reducing log volume by as much as 40%. The simple, self-guided workflow ensures faster time to value: teams can begin reducing data volume and seeing cost optimization in as little as 15 minutes.
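To make the idea of consolidating common data patterns concrete, here is a minimal, generic sketch of pattern-based log consolidation performed before data is forwarded to Datadog. It is not Mezmo's pipeline configuration or API; the normalize/consolidate functions, the log fields, and the repeat_count annotation are assumptions made purely for illustration.

```python
import re
from collections import defaultdict

# Hypothetical illustration of pattern-based log consolidation in a pipeline
# stage that runs before logs reach Datadog. This is NOT Mezmo's API; it only
# sketches the general idea of collapsing repetitive messages into one
# representative event plus a count.

def normalize(message: str) -> str:
    """Collapse variable parts (hex ids, numbers) so similar logs share a pattern."""
    message = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", message)
    message = re.sub(r"\d+", "<NUM>", message)
    return message

def consolidate(logs: list[dict]) -> list[dict]:
    """Return one representative log per pattern, annotated with a repeat count."""
    groups: dict[str, dict] = {}
    counts: dict[str, int] = defaultdict(int)
    for log in logs:
        pattern = normalize(log["message"])
        counts[pattern] += 1
        groups.setdefault(pattern, log)  # keep the first occurrence as the representative
    return [{**log, "repeat_count": counts[pattern]} for pattern, log in groups.items()]

if __name__ == "__main__":
    raw = [
        {"message": "connection timeout after 5000 ms", "service": "checkout"},
        {"message": "connection timeout after 7000 ms", "service": "checkout"},
        {"message": "user 42 logged in", "service": "auth"},
    ]
    reduced = consolidate(raw)
    print(f"reduced {len(raw)} events to {len(reduced)}")  # -> reduced 3 events to 2
```

In practice the reduction achievable depends on how repetitive the log stream is; the 40% figure cited above is Mezmo's own estimate, not an output of this sketch.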

“Datadog generates massive amounts of telemetry data, and companies are forced to store it all because they cannot easily determine what is important. Then, at the end of the billing cycle, they are stunned by the ever-increasing costs,” said Lauren Nagel, VP of Product for Mezmo. “Mezmo helps them cut through the noise to understand their data; work smarter, not harder; and, ultimately, identify opportunities for cost optimization that align with business goals.”

Keeping all telemetry data in a full-stack observability tool, like Datadog, is noisy, challenging to manage, and expensive. Mezmo’s new capabilities make it easier for companies to streamline data management while slashing observability costs. With Mezmo, teams get:

  • Dedicated Datadog cost optimization workflow: Users can employ Mezmo Flow, a guided experience for building telemetry pipelines, to profile Datadog logs, metrics, and tags to better understand operational value and estimate billing impact. This solution allows teams to discern what data is valuable, identify repetitive patterns, and apply optimizations to reduce overall data volume, helping to manage costs and avoid overage charges. Streamlining data processing before it reaches Datadog allows companies to manage Datadog costs predictably and ensure that they're getting the most value from their data without unnecessary spend.
  • Responsive pipelines: Empowering SREs and developers, responsive pipelines enable the dynamic adjustment of telemetry data processing based on triggers such as incidents and deployments, automatically providing high-fidelity data for troubleshooting. At the same time, live tail instantly streams parsed data, allowing teams to quickly spot and resolve issues as they occur, resulting in faster mean time to resolution (MTTR), reduced data costs, and more effective incident response. Teams can also leverage a 4-hour “rewind buffer” that makes full-fidelity data from the period leading up to an incident immediately available. Available in private beta, this capability ensures that teams have the data needed to answer key questions about what happened pre-incident and facilitates a quicker diagnosis of the root cause.
  • Advanced trace sampling for optimal data insight: Users can now choose how they want to sample their trace data — either head-based or tail-based sampling (sketched after this list) — to reduce noise and accelerate insight discovery. SREs and developers can be confident that they have the necessary traces for troubleshooting, making them more productive while reducing MTTR. Reducing the mental toil of managing data leads to improved developer experiences, greater opportunities for innovation, and better business outcomes.
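For readers unfamiliar with the head-based vs. tail-based distinction, the sketch below illustrates the two strategies in generic terms. It is not a representation of Mezmo's sampling implementation; the record shape (trace_id, duration_ms, error) and the thresholds are assumptions chosen only to show where each decision is made.

```python
import random

# Generic sketch of the two trace-sampling strategies mentioned above.
# Not Mezmo's implementation; the record shape and thresholds are
# assumptions made for illustration only.

def head_sample(trace_id: str, rate: float = 0.1) -> bool:
    """Head-based: decide at the start of a trace, before spans are collected.
    Cheap and uniform, but may discard traces that later turn out to be interesting."""
    random.seed(trace_id)          # deterministic per trace so all spans agree
    return random.random() < rate

def tail_sample(trace: dict, rate: float = 0.1) -> bool:
    """Tail-based: decide after the whole trace is assembled, so errors and
    slow requests can always be kept while healthy traffic is down-sampled."""
    if trace["error"] or trace["duration_ms"] > 1000:
        return True                # always keep anomalous traces
    random.seed(trace["trace_id"])
    return random.random() < rate

if __name__ == "__main__":
    trace = {"trace_id": "abc123", "duration_ms": 2400, "error": False}
    print(head_sample(trace["trace_id"]))  # may drop this slow trace
    print(tail_sample(trace))              # True: kept because it is slow
```

The trade-off the bullet alludes to follows from where the decision sits: head-based sampling is simpler and cheaper, while tail-based sampling can guarantee that error and high-latency traces are retained.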

The Latest

Cyber threats are growing more sophisticated every day, and at their forefront are zero-day vulnerabilities. These elusive security gaps are exploited before a fix becomes available, making them among the most dangerous threats in today's digital landscape ... This guide will explore what these vulnerabilities are, how they work, why they pose such a significant threat, and how modern organizations can stay protected ...

The prevention of data center outages continues to be a strategic priority for data center owners and operators. Infrastructure equipment has improved, but the complexity of modern architectures and evolving external threats present new risks that operators must actively manage, according to the Data Center Outage Analysis 2025 from Uptime Institute ...

As observability engineers, we navigate a sea of telemetry daily. We instrument our applications, configure collectors, and build dashboards, all in pursuit of understanding our complex distributed systems. Yet, amidst this flood of data, a critical question often remains unspoken, or at best, answered by gut feeling: "Is our telemetry actually good?" ... We're inviting you to participate in shaping a foundational element for better observability: the Instrumentation Score ...

We're inching ever closer toward a long-held goal: technology infrastructure that is so automated that it can protect itself. But as IT leaders aggressively employ automation across our enterprises, we need to continuously reassess what AI is ready to manage autonomously and what cannot yet be trusted to algorithms ...

Much like a traditional factory turns raw materials into finished products, the AI factory turns vast datasets into actionable business outcomes through advanced models, inferences, and automation. From the earliest data inputs to the final token output, this process must be reliable, repeatable, and scalable. That requires industrializing the way AI is developed, deployed, and managed ...

Almost half (48%) of employees admit they resent their jobs but stay anyway, according to research from Ivanti ... This has obvious consequences across the business, but we're overlooking the massive impact of resenteeism and presenteeism on IT. For IT professionals tasked with managing the backbone of modern business operations, these numbers spell big trouble ...

For many B2B and B2C enterprise brands, technology isn't a core strength. Relying on overly complex architectures (like those that follow a pure MACH doctrine) has been flagged by industry leaders as a source of operational slowdown, creating bottlenecks that limit agility in volatile market conditions ...

FinOps champions crucial cross-departmental collaboration, uniting business, finance, technology and engineering leaders to demystify cloud expenses. Yet, too often, critical cost issues are softened into mere "recommendations" or "insights" — easy to ignore. But what if we adopted security's battle-tested strategy and reframed these as the urgent risks they truly are, demanding immediate action? ...

Two in three IT professionals now cite growing complexity as their top challenge — an urgent signal that the modernization curve may be getting too steep, according to the Rising to the Challenge survey from Checkmk ...

While IT leaders are becoming more comfortable and adept at balancing workloads across on-premises, colocation data centers and the public cloud, there's a key component missing: connectivity, according to the 2025 State of the Data Center Report from CoreSite ...
