
Mezmo unveiled Mezmo Flow, a guided experience for building telemetry pipelines.
With Mezmo Flow, users can quickly onboard new log sources, profile data, and implement recommended optimizations with a single click, reducing log volumes by more than 40%. With this release, Mezmo enables next-generation log management: a pipeline-first log analysis solution that helps companies control incoming data volumes, identify the most valuable data, and glean insights faster, without the need to index data in expensive observability tools.
Mezmo supports intelligent pipelines that automatically analyze telemetry data sources, identify noisy log patterns, and apply data-optimizing steps before routing to any observability platform. With Mezmo Flow, users can create their first log volume reduction pipeline in less than 15 minutes, retaining the most valuable data while preventing unnecessary charges, overages, and spikes. Next-generation log management takes a pipeline-first approach to log analysis, improving the quality of critical application logs and their signal-to-noise ratio to increase developer productivity. Alerts and notifications on data in motion help users act quickly on accidental application log volume spikes or changes in metrics.
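To make the pattern-reduction idea concrete, here is a minimal, illustrative Python sketch of one way such a pipeline could work: profile log lines into rough templates, then sample the noisiest patterns so only a fraction is routed downstream. This is not Mezmo's implementation; the `template()` masking, the threshold, and the sampling rate are all hypothetical.

```python
import re
from collections import Counter

# Illustrative sketch only, not Mezmo's implementation. It mimics the two
# steps described above: profile log lines into rough patterns, then sample
# the noisiest patterns so only a fraction is routed downstream.

NOISY_THRESHOLD = 100  # hypothetical: patterns seen more often count as noise
SAMPLE_EVERY = 10      # hypothetical: keep 1 in 10 lines of a noisy pattern

def template(line: str) -> str:
    """Mask digits so repeated messages collapse into one pattern."""
    return re.sub(r"\d+", "<N>", line)

def reduce_volume(lines: list[str]):
    counts = Counter(template(l) for l in lines)  # profiling pass
    seen = Counter()
    for line in lines:                            # optimization pass
        t = template(line)
        if counts[t] > NOISY_THRESHOLD:
            seen[t] += 1
            if seen[t] % SAMPLE_EVERY != 0:
                continue                          # drop most noisy lines
        yield line  # retained lines are routed to the observability platform
```

A production pipeline would do this incrementally over streaming data rather than in two passes over a buffered list, but the profile-then-optimize shape is the same.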
“Users want an easy way to access and utilize the telemetry data,” said Tucker Callaway, CEO, Mezmo. “With Mezmo Flow, SREs and developers can process the most relevant data to debug and troubleshoot issues quickly, allowing them to remain focused on delivering innovative products to their customers.”
As part of its recent release, Mezmo is also introducing a series of new capabilities to simplify action and control for developers and SREs. These include:
- Data profiler enhancements: Analyze and understand structured and unstructured logs while continuously monitoring log volume trends across applications.
- Processor groups: Create multifunctional, reusable pipeline components, improving pipeline development time and ensuring standardization and governance over data management.
- Shared resources: Configure sources once and use them for multiple pipelines. This ensures data is delivered to the right users in their preferred tools with as little overhead as possible.
- Data aggregation for insights: Collect and aggregate telemetry metrics, such as log volume or errors, per application, host, and user-defined label. The aggregated data is available as interactive reports that surface trends such as application log volume or error rates, and it can be used to detect anomalies such as volume surges and alert users before they turn into overages. (A simplified sketch of this follows the list.)
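As a rough illustration of the aggregation capability, and assuming nothing about Mezmo's internals, the sketch below tallies log volume and error counts per application and flags volume surges against a per-app baseline. The event shape and the `SURGE_FACTOR` threshold are invented for the example.

```python
from collections import defaultdict

# Illustrative sketch only: aggregates log volume and error counts per
# application and flags volume surges against a baseline. The event shape
# and SURGE_FACTOR are hypothetical values for this example.

SURGE_FACTOR = 3  # hypothetical: alert when volume exceeds 3x the baseline

def aggregate(events):
    """events: iterable of dicts like {"app": "checkout", "level": "error"}."""
    stats = defaultdict(lambda: {"volume": 0, "errors": 0})
    for e in events:
        s = stats[e["app"]]
        s["volume"] += 1
        if e.get("level") == "error":
            s["errors"] += 1
    return stats

def detect_surges(current, baseline):
    """Return apps whose current volume exceeds SURGE_FACTOR x baseline."""
    return [app for app, s in current.items()
            if s["volume"] > SURGE_FACTOR * baseline.get(app, float("inf"))]
```

Feeding `detect_surges()` the output of `aggregate()` for the current window, along with a baseline window, yields the list of applications worth alerting on.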
The Latest
In MEAN TIME TO INSIGHT Episode 12, Shamus McGillicuddy, VP of Research, Network Infrastructure and Operations, at EMA discusses purchasing new network observability solutions ...
There's an image problem with mobile app security. While it's critical for highly regulated industries like financial services, it is often overlooked in others. This usually comes down to development priorities, which typically fall into three categories: user experience, app performance, and app security. Given finite time, shifting priorities, and varied team skill sets, engineering teams often have to prioritize one over the others. Usually, security is the odd man out ...

IT outages caused by poor-quality software updates are no longer rare incidents but frequent occurrences, directly impacting over half of US consumers. According to the 2024 Software Failure Sentiment Report from Harness, many now equate these failures to critical public health crises ...
In just a few months, Google will again head to Washington DC and meet with the government for a two-week remedy trial that will decide the fate of Chrome and its search business in the ongoing antitrust case(s). Or, Google may proactively make changes, putting the power in its own hands to outline a suitable remedy. Regardless of the outcome, one thing is certain: there will be far more implications for AI than just a shift in Google's Search business ...

In today's fast-paced digital world, Application Performance Monitoring (APM) is crucial for maintaining the health of an organization's digital ecosystem. However, the complexities of modern IT environments, including distributed architectures, hybrid clouds, and dynamic workloads, present significant challenges ... This blog explores the challenges of implementing APM and offers strategies for overcoming them ...
Service disruptions remain a critical concern for IT and business executives, with 88% of respondents saying they believe another major incident will occur in the next 12 months, according to a study from PagerDuty ...
IT infrastructure (on-premises, cloud, or hybrid) is becoming larger and more complex. IT management tools need data to drive better decision making and more process automation to complement manual intervention by IT staff. That is why smart organizations invest in the systems and strategies needed to make their IT infrastructure more resilient in the event of disruption, and why many are turning to application performance monitoring (APM) in conjunction with high availability (HA) clusters ...
In today's data-driven world, the management of databases has become increasingly complex and critical. The following are findings from Redgate's The State of the Database Landscape 2025 report ...
With the 2027 deadline for SAP S/4HANA migrations fast approaching, organizations are accelerating their transition plans ... For organizations that intend to remain on SAP ECC in the near-term, the focus has shifted to improving operational efficiencies and meeting demands for faster cycle times ...
As applications expand and systems intertwine, performance bottlenecks, quality lapses, and disjointed pipelines threaten progress. To stay ahead, leading organizations are turning to three foundational strategies: developer-first observability, API platform adoption, and sustainable test growth ...