
Honeycomb announced a major milestone in helping enterprises maximize the value of their observability data.
Building toward a fully integrated telemetry pipeline, Honeycomb adds the ability to access archived telemetry data from low-cost storage with a single click for full-fidelity analysis, along with powerful new ways to sample telemetry data and control costs. The latest release helps enterprises enable engineers at all levels to debug production systems, optimize performance, and gain actionable insights while controlling observability spend.
Honeycomb helps customers mitigate the risk of AI in production by giving them fast access to the right contextual data without added overhead, out-of-control observability spend, or operational complexity.
"Telemetry pipelines have rapidly become essential infrastructure, but managing data volume is just the baseline," said Christine Yen, CEO of Honeycomb. "We envision a future where pipelines truly integrated into observability platforms can dynamically link data ingestion to its actual usage, fundamentally reshaping how engineering teams manage their observability costs. With our new ability to enhance datasets, we've effectively eliminated the tradeoffs in data management, ensuring teams retain access to all their critical telemetry—even the bits they hadn't thought would matter. True lossless observability is now a reality."
The new capabilities provide teams with complete control over telemetry pipelines so they can get clarity about how their systems and services behave in production, for any subset of users, with no dead ends.
- Enhance provides full-fidelity data on demand and within budget: Honeycomb users no longer have to think about where their data lives or wait for rehydration. They can instantly retrieve dropped or expired logs and traces from their own S3 storage, right from the Honeycomb UI, during escalations, audits, or incident retrospectives. This unlocks analysis of full-fidelity data when it matters most, without unnecessary spend or disrupted investigations.
- Pipeline Builder enables customized control without complexity: A built-in graphical interface eliminates the need to work with YAML or external configuration. Teams can now adapt their telemetry strategy on the fly by building pipelines that receive, transform, and export data, and by applying tail-based sampling to manage data volumes (a sketch of the kind of YAML this replaces appears after this list). This intuitive UI empowers developers, SREs, and observability teams to capture the right signals, accelerate troubleshooting, and tune their data strategy with ease. A fully integrated Pipeline Builder also sets the stage for tight feedback loops between the telemetry teams ingest and the insights they surface.
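For context, the configuration that a graphical pipeline builder typically abstracts away resembles the OpenTelemetry Collector's tail-sampling setup. The sketch below is illustrative rather than anything Honeycomb generates: a minimal Collector config, assuming the tail_sampling processor from the Collector contrib distribution, that keeps all error traces, keeps traces slower than 500 ms, and samples 10% of everything else.

```yaml
# Illustrative OpenTelemetry Collector config with tail-based sampling.
# Endpoints and policy names are placeholders, not Honeycomb output.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  tail_sampling:
    decision_wait: 10s              # buffer each trace before deciding
    policies:
      - name: keep-errors           # keep any trace containing an error
        type: status_code
        status_code:
          status_codes: [ERROR]
      - name: keep-slow             # keep traces slower than 500 ms
        type: latency
        latency:
          threshold_ms: 500
      - name: sample-the-rest      # keep 10% of everything else
        type: probabilistic
        probabilistic:
          sampling_percentage: 10

exporters:
  otlp:
    endpoint: backend.example.com:4317   # placeholder backend

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [tail_sampling]
      exporters: [otlp]
```

A trace is kept if any policy matches, which is what makes tail-based sampling effective for cost control: the interesting traces survive while routine traffic is thinned.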
These capabilities, paired with Honeycomb's support for OpenTelemetry, help enterprises scale their adoption of the open standard. Teams can also centrally manage, configure, and monitor large fleets of OpenTelemetry Collectors across Linux, Windows, Kubernetes, and even legacy environments from one unified control plane, minimizing the operational overhead of large-scale OpenTelemetry deployments.
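For teams already running Collectors, pointing them at Honeycomb is a standard OTLP export. The snippet below follows Honeycomb's documented OTLP/gRPC ingestion (api.honeycomb.io, with the API key in an x-honeycomb-team header); the HONEYCOMB_API_KEY environment variable name is an assumption for illustration.

```yaml
# Minimal Collector pipeline exporting to Honeycomb over OTLP/gRPC.
# HONEYCOMB_API_KEY is an assumed environment variable name.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}                          # batch spans to reduce export overhead

exporters:
  otlp:
    endpoint: api.honeycomb.io:443
    headers:
      x-honeycomb-team: ${env:HONEYCOMB_API_KEY}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```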