
Honeycomb announced that the Honeycomb Metrics feature is generally available for its enterprise customers.
Organizations using Honeycomb for observability now have a new metrics capability to quickly identify and resolve system issues.
Used together—Honeycomb’s observability platform for exploring application data and Honeycomb Metrics for monitoring system data—developers gain complete visibility into how their code is behaving and performing, from the individual request level down to the health of the underlying systems running that code. The sum of application behavior and system behavior determines the user’s experience, which can now be explored in its entirety without switching between tools, resulting in faster debugging workflows.
Honeycomb now gives engineering teams the best of both worlds: metrics for debugging system issues and event-based observability for debugging application issues. It provides a single interface for identifying and diagnosing complex performance and quality issues, regardless of where they originate. Organizations using Honeycomb can gain quick insight into application-level issues through observability and into system-level issues through Honeycomb Metrics. This reduces the need to context-switch and connect the dots between tools, lowering the team's cognitive load and speeding issue resolution.
“For too long, engineering teams have been forced to cobble together traditional monitoring tools and use metrics in ways that have proven ineffective in diagnosing the performance issues common in today’s complex environments,” said Christine Yen, CEO of Honeycomb. “Honeycomb Metrics, with native support for system-level metrics, together with event-driven observability at the application level, provides a best-in-class solution. It is a continuation of our commitment to help organizations across the globe boost their business performance.”
Honeycomb Metrics, available to all enterprise customers, ingests metrics data from OpenTelemetry, Prometheus, or Amazon CloudWatch. Visualizations of system metrics are then created alongside application data visualizations, so customers can quickly correlate underlying system behavior with application performance, or rule it out as a cause, making it substantially faster and easier to identify the source of issues.
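As an illustration of one of those ingest paths, the sketch below shows a minimal OpenTelemetry Collector configuration that scrapes Prometheus metrics and forwards them to Honeycomb over OTLP. This is an assumed, simplified fragment, not Honeycomb's official setup: the scrape target, dataset name, and `YOUR_API_KEY` are placeholders, and the endpoint and header names follow the conventions documented for the Collector's OTLP exporter.

```yaml
# Hypothetical OpenTelemetry Collector config (sketch):
# scrape a local Prometheus endpoint and export metrics to Honeycomb via OTLP.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: "app-metrics"          # placeholder job name
          scrape_interval: 30s
          static_configs:
            - targets: ["localhost:9090"]  # placeholder scrape target

exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY"        # placeholder API key
      "x-honeycomb-dataset": "system-metrics"   # placeholder dataset name

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [otlp]
```

With a pipeline like this in place, system metrics arrive in the same backend as the application's trace and event data, which is what enables the side-by-side visualizations described above.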