
Chronosphere is launching the Observability Data Optimization Cycle – a vendor-neutral framework to help companies regain control over observability data growth.
Chronosphere is also introducing new product features to support this framework, helping teams understand and optimize how they manage their cloud observability resources.
The Observability Data Optimization Cycle helps organizations understand and act on the cost of their observability data through a process of Analyzing, Refining, and Operating, supported by new features:
- Centralized Governance: Gives engineering teams broader authority to control data growth and predictability by equipping the Central Observability Team (COT) with information on how much data each team is using. It also assigns licensed capacity to individual teams so each can prioritize within its allotted amount of data.
- Usage Analyzer: Allows teams to view the cost and value of their data side by side, illustrating how and where the data is used, the volume of data used over a specific period of time, and which engineers are using the data.
- Shaping Policy UI: Helps teams preview the impact of shaping policies before implementing them so they can make adjustments when necessary.
- Derived Metrics: Makes metrics more straightforward by allowing organizations to store complex, high-value queries under more user-friendly names and visualizations (a minimal sketch of the idea follows this list).
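
To make the pattern concrete, here is a minimal sketch of the general idea behind derived metrics, in the spirit of Prometheus-style recording rules: a complex query expression is stored once under a readable name that dashboards and alerts can then reference. The `DerivedMetricRegistry` class and the `checkout_error_ratio` example are hypothetical illustrations, not Chronosphere's actual API.

```python
# Illustrative only: a minimal sketch of the "derived metric" idea.
# The class and metric names below are hypothetical, not Chronosphere's API.

class DerivedMetricRegistry:
    """Maps user-friendly metric names to complex underlying queries."""

    def __init__(self):
        self._rules = {}

    def register(self, friendly_name: str, query: str) -> None:
        """Store a complex, high-value query under a readable name."""
        self._rules[friendly_name] = query

    def resolve(self, friendly_name: str) -> str:
        """Expand a friendly name back into its underlying query."""
        return self._rules[friendly_name]


registry = DerivedMetricRegistry()

# A hypothetical PromQL-style expression that engineers would otherwise
# have to remember and retype in every dashboard and alert rule.
registry.register(
    "checkout_error_ratio",
    'sum(rate(http_requests_total{job="checkout",code=~"5.."}[5m]))'
    ' / sum(rate(http_requests_total{job="checkout"}[5m]))',
)

# Dashboards and alerts can now reference the short name instead.
print(registry.resolve("checkout_error_ratio"))
```

The benefit is the same one recording rules provide: the complex expression is defined and reviewed once, then reused everywhere under its friendly name.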
"As more organizations adopt cloud native architectures, engineers are drowning in the massive amount of observability data that comes with it," said Martin Mao, CEO of Chronosphere. "This is causing an explosion in observability costs, while simultaneously overwhelming engineers in the troubleshooting process, leading to longer incidents and unhappy customers. Our new framework and features helps organizations achieve the best possible observability outcomes while keeping costs under control."
The Latest
Developers building AI applications are not just looking for fault patterns after deployment; they must detect issues quickly during development and be able to prevent them after going live. Unfortunately, traditional observability tools can no longer meet the needs of AI-driven enterprise application development. AI-powered detection and auto-remediation tools designed to keep pace with rapid development are now emerging to proactively manage performance and prevent downtime ...
Every few years, the cybersecurity industry adopts a new buzzword. "Zero Trust" has endured longer than most — and for good reason. Its promise is simple: trust nothing by default, verify everything continuously. Yet many organizations still hesitate to implement Zero Trust Network Access (ZTNA). The problem isn't that ZTNA doesn't work. It's that it's often misunderstood ...
For many retail brands, peak season is the annual stress test of their digital infrastructure. It's also when technical dashboards often glow green, yet customer feedback, digital experience frustration, and conversion trends tell a different story entirely. Over the past several years, we've seen the same pattern across retail, financial services, travel, and media: internal application performance metrics fail to capture the true experience of users connecting over local broadband, mobile carriers, and congested networks using multiple devices across geographies ...
PostgreSQL promises greater flexibility, performance, and cost savings compared to proprietary alternatives. But successfully deploying it isn't always straightforward, and there are some hidden traps along the way that even seasoned IT leaders can stumble into. In this blog, I'll highlight five of the most common pitfalls with PostgreSQL deployment and offer guidance on how to avoid them, along with the best path forward ...
The rise of hybrid cloud environments, the explosion of IoT devices, the proliferation of remote work, and advanced cyber threats have created a monitoring challenge that traditional approaches simply cannot meet. IT teams find themselves drowning in a sea of data, struggling to identify critical threats amidst a deluge of alerts, and often reacting to incidents long after they've begun. This is where AI and ML are leveraged ...
Three practices (chaos testing, incident retrospectives, and AIOps-driven monitoring) are transforming platform teams from reactive responders into proactive builders of resilient, self-healing systems. The evolution is not just technical; it's cultural. The modern platform engineer isn't just maintaining infrastructure. They're product owners designing for reliability, observability, and continuous improvement ...
Getting applications into the hands of those who need them quickly and securely has long been the goal of a branch of IT often referred to as End User Computing (EUC). Over recent years, the way applications (and data) have been delivered to these "users" has changed noticeably. Organizations have many more choices available to them now, and there will be more to come ... But how did we get here? Where are we going? Is this all too complicated? ...
On November 18, a single database permission change inside Cloudflare set off a chain of failures that rippled across the Internet. Traffic stalled. Authentication broke. Workers KV returned waves of 5xx errors as systems fell in and out of sync. For nearly three hours, one of the most resilient networks on the planet struggled under the weight of a change no one expected to matter ... Cloudflare recovered quickly, but the deeper lesson reaches far beyond this incident ...
Chris Steffen and Ken Buckler from EMA discuss the Cloudflare outage and what availability means in the technology space ...
Every modern industry is confronting the same challenge: human reaction time is no longer fast enough for real-time decision environments. Across sectors, from financial services to manufacturing to cybersecurity and beyond, the stakes mirror those of autonomous vehicles — systems operating in complex, high-risk environments where milliseconds matter ...