
Chronosphere Launches Observability Data Optimization Cycle

Chronosphere is launching the Observability Data Optimization Cycle, a vendor-neutral framework that helps companies regain control over observability data growth.

Chronosphere is also introducing new product features to support the framework, enabling teams to better understand and manage their cloud observability resources.

The Observability Data Optimization Cycle addresses these challenges by enabling organizations to better understand and act on the cost of their observability data, through new features that support a process of Analyzing, Refining and Operating:

- Centralized Governance: Gives engineering teams greater control over data growth and predictability by equipping the Central Observability Team (COT) with information on how much data each team is using. It also assigns licensed capacity to individual teams so that each can prioritize within its allotted amount of data.

- Usage Analyzer: Lets teams view the cost and value of their data side by side, showing how and where the data is used, the volume consumed over a given period of time, and which engineers are using it.

- Shaping Policy UI: Helps teams preview the impact of shaping policies before implementing them so they can make adjustments when necessary.

- Derived Metrics: Simplifies metrics by letting organizations store complex, high-value queries under more user-friendly names and visualizations.
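
To make the Derived Metrics item above more concrete, here is a minimal, hypothetical sketch of the general pattern it describes: a registry that stores a complex query expression under a friendly name, so dashboards and alerts can reference the alias instead of the raw expression. The `DerivedMetricRegistry` class, the metric names, and the PromQL-style query are illustrative assumptions, not Chronosphere's actual API.

```python
# Hypothetical illustration of the "derived metrics" pattern: store a complex,
# high-value query expression under a user-friendly alias so dashboards and
# alerts can reference the alias instead of the raw expression. This is not
# Chronosphere's API; the class, names, and query below are made up.

class DerivedMetricRegistry:
    """Maps friendly metric names to the raw query expressions they stand for."""

    def __init__(self) -> None:
        self._definitions: dict[str, str] = {}

    def define(self, friendly_name: str, expression: str) -> None:
        """Register a derived metric under a human-readable name."""
        self._definitions[friendly_name] = expression

    def resolve(self, friendly_name: str) -> str:
        """Return the underlying expression a dashboard or alert would execute."""
        return self._definitions[friendly_name]


registry = DerivedMetricRegistry()

# A complex, PromQL-style error-ratio query hidden behind a simple alias.
registry.define(
    "checkout_error_rate",
    'sum(rate(http_requests_total{service="checkout",code=~"5.."}[5m]))'
    ' / sum(rate(http_requests_total{service="checkout"}[5m]))',
)

# Consumers ask for the friendly name; the registry supplies the full query.
print(registry.resolve("checkout_error_rate"))
```

The benefit of the pattern is that a complex expression is defined and reviewed once, then reused everywhere under a name engineers recognize, which is the same idea the Derived Metrics feature applies to query names and visualizations.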

"As more organizations adopt cloud native architectures, engineers are drowning in the massive amount of observability data that comes with it," said Martin Mao, CEO of Chronosphere. "This is causing an explosion in observability costs, while simultaneously overwhelming engineers in the troubleshooting process, leading to longer incidents and unhappy customers. Our new framework and features helps organizations achieve the best possible observability outcomes while keeping costs under control."

The Latest

Cyber threats are growing more sophisticated every day, and at their forefront are zero-day vulnerabilities. These elusive security gaps are exploited before a fix becomes available, making them among the most dangerous threats in today's digital landscape ... This guide will explore what these vulnerabilities are, how they work, why they pose such a significant threat, and how modern organizations can stay protected ...

The prevention of data center outages continues to be a strategic priority for data center owners and operators. Infrastructure equipment has improved, but the complexity of modern architectures and evolving external threats present new risks that operators must actively manage, according to the Data Center Outage Analysis 2025 from Uptime Institute ...

As observability engineers, we navigate a sea of telemetry daily. We instrument our applications, configure collectors, and build dashboards, all in pursuit of understanding our complex distributed systems. Yet, amidst this flood of data, a critical question often remains unspoken, or at best, answered by gut feeling: "Is our telemetry actually good?" ... We're inviting you to participate in shaping a foundational element for better observability: the Instrumentation Score ...

We're inching ever closer toward a long-held goal: technology infrastructure that is so automated that it can protect itself. But as IT leaders aggressively employ automation across our enterprises, we need to continuously reassess what AI is ready to manage autonomously and what cannot yet be trusted to algorithms ...

Much like a traditional factory turns raw materials into finished products, the AI factory turns vast datasets into actionable business outcomes through advanced models, inferences, and automation. From the earliest data inputs to the final token output, this process must be reliable, repeatable, and scalable. That requires industrializing the way AI is developed, deployed, and managed ...

Almost half (48%) of employees admit they resent their jobs but stay anyway, according to research from Ivanti ... This has obvious consequences across the business, but we're overlooking the massive impact of resenteeism and presenteeism on IT. For IT professionals tasked with managing the backbone of modern business operations, these numbers spell big trouble ...

For many B2B and B2C enterprise brands, technology isn't a core strength. Relying on overly complex architectures (like those that follow a pure MACH doctrine) has been flagged by industry leaders as a source of operational slowdown, creating bottlenecks that limit agility in volatile market conditions ...

FinOps champions crucial cross-departmental collaboration, uniting business, finance, technology and engineering leaders to demystify cloud expenses. Yet, too often, critical cost issues are softened into mere "recommendations" or "insights" — easy to ignore. But what if we adopted security's battle-tested strategy and reframed these as the urgent risks they truly are, demanding immediate action? ...

Two in three IT professionals now cite growing complexity as their top challenge — an urgent signal that the modernization curve may be getting too steep, according to the Rising to the Challenge survey from Checkmk ...

While IT leaders are becoming more comfortable and adept at balancing workloads across on-premises, colocation data centers and the public cloud, there's a key component missing: connectivity, according to the 2025 State of the Data Center Report from CoreSite ...
