Grafana Labs Releases Application Observability and Acquires Asserts.ai

Grafana Labs announced a range of new updates to help make it easier and faster for users to get value from observability, including the acquisition of Asserts.ai, whose technology will help Grafana Cloud users better understand their observability data and find issues more quickly, from the infrastructure to the application layer.

The company is also announcing the general availability (GA) of the Application Observability solution in Grafana Cloud, its fully managed observability offering, and of Grafana Beyla, the eBPF-based application auto-instrumentation open source project that helps users get started with application observability faster.

Asserts.ai provides out-of-the-box insights into relationships over time among various system components, enabling users to better understand and navigate their applications and infrastructure. Asserts.ai serves as a contextual layer for Prometheus metrics and provides an opinionated set of alerts and dashboards so that users can more efficiently perform root cause analysis and resolve issues more quickly.

“Over the past two years, the biggest needs we’ve heard from our customers have been to make it easier to understand their observability data, to extend observability into the application layer, and to get deeper, contextualized analytics,” said Tom Wilkie, CTO of Grafana Labs. “The GA of our Application Observability solution in Grafana Cloud, plus the Asserts acquisition, are big steps toward meeting those customer needs and providing an easier-to-use, integrated, and opinionated user experience.”

Grafana Cloud Application Observability – which is available to all users of Grafana Cloud, including the forever-free tier – provides SREs and developers with an out-of-the-box experience to accelerate root cause analysis and minimize mean time to resolution (MTTR) of complex application problems. With its native support for both OpenTelemetry and Prometheus, Application Observability helps organizations avoid vendor lock-in.

Asserts, founded in 2020 by Manoj Acharya, an early engineering leader and VP at AppDynamics, “helps us quickly correlate problems in the cluster and surfaces previously unknown, lingering issues,” said Gabriel Creti, Engineering Manager and Technical Lead at Heka.ai. “Asserts also offloads toil from our operations teams. By providing pre-made alerting rules and dashboards, it streamlines the painful job of maintaining them. It has always been a plus that the Asserts UI embeds directly into Grafana. We are excited to see it become a native part of the LGTM (Loki, Grafana, Tempo, Mimir) Stack going forward.”

Asserts in Grafana Cloud will be demoed during the keynote at ObservabilityCON 2023 and will be available in private preview soon.

To help users get started with application observability faster, Grafana Labs launched the open source project Grafana Beyla, which is now GA. Often, instrumenting an application for observability requires adding a language agent to the deployment or package, manually adding tracepoints, and then redeploying. By deploying Beyla as a DaemonSet in Kubernetes, you can instrument all services of the OpenTelemetry demo with a single command, without modifying source code. Based on eBPF (extended Berkeley Packet Filter), which lets you attach your own programs to different points of the Linux kernel, Beyla auto-instruments HTTP/gRPC applications written in Go, C/C++, Rust, Python, Ruby, Java, Node.js, .NET, and more. It provides vendor-agnostic Rate-Errors-Duration (RED) metrics and traces, exported in the OpenTelemetry format and as native Prometheus metrics.
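The DaemonSet deployment described above can be sketched as a minimal Kubernetes manifest. This is an illustrative sketch, not an official Beyla manifest: the image tag, the environment variable names (`BEYLA_OPEN_PORT`, `OTEL_EXPORTER_OTLP_ENDPOINT`), and the security settings are assumptions that should be checked against the current Beyla documentation.

```yaml
# Hypothetical minimal DaemonSet running Beyla on every node.
# Image, env var names, and privileges are assumptions; verify
# against the Beyla docs before using.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: beyla
spec:
  selector:
    matchLabels: { app: beyla }
  template:
    metadata:
      labels: { app: beyla }
    spec:
      hostPID: true              # Beyla inspects processes on the host
      containers:
        - name: beyla
          image: grafana/beyla:latest
          securityContext:
            privileged: true     # simplest way to grant the eBPF capabilities
          env:
            - name: BEYLA_OPEN_PORT              # instrument processes serving this port
              value: "8080"
            - name: OTEL_EXPORTER_OTLP_ENDPOINT  # where to ship OTel metrics/traces
              value: "http://otel-collector:4317"
```

Because the instrumentation attaches at the kernel level, no application pod needs to be rebuilt or restarted, which is what makes the single-command claim plausible.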

Grafana Labs is also rolling out enhancements across its product portfolio that address some of the biggest issues customers face today. From reducing costs to removing toil to resolving issues more quickly, these updates make it easier for customers to manage and optimize their observability strategy from the ground up.

- Cost management hub: Grafana Labs is announcing a centralized hub with a suite of cost management tools that makes it easier for Grafana Cloud administrators to manage, control, and optimize their spend. The suite, developed in direct response to customer feedback, introduces two new features in public preview: Log Volume Explorer and Usage Attribution Report. Also included is the GA of Adaptive Metrics for all tiers of Grafana Cloud, with a new interactive UI to apply and remove recommendations for aggregating unused and partially used metrics into lower-cardinality versions to reduce costs.

- AI/ML enhancements: Grafana Labs has released an open source LLM app to enable large language model-based extensions to Grafana. Grafana Labs’ "big tent," open source approach allows developers to leverage public data sets, connect their own LLM and vector databases, and build LLM-powered experiences in Grafana faster and better together as a community. Additionally, AI/ML is being leveraged in feature development across Grafana Labs, prioritizing ways to help admins and developers remove toil and solve problems. New developments include Sift, a powerful diagnostic assistant in Grafana Cloud designed to automatically discover contributing causes to incidents across metric, log, and tracing data; Grafana Incident auto-summary, a tool that summarizes the key details from your incident timelines with a single click; and generative AI features to help create dashboard metadata and simplify writing PromQL queries.

- Simplifying service level objectives: Interest in and demand for SLOs continue to increase – according to Grafana Labs’ 2023 Observability Survey, more than half of respondents say they are using SLOs or moving in that direction. Grafana SLO makes it easy to create, manage, and scale service level objectives, SLO dashboards, and error budget alerts in Grafana Cloud, enabling users to monitor the services that most affect their customers’ experience and ensure those services stay healthy. The GA of Grafana SLO supports teams using as-code provisioning via API or Terraform, and handles all of the cascading recording rules, eliminating manual query management.

The Latest

In live financial environments, capital markets software cannot pause for rebuilds. New capabilities are introduced as stacked technology layers to meet evolving demands while systems remain active, data keeps moving, and controls stay intact. AI is no exception, and its opportunities are significant: accelerated decision cycles, compressed manual workflows, and more effective operations across complex environments. The constraint isn't the models themselves, but the architectural environments they enter ...

As with most digital transformation shifts, organizations often prioritize productivity and leave security and observability struggling to keep pace. This usually translates into both the mass implementation of new technology and fragmented monitoring and observability (M&O) tooling. In the era of AI and varied cloud architecture, a disparate observability function can be dangerous. IT teams will lack a complete picture of their IT environment, making it harder to diagnose issues while slowing down mean time to resolve (MTTR). In fact, according to recent data from the SolarWinds State of Monitoring & Observability Report, 77% of IT personnel said the lack of visibility across their on-prem and cloud architecture was an issue ...

In MEAN TIME TO INSIGHT Episode 23, Shamus McGillicuddy, VP of Research, Network Infrastructure and Operations, at EMA discusses the NetOps labor shortage ... 

Technology management is evolving, and in turn, so is the scope of FinOps. The FinOps Foundation recently updated their mission statement from "advancing the people who manage the value of cloud" to "advancing the people who manage the value of technology." This seemingly small change solidifies a larger evolution: FinOps practitioners have organically expanded to be focused on more than just cloud cost optimization. Today, FinOps teams are largely — and quickly — expanding their job descriptions, evolving into a critical function for managing the full value of technology ...

Enterprises are under pressure to scale AI quickly. Yet despite considerable investment, adoption continues to stall. One of the most overlooked reasons is vendor sprawl ... In reality, no organization deliberately sets out to create sprawling vendor ecosystems. More often, complexity accumulates over time through well-intentioned initiatives, such as enterprise-wide digital transformation efforts, point solutions, or decentralized sourcing strategies ...

Nearly every conversation about AI eventually circles back to compute. GPUs dominate the headlines while cloud platforms compete for workloads and model benchmarks drive investment decisions. But underneath that noise, a quieter infrastructure challenge is taking shape. The real bottleneck in enterprise AI is not processing power; it is the ability to store, manage, and retrieve the relentless volumes of data that AI systems generate, consume, and multiply ...

The 2026 Observability Survey from Grafana Labs paints a vivid picture of an industry maturing fast, where AI is welcomed with careful conditions, SaaS economics are reshaping spending decisions, complexity remains a defining challenge, and open standards continue to underpin it all ...

The observability industry has an evolving relationship with AI. We're not skeptics, but it's clear that trust in AI must be earned ... In Grafana Labs' annual Observability Survey, 92% said they see real value in AI surfacing anomalies before they cause downtime. Another 91% endorsed AI for forecasting and root cause analysis. So while the demand is there, customers need it to be trustworthy, as the survey also found that the practitioners most enthusiastic about AI are also the most insistent on explainability ...

In the modern enterprise, the conversation around AI has moved past skepticism toward a stage of active adoption. According to our 2026 State of IT Trends Report: The Human Side of Autonomous AI, nearly 90% of IT professionals view AI as a net positive, and this optimism is well-founded. We are seeing agentic AI move beyond simple automation to actively streamlining complex data insights and eliminating the manual toil that has long hindered innovation. However, as we integrate these autonomous agents into our ecosystems, the fundamental DNA of the IT role is evolving ...

AI workloads require an enormous amount of computing power ... What's also becoming abundantly clear is just how quickly AI's computing needs are leading to enterprise systems failure. According to Cockroach Labs' State of AI Infrastructure 2026 report, enterprise systems are much closer to failure than their organizations realize. The report ... suggests AI scale could cause widespread failures in as little as one year — making it a clear risk for business performance and reliability.
