
Grafana Labs Releases Application Observability and Acquires Asserts.ai

Grafana Labs announced a range of new updates to help make it easier and faster for users to get value from observability, including the acquisition of Asserts.ai, whose technology will help Grafana Cloud users better understand their observability data and find issues more quickly, from the infrastructure to the application layer.

The company is also announcing the general availability (GA) of the Application Observability solution in Grafana Cloud, its fully managed observability offering, and of Grafana Beyla, an open source, eBPF-based auto-instrumentation project that helps users get started with application observability faster.

Asserts.ai provides out-of-the-box insights into relationships over time among various system components, enabling users to better understand and navigate their applications and infrastructure. Asserts.ai serves as a contextual layer for Prometheus metrics and provides an opinionated set of alerts and dashboards so that users can more efficiently perform root cause analysis and resolve issues more quickly.

​​"Over the past two years, the biggest needs we've heard from our customers have been to make it easier to understand their observability data, to extend observability into the application layer, and to get deeper, contextualized analytics,” said Tom Wilkie, CTO of Grafana Labs. “The GA of our Application Observability solution in Grafana Cloud, plus the Asserts acquisition, are big steps toward meeting those customer needs and providing an easier-to-use, integrated, and opinionated user experience.”

Grafana Cloud Application Observability – which is available to all users of Grafana Cloud, including the forever-free tier – provides SREs and developers an out-of-the-box experience to accelerate root cause analysis and minimize mean time to resolution (MTTR) of complex application problems. With its native support for both OpenTelemetry and Prometheus, Application Observability helps organizations looking to avoid vendor lock-in.

Founded in 2020 by Manoj Acharya, an early engineering leader and VP at AppDynamics, Asserts “helps us quickly correlate problems in the cluster and surfaces previously unknown, lingering issues,” said Gabriel Creti, Engineering Manager and Technical Lead at Heka.ai. “Asserts also offloads toil from our operations teams. By providing pre-made alerting rules and dashboards, it streamlines the painful job of maintaining them. It has always been a plus that the Asserts UI embeds directly into Grafana. We are excited to see it become a native part of the LGTM (Loki, Grafana, Tempo, Mimir) Stack going forward.”

Asserts in Grafana Cloud will be demoed during the keynote at ObservabilityCON 2023 and will be available in private preview soon.

To help users get started with application observability faster, Grafana Labs launched the open source project Grafana Beyla, which is now GA. Instrumenting an application for observability typically requires adding a language agent to the deployment or package, manually adding tracepoints, and then redeploying. By deploying Beyla as a DaemonSet in Kubernetes, you can instrument all services of the OpenTelemetry demo with a single command, without modifying source code. Based on eBPF (Extended Berkeley Packet Filter), which allows you to attach your own programs to different points of the Linux kernel, Beyla auto-instruments HTTP/gRPC applications written in Go, C/C++, Rust, Python, Ruby, Java, Node.js, .NET, and more. It provides vendor-agnostic Rate-Errors-Duration (RED) metrics and traces, exported in the OpenTelemetry format and as native Prometheus metrics.
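
To make the "no code changes" point concrete, here is a minimal sketch: an ordinary Go HTTP service with no instrumentation code at all. The handler, route, and port below are illustrative assumptions rather than anything from the announcement; the idea is that Beyla, running alongside the service (for example, as a Kubernetes DaemonSet with access to the node), observes its HTTP traffic via eBPF and exports RED metrics and traces without any change to this source code.

```go
// main.go: a deliberately uninstrumented HTTP service. There is no OpenTelemetry
// SDK, no Prometheus client, and no tracing code here. A Beyla process attached
// externally via eBPF can still report request rate, errors, and duration (RED)
// for this service, plus traces, in OpenTelemetry format or as Prometheus metrics.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Illustrative endpoint; Beyla observes requests to it from outside the process.
	http.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "pong")
	})

	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Because the instrumentation lives outside the process, the same approach scales out in Kubernetes: a Beyla pod on each node can pick up every qualifying service on that node, which is what makes the single-command instrumentation of the OpenTelemetry demo possible.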

Grafana Labs is also rolling out enhancements across its product portfolio that address some of the biggest issues customers are facing today. From reducing costs to removing toil to resolving issues faster, these new updates make it easier for customers to manage and optimize their observability strategy from the ground up.

- Cost management hub: Grafana Labs is announcing a centralized hub with a suite of cost management tools for Grafana Cloud administrators to make it easier to manage, control, and optimize their spend. The suite, developed in direct response to customer feedback, introduces two new features in public preview: Log Volume Explorer and Usage Attribution Report. It also includes the GA of Adaptive Metrics for all tiers of Grafana Cloud, with a new interactive UI to apply and remove recommendations that aggregate unused and partially used metrics into lower-cardinality versions to reduce costs.

- AI/ML enhancements: Grafana Labs has released an open source LLM app to enable large language model-based extensions to Grafana. Grafana Labs’ “big tent” open source approach allows developers to leverage public data sets, connect their own LLM and vector databases, and build LLM-powered experiences in Grafana faster, together as a community. Additionally, AI/ML is being applied to feature development across Grafana Labs, prioritizing ways to help admins and developers remove toil and solve problems. New developments include Sift, a diagnostic assistant in Grafana Cloud designed to automatically discover contributing causes to incidents across metric, log, and tracing data; Grafana Incident auto-summary, a tool that summarizes the key details from your incident timelines with a single click; and generative AI features to help create dashboard metadata and simplify writing PromQL queries.

- Simplifying service level objectives: Interest in and demand for SLOs continue to increase – according to Grafana Labs’ 2023 Observability Survey, more than half of respondents say they are using SLOs or moving in that direction. Grafana SLO makes it easy to create, manage, and scale service level objectives, SLO dashboards, and error budget alerts in Grafana Cloud, enabling users to monitor the services that have the most impact on their customers' experience and ensure they stay healthy. Now GA, Grafana SLO supports as-code provisioning via API or Terraform and handles all of the cascading recording rules, eliminating manual query management.

