Understanding Observability Data's Impact Across an Organization

Tucker Callaway
Mezmo

As demand for digital services increases and distributed systems become more complex, organizations must collect and process a growing amount of observability data (logs, metrics, and traces). Site reliability engineers (SREs), developers, and security engineers use observability data to learn how their applications and environments are performing so they can successfully respond to issues and mitigate risk.

With use cases expanding across many business units, it's important for organizations to know how users in various roles use observability data. A new report from The Harris Poll and Mezmo explores this concept. Based on a survey of 300 SREs, developers, and security engineers in the US, the study digs into key pain points and how companies might use observability pipelines to help make decisions faster.

Observability Data Is Part of Daily Work

More than half of SREs, developers, and security engineers use observability data daily, and another third of each role uses it two to three times per week. Day-to-day interaction with this machine data looks different for each role: SREs focus on troubleshooting, analytics, and uptime monitoring; developers on troubleshooting and debugging; and security engineers on cybersecurity, firewall integrity, and threat detection.

The Amount of Data Is Escalating

Data volume is increasing considerably and becoming difficult to control as data spreads across many systems and apps. While respondents in all three roles use a median of four data sources to get their jobs done, SREs and developers often use three separate products to access that data, and security engineers use two. And over the last 12 months, developers and security engineers have seen a median of two new data sources added, while SREs have seen three.

Adding new data sources and controlling the flow of data have become overly complex, involving many different tools that integrate poorly and deliver insights too late. Organizations must harness all this data to make real-time business decisions, because even a slight delay undermines those decisions.

Skyrocketing Costs Are Difficult to Control

In addition to data volume, all three groups listed cost control as a top challenge. Specifically, 92% of SREs, 99% of developers, and 97% of security engineers say it's hard to manage the costs of collecting and storing data. The high volume of data creates budget pressure across the organization, as budgets are not growing in proportion to those costs. Organizations must look for ways to extract more value from their telemetry data by making it available to more teams for additional use cases, which requires the free flow of usable telemetry data to any platform of choice.
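
To make "free flow to any platform of choice" concrete, here is a minimal sketch, in Python, of routing telemetry events to different destinations by event type. The destination functions, route table, and event schema are hypothetical illustrations of the pattern, not any particular vendor's API:

```python
# Hypothetical destinations; a real pipeline would wrap vendor SDKs or HTTP endpoints.
def send_to_siem(event):
    print(f"SIEM <- {event}")

def send_to_apm(event):
    print(f"APM <- {event}")

def send_to_archive(event):
    print(f"archive <- {event}")

# Each event type flows to every platform that needs it; everything is archived cheaply.
ROUTES = {
    "security": [send_to_siem, send_to_archive],
    "trace": [send_to_apm, send_to_archive],
    "log": [send_to_apm, send_to_siem, send_to_archive],
}

def route(event):
    for destination in ROUTES.get(event["type"], [send_to_archive]):
        destination(event)

route({"type": "security", "msg": "failed login from 10.0.0.5"})
route({"type": "trace", "msg": "span checkout.pay 120ms"})
```

The point of the route table is that adding a new consuming team is a one-line change rather than a new integration per source.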

Making Data Actionable with Observability Pipelines

Most professionals in all three roles agree that newly adopted technology, like observability pipelines, must integrate with existing data management platforms. When evaluating observability pipelines to better control and act on data, all three roles report that support for cloud data sources is essential. SREs and developers also want cloud application data sources supported, while SREs and security engineers need firewall data source support. Teams are not just looking to collect data, however; they need transformations that add context to it. They are looking for capabilities such as log transformation, sampling, enrichment, and augmentation to make data more meaningful and actionable.
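
As a rough illustration of two of those capabilities, here is a minimal sketch, in Python, of pipeline stages that sample noisy logs and enrich the survivors with static context. The function names, event fields, and sampling policy are assumptions made for the example, not Mezmo's implementation:

```python
import random

def sample(events, rate=0.1):
    """Keep all warnings and errors; keep roughly `rate` of everything else."""
    for event in events:
        if event.get("level") in ("warn", "error") or random.random() < rate:
            yield event

def enrich(events, metadata):
    """Attach static context (service, team, etc.) to every surviving event."""
    for event in events:
        yield {**event, **metadata}

# Hypothetical usage: thin out debug chatter, then tag events for downstream routing.
raw = [
    {"level": "debug", "msg": "cache hit"},
    {"level": "error", "msg": "upstream timeout"},
]
for event in enrich(sample(raw, rate=0.05), {"service": "checkout", "team": "payments"}):
    print(event)
```

Because the stages are generators, they compose into a streaming pipeline without buffering the full data volume, which is the property that makes this kind of transformation viable at high event rates.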

As the report reveals, the importance of observability data is growing, but organizations struggle to make this data actionable. Observability data pipelines are an emerging technology that organizations can use to collect, transform, and route all this data to various teams for greater actionability. Once organizations understand how different groups use this data, they'll be able to extract greater value for the business.

Tucker Callaway is CEO of Mezmo
