The Unseen Cost of Observability: The Need for Continuous Code Improvement

Cory Virok
Rollbar

Developers are getting better at building software, but we're not getting better at fixing it.

The problem is that fixing bugs and errors is still a very manual process. Developers must dedicate significant time and effort to investigating what went wrong before they can even begin a fix. Traditional observability tools will tell you when your infrastructure is having problems, but they don't provide the context a developer needs to fix the code, nor a way to prioritize issues against business requirements. They also produce far too much noise and too many false positives, leading to alert fatigue.

This drains developer time and productivity — and can result in a fair amount of frustration.

Fixing Bugs and Errors Is Developers' No. 1 Pain Point

Rollbar research reveals that fixing bugs and errors in code is developers' No. 1 pain point.

The research, based on a survey of nearly 1,000 developers, also indicates that 88% of developers feel that traditional tools used for error monitoring fall short of their expectations.

Surveyed developers said that traditional error monitoring falls short because:

■ It requires them to manually respond to errors (39%)

■ It takes them too long to find all of the details they need to fix bugs and errors (36%)

■ It focuses on system stability and not enough on code health (31%)

■ It makes it difficult to detect errors (29%)

■ Its approach to error aggregation is either too broad or too narrow (23%)

With Traditional Troubleshooting, Developers Spend Significant Time Investigating Problems

The following example illustrates how these challenges can play out for an organization.

Imagine that you launched a new web app feature after ensuring it passed all tests. But in the morning, the support team finds that your highest-paying customer has reported an issue. Then another report comes in from the same customer, and then another. The frustrated customer then mentions your company on Twitter in an effort to get your attention.

Customer support escalates this issue to their lead. The lead brings in the product manager, who asks someone to investigate. Your company's site reliability engineering (SRE) team investigates, but everything is looking good as far as they can see. Their telemetry shows that the error response rate is about the same, all servers are up, and the database is in good shape.

Eventually, a lead developer is tasked with investigating. Essentially, this individual needs to answer one question as quickly as possible: How do I reproduce this? To get the answer, the developer must talk to the customer to understand exactly what issue that customer is facing. This typically takes several hours of back and forth between the customer and developer.

Ultimately, the developer determines that the issue occurs on a single URL. This leads the developer to dig into a log file to understand when and where the error is happening. The developer finds one log line containing a stack trace with the error message: "The request parameter is invalid." This clue leads the developer to the line of code that needs to be checked.
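In practice, this log-search step is easy to script. Below is a minimal Python sketch that scans a log for the error string and reports when, and on which URL, it occurred. The file name, log-line format and regex are hypothetical, since the article doesn't specify them:

```python
# Minimal sketch of the log-search step. The log file name, line
# format and regex below are illustrative assumptions.
import re

ERROR_TEXT = "The request parameter is invalid"
# Assumes lines like:
# 2021-05-04T09:12:33Z GET /api/v1/orders ERROR The request parameter is invalid
LINE = re.compile(r"^(?P<ts>\S+) (?P<method>\S+) (?P<path>/\S*) ")

def find_error_hits(log_path):
    """Yield (timestamp, URL path) for each line containing the error."""
    with open(log_path) as log:
        for line in log:
            if ERROR_TEXT in line:
                m = LINE.match(line)
                if m:
                    yield m.group("ts"), m.group("path")

for ts, path in find_error_hits("app.log"):
    print(ts, path)
```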

The developer runs git blame on the file, which identifies the code's original author. The author joins the investigation squad. A few hours later, the squad figures out the cause of the issue and how they can fix it. They release a new build, and they ask customer support to check in with the customer to see if that customer is still experiencing the problem. By that point, the customer has gone to bed. Now the team must wait until tomorrow morning to get feedback.
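The git blame step can likewise be scripted rather than run by hand. The sketch below shells out to git blame's line-porcelain output to pull the author of a single line; the repository path, file name and line number are placeholders:

```python
# Sketch of automating the "who wrote this line?" step with git blame.
# The repo path, file and line number are hypothetical placeholders.
import subprocess

def blame_author(repo: str, path: str, line: int) -> str:
    """Return the author of one line via git blame's porcelain output."""
    out = subprocess.run(
        ["git", "-C", repo, "blame", "--line-porcelain",
         f"-L{line},{line}", path],
        capture_output=True, text=True, check=True,
    ).stdout
    for record in out.splitlines():
        if record.startswith("author "):
            return record[len("author "):]
    raise ValueError("no author found in blame output")

print(blame_author(".", "app/handlers.py", 42))
```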

That Delays Issue Resolution and Doesn't Work at Scale

The example above illustrates that troubleshooting bugs and errors is still a manual process. The result is slow mean time to awareness (MTTA) and mean time to repair (MTTR).
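To make those metrics concrete: MTTA averages the delay from an error's first occurrence to the team becoming aware of it, and MTTR the delay until a fix ships. A toy calculation with invented timestamps (definitions vary; some teams measure MTTR from awareness rather than first occurrence):

```python
# Toy MTTA/MTTR calculation over hypothetical incidents.
# All timestamps are invented for illustration.
from datetime import datetime
from statistics import mean

incidents = [
    # (first occurred, team aware, fix deployed)
    (datetime(2021, 5, 3, 22, 10), datetime(2021, 5, 4, 9, 0),
     datetime(2021, 5, 4, 17, 30)),
    (datetime(2021, 5, 7, 14, 5), datetime(2021, 5, 7, 14, 20),
     datetime(2021, 5, 7, 16, 0)),
]

mtta_h = mean((aware - occurred).total_seconds() / 3600
              for occurred, aware, _ in incidents)
mttr_h = mean((fixed - occurred).total_seconds() / 3600
              for occurred, _, fixed in incidents)
print(f"MTTA: {mtta_h:.1f}h  MTTR: {mttr_h:.1f}h")
```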

Traditional troubleshooting tools also don't scale. That's a big problem because it prevents developer teams from moving quickly, whether they are working on shipping new releases, creating new features or even just contending with tech debt.

Most Observability Solutions Fall Short, Leaving Customers to Report Problems

Nearly half (46%) of developers said they have error monitoring solutions. But while most tools will tell you what's broken, they won't provide the context needed to understand issues and prioritize fixes. This helps explain why a whopping 88% of developers said that they only find out about bugs and errors from user complaints reported through the app or via social media.

Part of the problem is that developers frequently rely on tools built around system metrics and logging, which are designed to answer whether an app is working — and if not, why not. Modern observability tools aim to answer questions such as: Which microservice's latency is causing 502s? Which line of code is causing an elevated error rate?

But observability tools create problems of their own. For example, they generate so much noise that automating a response becomes impractical, which in turn slows triage, fixes and remediation. The bottom line is that the process is still far too manual, too slow and not scalable.

Continuous Code Improvement Enables Fast Understanding and Action

What's really needed is more contextual information to find the root cause of errors faster. Grouping errors that share a root cause can also alleviate alert fatigue. Together, these let developers quickly identify the source of bugs and errors — and resolve issues before customers complain.
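One common way to implement such grouping is to fingerprint each error by the shape of its stack trace, so thousands of occurrences of the same bug collapse into a single actionable item. The sketch below illustrates the idea; it is a deliberately simplified reduction, not Rollbar's actual grouping algorithm:

```python
# Sketch of grouping errors by a stack-trace fingerprint so repeated
# occurrences of one bug collapse into a single item instead of
# thousands of alerts. Simplified for illustration only.
import hashlib
import traceback
from collections import Counter

groups = Counter()  # fingerprint -> occurrence count

def fingerprint(exc: BaseException) -> str:
    """Hash the exception type plus each frame's (file, function),
    ignoring line numbers and message text so similar errors land
    in the same group."""
    frames = traceback.extract_tb(exc.__traceback__)
    key = type(exc).__name__ + "|".join(
        f"{f.filename}:{f.name}" for f in frames)
    return hashlib.sha1(key.encode()).hexdigest()[:12]

def record(exc: BaseException) -> None:
    groups[fingerprint(exc)] += 1

# Simulate 1,000 occurrences of the same underlying bug.
for _ in range(1000):
    try:
        int("not-a-number")  # stand-in for the failing code path
    except ValueError as exc:
        record(exc)

print(groups)  # one group with count 1000, not 1000 separate alerts
```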

This is now possible using continuous code improvement, which enables developers to observe and act on issues — often before customers are even aware that such problems exist.
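Concretely, this usually starts with instrumenting the application so exceptions are captured with full context as they happen. A minimal sketch using Rollbar's Python SDK (pyrollbar); the access token, handler and parameter check are placeholders, not code from the article:

```python
# Minimal sketch of instrumenting code with an error-reporting SDK,
# here Rollbar's pyrollbar. The token and handler are placeholders.
import rollbar

rollbar.init("POST_SERVER_ITEM_ACCESS_TOKEN", environment="production")

def handle_request(params: dict) -> None:
    try:
        if "user_id" not in params:
            raise ValueError("The request parameter is invalid")
        # ... normal request handling ...
    except Exception:
        # Sends the exception, stack trace and request context to the
        # error-monitoring service, where similar items are grouped.
        rollbar.report_exc_info(extra_data={"params": params})
        raise

handle_request({"user_id": 42})
```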

Continuous code improvement also makes developers more productive because they can now spend less time debugging and more time building innovative solutions that add new value.

Cory Virok is CTO and Co-Founder of Rollbar.
