
The Unseen Cost of Observability: The Need for Continuous Code Improvement

Cory Virok
Rollbar

Developers are getting better at building software, but we're not getting better at fixing it.

The problem is that fixing bugs and errors is still a very manual process. Developers have to dedicate significant time and effort to investigating what went wrong before they can even begin to fix issues. That's because traditional observability tools will tell you if your infrastructure is having problems, but they don't provide the context a developer needs to fix the code, or a way to prioritize issues based on business requirements. Traditional observability tools also produce far too much noise and too many false positives, leading to alert fatigue.

This drains developer time and productivity — and can result in a fair amount of frustration.

Fixing Bugs and Errors Is Developers' No. 1 Pain Point

Rollbar research reveals that fixing bugs and errors in code is developers' No. 1 pain point.

The research, based on a survey of nearly 1,000 developers, also indicates that 88% of developers feel that traditional tools used for error monitoring fall short of their expectations.

The surveyed developers said that traditional error monitoring falls short because:

■ It requires them to manually respond to errors (39%)

■ It takes them too long to find all of the details they need to fix bugs and errors (36%)

■ It focuses on system stability and not enough on code health (31%)

■ It makes it difficult to detect errors (29%)

■ Its approach to error aggregation is either too broad or too narrow (23%)

With Traditional Troubleshooting, Developers Spend Significant Time Investigating Problems

The following example illustrates how many of these challenges can play out for an organization.

Imagine that you launched a new web app feature after ensuring it passed all tests. The next morning, the support team finds that your highest-paying customer has reported an issue. Then another issue comes in from the same customer, and then another. The frustrated customer then mentions your company on Twitter in an effort to get your attention.

Customer support escalates the issue to their lead. The lead brings in the product manager, who asks someone to investigate. Your company's site reliability engineering (SRE) team takes a look, but as far as they can see, everything looks good: their telemetry shows that the error response rate is about the same, all servers are up, and the database is in good shape.

Eventually, a lead developer is tasked with investigating. Essentially, this individual needs to answer one question as quickly as possible: How do I reproduce this? To get the answer, the developer must talk to the customer to understand exactly what issue that customer is facing. This typically takes several hours of back and forth between the customer and the developer.

Ultimately, the developer determines that the issue occurs on a single URL. This leads the developer to dig into a log file to understand when and where it is happening. The developer finds one log line containing a stack trace and this error message: "The request parameter is invalid." That clue leads the developer to the line of code that needs to be checked.
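To make the log-digging step concrete, here is a minimal Python sketch that scans an application log for the reported error message and prints the lines that follow each match, where the stack trace usually sits. The log path and the error message are placeholders taken from the example above, not a real system.

```python
# Minimal sketch: find the reported error message in an application log and
# print the lines that follow each match, which usually contain the stack trace.
# The log path and error message are placeholders for this example.
from pathlib import Path

LOG_FILE = Path("/var/log/webapp/app.log")          # assumed log location
ERROR_MESSAGE = "The request parameter is invalid"  # message from the customer report

def find_error_context(log_path, needle, context=20):
    """Return each matching log line plus the `context` lines that follow it."""
    lines = log_path.read_text(errors="replace").splitlines()
    return ["\n".join(lines[i:i + context])
            for i, line in enumerate(lines)
            if needle in line]

for block in find_error_context(LOG_FILE, ERROR_MESSAGE):
    print(block, end="\n\n")
```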

The developer runs git blame on the file, which identifies the code's original author. The author joins the investigation squad. A few hours later, the squad figures out the cause of the issue and how to fix it. They release a new build and ask customer support to check in with the customer to confirm the problem is resolved. By that point, the customer has gone to bed, so the team must wait until the next morning for feedback.

That Delays Issue Resolution and Doesn't Work at Scale

The example above illustrates that troubleshooting bugs and errors is still a manual process, which results in slow mean time to awareness (MTTA) and mean time to repair (MTTR).

Traditional troubleshooting tools also don't scale. That's a big problem because it prevents developer teams from moving quickly, whether they are shipping new releases, building new features, or just paying down tech debt.

Most Observability Solutions Fall Short - Leaving Customers to Report Problems

Nearly half (46%) of developers said they have error monitoring solutions. But while most tools will tell you what's broken, they won't provide the context needed to understand issues and prioritize fixes. This helps explain why a whopping 88% of developers said that they only find out about bugs and errors from user complaints reported through the app or via social media.

Part of the problem is that developers frequently use tools that focus on system metrics and logging to answer the question of whether an app is working — and if not, why not. Modern observability tools aim to answer questions such as: Which microservice's latency is causing 502s? Which line of code is causing an elevated error rate?
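As a rough illustration of that second, code-level question, here is a small Python sketch that aggregates a JSON-lines access log into a 5xx error rate per URL path. The file name and the "path"/"status" field names are assumptions made for the sake of the example, not any particular tool's log format.

```python
# Rough sketch: compute a 5xx error rate per URL path from a JSON-lines access
# log, to surface which endpoint's error rate is elevated. The file name and
# the "path"/"status" fields are assumptions about the log format.
import json
from collections import defaultdict

stats = defaultdict(lambda: {"total": 0, "errors": 0})

with open("access.log.jsonl") as fh:                 # hypothetical structured log
    for line in fh:
        entry = json.loads(line)
        counts = stats[entry["path"]]
        counts["total"] += 1
        if entry["status"] >= 500:
            counts["errors"] += 1

# Print paths sorted by error rate, highest first.
for path, counts in sorted(stats.items(),
                           key=lambda kv: kv[1]["errors"] / kv[1]["total"],
                           reverse=True):
    rate = counts["errors"] / counts["total"]
    print(f"{path}: {rate:.1%} errors over {counts['total']} requests")
```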

But observability tools create problems of their own. For example, they generate too much noise, which makes automation impractical and, in turn, slows triage, fixes, and remediation. The bottom line is that the process is still far too manual, too slow, and not scalable.

Continuous Code Improvement Enables Fast Understanding and Action

What's really needed is more contextual information to find the root cause of errors faster. Grouping similar root causes together can also alleviate alert fatigue. This enables developers to easily identify the source of bugs and errors — and resolve issues before customers complain.
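To show what grouping similar root causes can look like in practice, here is an illustrative Python sketch that fingerprints exceptions by their type and top stack frames, so repeated occurrences collapse into one group. This is a toy approach, not any particular product's grouping algorithm.

```python
# Illustrative sketch (not any specific product's algorithm): group errors by a
# fingerprint built from the exception type and its top stack frames, so that
# thousands of occurrences of the same root cause become one group rather than
# thousands of separate alerts.
import hashlib
import traceback
from collections import Counter

def fingerprint(exc, frames=5):
    """Hash the exception type plus the file/function/line of its top frames."""
    tb = traceback.extract_tb(exc.__traceback__)[:frames]
    key = type(exc).__name__ + "".join(
        f"{frame.filename}:{frame.name}:{frame.lineno}" for frame in tb)
    return hashlib.sha1(key.encode()).hexdigest()[:12]

groups = Counter()

def record(exc):
    """Count an occurrence under its fingerprint instead of alerting per event."""
    groups[fingerprint(exc)] += 1
```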

This is now possible using continuous code improvement, which enables developers to observe and act on issues — often before customers are even aware that such problems exist.
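In practice, observing and acting starts with reporting exceptions, along with their context, at the moment they happen. Below is a minimal sketch using the basic pattern of the open-source pyrollbar SDK; the access token and the application code are placeholders, and exact parameters should be checked against the SDK's current documentation.

```python
# Minimal sketch of reporting errors with context, using the basic pattern of
# the open-source pyrollbar SDK. The access token and application code are
# placeholders; check the SDK docs for current options.
import rollbar

rollbar.init("POST_SERVER_ITEM_ACCESS_TOKEN", environment="production")

def process(params):
    # Hypothetical application code: reject a bad parameter.
    if "user_id" not in params:
        raise ValueError("The request parameter is invalid")

def handle_request(params):
    try:
        process(params)
    except ValueError:
        # Attach the request parameters so the error arrives with the context
        # a developer needs in order to reproduce it.
        rollbar.report_exc_info(extra_data={"params": params})
        raise
```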

Continuous code improvement also makes developers more productive because they can now spend less time debugging and more time building innovative solutions that add new value.

Cory Virok is CTO and Co-Founder of Rollbar
