The Unseen Cost of Observability: The Need for Continuous Code Improvement
March 29, 2021

Cory Virok
Rollbar


Developers are getting better at building software, but we're not getting better at fixing it.

The problem is that fixing bugs and errors is still a very manual process. Developers have to dedicate significant time and effort to investigating what went wrong before they can even begin to fix issues. That's because traditional observability tools will tell you when your infrastructure is having problems, but they don't provide the context a developer needs to fix the code, or guidance on how to prioritize issues based on business requirements. Traditional observability tools also produce far too much noise and too many false positives, leading to alert fatigue.

This drains developer time and productivity — and can result in a fair amount of frustration.

Fixing Bugs and Errors Is Developers' No. 1 Pain Point

Rollbar research reveals that fixing bugs and errors in code is developers' No. 1 pain point.

The research, based on a survey of nearly 1,000 developers, also indicates that 88% of developers feel that traditional tools used for error monitoring fall short of their expectations.

The surveyed developers explained that traditional error monitoring falls short because:

■ It requires them to manually respond to errors (39%)

■ It takes them too long to find all of the details they need to fix bugs and errors (36%)

■ It focuses on system stability and not enough on code health (31%)

■ It makes it difficult to detect errors (29%)

■ Its approach to error aggregation is either too broad or too narrow (23%)

With Traditional Troubleshooting, Developers Spend Significant Time Investigating Problems

The following example illustrates how many of these challenges can play out for an organization.

Imagine that you launched a new web app feature after ensuring that it passed all tests. The next morning, the support team finds that your highest-paying customer has reported an issue. Then another issue comes in from the same customer, and then another. The frustrated customer then mentions your company on Twitter in an effort to get your attention.

Customer support escalates this issue to their lead. The lead brings in the product manager, who asks someone to investigate. Your company's site reliability engineering (SRE) team investigates, but everything is looking good as far as they can see. Their telemetry shows that the error response rate is about the same, all servers are up, and the database is in good shape.

Eventually, a lead developer is tasked to investigate. Essentially, this individual needs to answer one question as quickly as possible: How do I reproduce this? To get the answer, the developer must talk to the customer to understand exactly what issue that customer is facing. This typically takes several hours of back and forth between the customer and developer.

Ultimately, the developer determines that the issue is on a single URL. This leads the developer to look into a log file to try to understand when and where this is happening. The developer finds one log line that has the stack trace with this error message: "The request parameter is invalid." This provides a clue that leads the developer to the line of code that needs to be checked.
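To make that log search concrete, here is a minimal Python sketch of the kind of scan involved; the log path and the traceback-matching pattern are hypothetical stand-ins, and only the error message comes from the scenario above.

```python
import re

ERROR_MESSAGE = "The request parameter is invalid"  # message from the report
LOG_PATH = "/var/log/webapp/app.log"                # hypothetical log location

# Read the log once, then print every occurrence of the error message
# along with the stack trace lines that follow it, so the failing code
# path becomes visible.
with open(LOG_PATH) as log:
    lines = log.readlines()

for i, line in enumerate(lines):
    if ERROR_MESSAGE in line:
        print(f"Match at line {i + 1}: {line.strip()}")
        # Traceback frames typically follow the error line, e.g.
        # '  File "app.py", line 42, in handler' in Python-style logs.
        for trace_line in lines[i + 1 : i + 15]:
            if re.match(r'\s+(File "|at )', trace_line):
                print(trace_line.rstrip())
```

Even with a script like this, the developer still has to know which log to search and what message to look for, which is exactly the manual effort at issue here.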

The developer runs git blame on the file, which identifies the code's original author. The author joins the investigation squad. A few hours later, the squad figures out the cause of the issue and how they can fix it. They release a new build, and they ask customer support to check in with the customer to see if that customer is still experiencing the problem. By that point, the customer has gone to bed. Now the team must wait until tomorrow morning to get feedback.
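For readers unfamiliar with that step, the git blame lookup can be scripted too. In the Python sketch below, the file path and line range are hypothetical; the porcelain format is used because its output is machine-readable.

```python
import subprocess

FILE = "app/handlers/request.py"  # hypothetical file from the stack trace
LINE = "42,42"                    # hypothetical suspect line

# `git blame --porcelain -L` restricts blame to the suspect line and
# emits structured output that includes an "author " line.
result = subprocess.run(
    ["git", "blame", "--porcelain", "-L", LINE, FILE],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.splitlines():
    if line.startswith("author "):
        print("Last modified by:", line.removeprefix("author "))
```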

That Delays Issue Resolution and Doesn't Work at Scale

The example above illustrates that troubleshooting bugs and errors is still a manual process, which results in slow mean time to awareness (MTTA) and mean time to repair (MTTR).

Traditional troubleshooting tools also don't scale. That's a big problem because it prevents developer teams from moving quickly, whether they are working on shipping new releases, creating new features or even just contending with tech debt.

Most Observability Solutions Fall Short - Leaving Customers to Report Problems

Nearly half (46%) of developers said they have error monitoring solutions. But while most tools will tell you what's broken, they won't provide the context needed to understand issues and prioritize fixes. This helps explain why a whopping 88% of developers said that they only find out about bugs and errors from user complaints reported through the app or via social media.

Part of the problem is that developers frequently rely on tools focused on system metrics and logging to determine whether an app is working and, if not, why. Modern observability tools aim to answer questions such as: Which microservice's latency is causing 502s? Which line of code is causing an elevated error rate?

But observability tools create problems of their own. For example, they generate too much noise, which leads to an inability to automate. That, in turn, results in slower triaging, fixes and remediation. The bottom line is that the process is still far too manual, slow, and not scalable.

Continuous Code Improvement Enables Fast Understanding and Action

What's really needed is more contextual information to find the root cause of errors, faster. Grouping together similar root causes can also alleviate alert fatigue. This enables developers to easily identify the source of bugs and errors and resolve issues before customers complain.
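As a rough illustration of such grouping, the Python sketch below fingerprints each error occurrence by its type and a normalized stack trace, so repeated occurrences of the same root cause collapse into one group. The normalization rule and event data are generic assumptions, not Rollbar's actual aggregation algorithm.

```python
import hashlib
import re
from collections import Counter

def fingerprint(error_type: str, stack_trace: str) -> str:
    """Collapse an error occurrence into a stable group key."""
    # Strip volatile details (hex addresses, line numbers, ids) so
    # occurrences of the same root cause hash to the same fingerprint.
    normalized = re.sub(r"0x[0-9a-f]+|\d+", "<n>", stack_trace)
    return hashlib.sha1(f"{error_type}:{normalized}".encode()).hexdigest()[:12]

# Hypothetical stream of raw error events: (type, stack trace).
events = [
    ("ValueError", 'File "api.py", line 42, in handle\n  parse(param)'),
    ("ValueError", 'File "api.py", line 57, in handle\n  parse(param)'),
    ("KeyError",   'File "db.py", line 101, in lookup\n  rows[key]'),
]

groups = Counter(fingerprint(etype, trace) for etype, trace in events)
for key, count in groups.most_common():
    print(f"group {key}: {count} occurrence(s)")
```

The normalization step is what keeps grouping from being too broad or too narrow, the exact complaint surveyed developers raised about traditional error aggregation.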

This is now possible using continuous code improvement, which enables developers to observe and act on issues — often before customers are even aware that such problems exist.

Continuous code improvement also makes developers more productive because they can now spend less time debugging and more time building innovative solutions that add new value.

Cory Virok is CTO and Co-Founder of Rollbar