The Power of Deep Code Insights

Jina Na
AppDynamics

The rise of technologies like cloud computing and automated delivery pipelines has enabled teams to deliver software at breakneck speed. In fact, top tech companies deploy software hundreds, even thousands of times per day, raising the bar for digital services. To stay competitive, organizations in every industry must match the pace of innovation set by these digital-native companies.

However, it's challenging to maintain heterogeneous applications and ensure that a service is not only available but also delighting users and driving business outcomes. From the front-end experience to the back-end architecture, the application is supported by a web of third-party services, legacy data centers and distributed, multi-cloud infrastructure.

If you're unable to effectively manage these complex application environments, your business is impacted: an outage leads to a poor user experience, which leads to lost business and a drain on organizational productivity and resources. Take, for instance, what recently happened with the Iowa caucus app. Coding issues led to significant delays in counting and reporting important primary results, which in turn led other states, such as Nevada, to pull two previously developed apps for their own primary elections, losing tens of thousands of dollars in the process.

This example shows that while deployment velocity has increased exponentially, traditional approaches to troubleshooting fall short when it comes to equipping developers (and IT teams) with enough information to pinpoint the root cause of application code issues.

In fact, according to Stripe Research, developers spend roughly 17.3 hours each week debugging, refactoring and modifying bad code — valuable time that could be spent writing more code, shipping better products and innovating. The bottom line? Nearly $300B (US) in lost developer productivity every year.
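
To put that bottom line in perspective, here is a back-of-the-envelope sketch of how a weekly figure like 17.3 hours can scale to a number of that magnitude. Only the hours-per-week value comes from the article; the developer headcount, work weeks and fully loaded hourly cost below are illustrative assumptions, not figures from the Stripe report.

    # Back-of-the-envelope sketch (Python). Only the 17.3 hours/week comes from
    # the Stripe figure cited above; the other inputs are assumptions.
    HOURS_PER_WEEK_ON_BAD_CODE = 17.3   # from the Stripe figure
    WORK_WEEKS_PER_YEAR = 48            # assumption
    DEVELOPERS_WORLDWIDE = 18_000_000   # assumption
    HOURLY_COST_USD = 20                # assumed average fully loaded cost

    lost_hours_per_dev = HOURS_PER_WEEK_ON_BAD_CODE * WORK_WEEKS_PER_YEAR
    total_lost_usd = lost_hours_per_dev * DEVELOPERS_WORLDWIDE * HOURLY_COST_USD

    print(f"Lost hours per developer per year: {lost_hours_per_dev:,.0f}")  # ~830
    print(f"Estimated annual cost: ${total_lost_usd / 1e9:,.0f}B")          # ~$299B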

What happened in Iowa is just one example of how developers are often blamed for code-level issues, issues that, with the right level of insight, could be diagnosed in production before they impact the digital experience for customers.

So what's the solution, and what opportunities open up for developers when they spend more time writing code and less time debugging?

The Aha Moments — What Code-Level Insights Bring to Life

The job of a developer is never-ending, given business priorities and product roadmaps. For those battling issues in monolithic environments or in highly distributed, microservices-based applications, code-level insights greatly improve software delivery efficiency by enabling developers to spend less time debugging and more time delivering world-class software.

Specifically, once developers ship their code today, their access to the application and its data is restricted. This means that most dev teams are forced to rely on time- and resource-intensive logging to collect the critical data needed to understand the cause of any performance impact.

Instead of relying on this time-intensive, often manual process, developers who leverage code-level insights can capture critical data and context on demand. With this level of insight, developers can collect the necessary information exactly when they need it to pinpoint what's causing an issue. As a result, teams have seen a decrease in MTTR (mean time to resolution), improved overall IT efficiency, tighter alignment between Operations and Development teams and, according to recent studies, a 25 percent improvement in developer productivity, freeing up valuable time to focus on releasing new features.
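
As a rough illustration of what on-demand capture can look like in practice, here is a minimal Python sketch of the general pattern: a hypothetical decorator (not the AppDynamics API) that records arguments, timing and the stack trace for a named function only while an investigation is active, rather than logging everything all the time. The function name and capture toggle are assumptions made for the example.

    # Illustrative sketch only: a generic pattern for on-demand capture of
    # runtime context, not the AppDynamics API.
    import functools
    import time
    import traceback

    CAPTURE_ENABLED = {"checkout_total"}  # toggled on demand, e.g. via a config flag

    def capture_on_demand(func):
        """Record arguments, timing and exceptions only while capture is enabled."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if func.__name__ not in CAPTURE_ENABLED:
                return func(*args, **kwargs)  # no extra work when not investigating
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            except Exception:
                snapshot = {
                    "function": func.__name__,
                    "args": args,
                    "kwargs": kwargs,
                    "elapsed_ms": (time.perf_counter() - start) * 1000,
                    "stack": traceback.format_exc(),
                }
                print("diagnostic snapshot:", snapshot)  # or ship to your APM backend
                raise
        return wrapper

    @capture_on_demand
    def checkout_total(items):
        return sum(item["price"] * item["qty"] for item in items)

The point of the pattern is that the context needed to pinpoint a production issue is collected at the moment it's needed, instead of sifting through always-on logs after the fact.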

With that time back, developers can focus on building market-differentiating products that improve user experience, strengthen customer satisfaction and advance business priorities. This is especially key for organizations competing with younger, digital-native companies.

Jina Na is Associate Product Marketing Manager at AppDynamics
