How to Shift Left with Code Profiling

Madeline Horton
Stackify

What is "Shifting Left?"

Development teams that adopt shift-left practices typically rely on frequent testing to speed up delivery and keep projects on schedule.

In Agile, development and testing work in tandem, with testing performed at each stage of the software development lifecycle (SDLC). This combination of development and testing is known as "shifting left." Shift left is a testing practice intended to resolve errors and performance bottlenecks as early in the SDLC as possible.

Before Agile, software testing followed the waterfall methodology, in which all testing occurs just before deployment, from the non-production environments through to production. With waterfall's pre-deployment testing, issues in the code are found far too late, and the release is inevitably delayed until every bottleneck is fixed. The code then re-enters a testing period, which continues until all bugs are resolved and the code is deployed into the production environment. The waterfall methodology often hurts a project's deliverability and timeline, and the longer time to market cuts directly into business revenue.

How Can I "Shift Left?"

In order to properly shift left, continuous testing must begin as soon as a developer starts to write code. A code profiler is one way of receiving immediate feedback and implementing a continuous testing loop in the preliminary stages of development.

Code profiling is one tool developers use to shift left and test frequently throughout the SDLC. Why? Fixing code directly while writing it on the developer's workstation is shifting about as far left as possible. At that point, issues are found before the code is even committed to a QA or non-production environment.

Traditionally, developers have used code profilers to identify performance bottlenecks without having to constantly touch their code. Code profilers are useful for answering questions such as "How many times is each method being called in my code?" or "How long are these methods taking?" Additionally, code profilers track useful information such as memory allocation, garbage collection, web requests, and key methods in your code.
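
For a concrete sense of the answers a profiler gives, here is a minimal sketch using Python's built-in cProfile and pstats modules. The process_orders and slow_lookup functions are made-up stand-ins for real application code; the same questions apply to profilers in any language.

    import cProfile
    import pstats

    def slow_lookup(n):
        # Deliberately inefficient work so the profiler has something to measure
        return sum(i * i for i in range(n))

    def process_orders(order_count):
        results = []
        for _ in range(order_count):
            results.append(slow_lookup(10_000))
        return results

    profiler = cProfile.Profile()
    profiler.enable()
    process_orders(200)
    profiler.disable()

    # Report call counts (ncalls) and time spent per function, most expensive first
    pstats.Stats(profiler).sort_stats("tottime").print_stats(10)

In the resulting report, the ncalls column answers "how many times is each method being called," while tottime and cumtime answer "how long are these methods taking."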

There are two types of code profilers: server-side profilers and desktop profilers. Server-side profilers track key methods in both pre-production and production environments, measuring transaction timing and providing increased visibility into errors and logs. Another term for server-side profiling is Application Performance Management, or APM.
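
At its core, the per-transaction data a server-side profiler collects is timing plus error context. The following is a hand-rolled illustrative sketch of that idea, not a real APM agent or any vendor's API; handle_checkout is a hypothetical request handler.

    import functools
    import logging
    import time

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("apm-sketch")

    def track_transaction(func):
        # Record how long a "transaction" (e.g. a request handler) takes and log
        # failures, roughly the per-transaction data a server-side profiler reports.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            except Exception:
                log.exception("transaction %s failed", func.__name__)
                raise
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                log.info("transaction %s took %.1f ms", func.__name__, elapsed_ms)
        return wrapper

    @track_transaction
    def handle_checkout(cart_size):
        time.sleep(0.01 * cart_size)  # stand-in for real work and dependency calls
        return "ok"

    handle_checkout(3)

A real APM agent instruments this automatically across every request, but the shape of the data it gathers is the same: transaction name, duration, and any errors raised along the way.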

A desktop code profiler tracks the performance of every line of code within an individual method, along with memory allocations and garbage collection, to help track down memory leaks. Unfortunately, desktop profiling often causes applications to run slower than usual. As a result, most developers treat desktop profilers as a situational tool rather than something for daily use, typically reaching for one only when investigating a CPU or memory problem.
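
To make the memory side of this concrete, here is a small sketch using Python's standard-library tracemalloc module to compare allocation snapshots and surface a simulated leak. The leaky_cache and handle_request names are illustrative; a real desktop profiler adds garbage-collection and per-line timing detail on top of this kind of data.

    import tracemalloc

    leaky_cache = []  # simulates a collection that grows without bound

    def handle_request(payload_size):
        # Bug: every request's payload is kept forever
        leaky_cache.append(bytearray(payload_size))

    tracemalloc.start()
    before = tracemalloc.take_snapshot()

    for _ in range(1_000):
        handle_request(4_096)

    after = tracemalloc.take_snapshot()

    # Show which source lines allocated the most new memory between snapshots
    for stat in after.compare_to(before, "lineno")[:5]:
        print(stat)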

Hybrid profilers exist to provide both the granularity of a desktop code profiler and the lightweight nature of a server-side profiler. In a sense, hybrid profilers offer the best of both worlds, merging key data from the server-side profiler with code-level details from the desktop profiler. Their lightweight nature makes them suitable for everyday use, with server-level insights and the ability to track key methods, transactions, dependency calls, errors, and logs.

What Code Profiler Should I Pick?

Having established why a code profiler matters when shifting left, keep a few things in mind when picking one. Profilers often need to be built into the code itself, which is why most desktop code profilers make applications run slowly and are only used in specific circumstances. When evaluating application performance management tools, note that most APM tools require code changes or multiple configuration changes.

Whether you shift left with a server-side, desktop, or hybrid code profiler, profilers are essential for finding the hot path in your code. For example, a code profiler can identify the small fraction of your code, often around 20%, that accounts for most of the total CPU usage, and then help you determine what to do to improve it.
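
As a rough sketch of how that hot-path search might look with the same standard-library tooling used above, you can profile a whole run, save the results, and sort by cumulative time. The file and script names here are placeholders.

    # First, profile a full run and save the results, e.g.:
    #   python -m cProfile -o app.prof my_app.py   (my_app.py is a placeholder)
    import pstats

    stats = pstats.Stats("app.prof")
    stats.strip_dirs().sort_stats("cumulative")
    stats.print_stats(10)  # the handful of call paths dominating total CPU time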

Additionally, you can use a code profiler to proactively find memory leaks and to track the performance of dependency calls and transactions.

Code profilers are a necessary tool for continuously testing and improving your code throughout the SDLC, because they help identify the methods where changes will yield the greatest improvement over time.

Madeline Horton is a Campaign Marketing Strategist at Stackify.
