Making Sense of APM and Ending the Agent/Agentless War

Antonio Piraino

Application Performance Management (APM) is a hot topic right now. Gartner defines APM as agent-based monitoring that sits inside the operating system and provides code-level performance, tracing, application mapping, and tracking. How exactly does APM help an organization, and when would a business choose to invest in this technology? When does APM make sense and when doesn’t it? And, more broadly, how does this tie into the changing needs of IT monitoring? Finally, why does the agent vs. agentless debate continue to rage on?

Simply put, enterprises that write their own code (Java, .NET, etc.) and run applications unique to the way they do business must have code-level application visibility. More specifically, companies that place a high premium on understanding how their code executes in a production environment, and what that behavior means for business-critical, revenue-generating, bespoke applications, need APM.
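
To make "code-level visibility" concrete, here is a minimal sketch in Java of the kind of instrumentation an APM agent surfaces, written against the OpenTelemetry tracing API purely for illustration. The article doesn't name a specific product, so the service, tracer name, and downstream calls below are hypothetical.

    import io.opentelemetry.api.GlobalOpenTelemetry;
    import io.opentelemetry.api.trace.Span;
    import io.opentelemetry.api.trace.StatusCode;
    import io.opentelemetry.api.trace.Tracer;
    import io.opentelemetry.context.Scope;

    public class CheckoutService {

        // Tracer wired up by whatever agent/SDK the organization deploys at startup.
        private static final Tracer tracer =
                GlobalOpenTelemetry.getTracer("example.checkout"); // hypothetical instrumentation name

        public void placeOrder(String orderId) {
            // Each business operation becomes a span, so the APM backend can show
            // per-transaction latency, errors, and the downstream calls it maps to.
            Span span = tracer.spanBuilder("placeOrder").startSpan();
            try (Scope ignored = span.makeCurrent()) {
                span.setAttribute("order.id", orderId);
                chargeCard(orderId);   // hypothetical downstream call
                reserveStock(orderId); // hypothetical downstream call
            } catch (RuntimeException e) {
                span.recordException(e);
                span.setStatus(StatusCode.ERROR);
                throw e;
            } finally {
                span.end();
            }
        }

        private void chargeCard(String orderId) { /* ... */ }
        private void reserveStock(String orderId) { /* ... */ }
    }

This is the level of detail the agent feeds back: which transaction, which downstream call, how long it took, and where it failed.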

That said, APM is unnecessary for the vast majority of commercial applications the enterprise did not author, because code-level visibility adds little there: think of a CAD application or an ERP solution purchased from a commercial vendor. There is also the cost consideration. With a single APM agent typically running somewhere between $150 and $200 per month, blanketing every application with agents simply doesn't make financial sense. If your authentication service goes down, you're not going to put an APM agent on it. In fact, most of your operators wouldn't even know what to do with the deep code-level data coming back.
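
A back-of-the-envelope calculation shows why. The per-agent price range comes from the figure above; the fleet size of 500 instances is an assumption chosen only for illustration.

    public class ApmCostSketch {
        public static void main(String[] args) {
            int instances = 500;       // assumed fleet size, for illustration only
            double lowMonthly = 150.0; // per-agent price range quoted above
            double highMonthly = 200.0;

            double annualLow = instances * lowMonthly * 12.0;   // 500 x 150 x 12 = $900,000
            double annualHigh = instances * highMonthly * 12.0; // 500 x 200 x 12 = $1,200,000

            System.out.printf("Blanket APM coverage for %d instances: $%,.0f to $%,.0f per year%n",
                    instances, annualLow, annualHigh);
        }
    }

At roughly a million dollars a year for a modest fleet, agents are worth deploying only where the code is yours and the revenue depends on it.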

Today we're seeing traditional IT infrastructure management vendors move toward an application-centric view of the world, while APM vendors attempt to gain broader visibility into the entire IT infrastructure. Enterprises need to understand how all of their infrastructure is working: what's up, what's down, what's running well and what's not, plus capacity planning, failure analysis, and keeping the lights on across a vast, complicated set of IT technologies. At the same time, they need to know how their applications are doing. But rather than handpicking one or two "important" applications for code-level visibility, what they really want is for the two types of vendors to meet in the middle.

So most organizations combine application-aware infrastructure monitoring across all applications and augment it in select places with APM for their custom applications.
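
One way to picture that split is as a simple tiering rule. The sketch below is hypothetical; the app attributes are illustrative and not any vendor's actual data model.

    import java.util.List;

    public class MonitoringTierSketch {

        // Hypothetical app descriptor; the attributes are illustrative only.
        record App(String name, boolean bespoke, boolean revenueCritical) {}

        enum Tier { APM_PLUS_INFRASTRUCTURE, INFRASTRUCTURE_ONLY }

        // Every app gets application-aware infrastructure monitoring;
        // only bespoke, revenue-critical apps also carry a code-level APM agent.
        static Tier tierFor(App app) {
            return (app.bespoke() && app.revenueCritical())
                    ? Tier.APM_PLUS_INFRASTRUCTURE
                    : Tier.INFRASTRUCTURE_ONLY;
        }

        public static void main(String[] args) {
            List<App> portfolio = List.of(
                    new App("custom-trading-engine", true, true),
                    new App("vendor-erp-suite", false, true),
                    new App("internal-wiki", false, false));

            portfolio.forEach(a -> System.out.println(a.name() + " -> " + tierFor(a)));
        }
    }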

On to the war — agent-based versus agentless monitoring. For years now we’ve heard sniping back and forth as to which model is best suited for enterprise IT. Both approaches have their pros and cons. Agents can provide more granular performance metrics, while agentless monitoring platforms are often easier to manage. But to say you can only have one or the other is a canard. There are vendors that provide customers with the option to deploy both models simultaneously, depending on the customer’s need.
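
To make the contrast concrete, here is a minimal agentless-style availability poll using only the JDK's HTTP client; the health endpoint is hypothetical. An agent-based approach would instead run inside the host or JVM and push far richer, code-level data outward.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    public class AgentlessHealthPoll {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofSeconds(5))
                    .build();

            // Hypothetical health endpoint; an agentless poller checks from the outside,
            // so nothing has to be installed on the monitored host.
            HttpRequest request = HttpRequest.newBuilder(URI.create("http://app.example.internal/health"))
                    .timeout(Duration.ofSeconds(5))
                    .GET()
                    .build();

            long start = System.nanoTime();
            HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());
            long latencyMs = (System.nanoTime() - start) / 1_000_000;

            // Availability and response time are roughly as granular as this model gets;
            // method-level timings, queries, and traces are the agent's territory.
            System.out.printf("status=%d latency=%dms%n", response.statusCode(), latencyMs);
        }
    }

The trade-off is plain: the poller is trivial to roll out and maintain, but it can only tell you the service answered, not why it was slow.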

If there is one inalienable truth about IT, it's that IT has always been, and always will be, heterogeneous. The complexity of systems and infrastructure ecosystems demands it, and IT will never converge on homogeneity. Enterprise IT organizations should not have to choose between APM and application-aware infrastructure monitoring, nor should they be forced to adopt a single approach to gathering performance metrics. That, of course, isn't stopping vendors from shouting from the rooftops.

