
Q&A Part One: TRAC Research Talks About the APM Spectrum

Pete Goldin
APMdigest

In Part One of APMdigest's exclusive interview, Bojan Simic, President and Principal Analyst at TRAC Research, talks about the firm's new APM Spectrum report.

APM: What was your goal for the APM Spectrum?

TRAC: We had three key goals for the report: to dive deeper into the APM market, reduce some of the confusion around APM and make the information relevant and actionable.

We are publishing in an interactive format because there is so much information. If we put it in a PDF or Word document it would be close to 90 pages. So we have divided the report into bite-size chunks, based on your job role, your use case, and the challenges you are trying to address. You can quickly and easily see exactly what you care about.

APM: What exactly does the APM Spectrum cover?

TRAC: The report covers 30 angles from which to look at the APM market, including topics such as the nine submarkets of APM technologies that we identified; five key application performance challenges; the business areas being impacted by application performance issues; return on investment of APM solutions; the APM deployment process; vertical industries including telecom, healthcare, finance and retail; use cases including cloud, virtualization, mobility, Web services/Web APIs and Big Data; best practices; and recommendations.

In this APM Spectrum report we are not delving into which vendor is providing more capabilities — it is more about where they fit into different user requirements. True vendor evaluation is being conducted in our Hub studies and Vendor Index reports.

APM: Do readers of the report check off their particular needs and then get a unique recommendation?

TRAC: There are general recommendations that are relevant for all APM users, and then they get recommendations specific to their job role as either IT operations, business user, developer or CIO.

APM: Is the report mostly for organizations buying their first APM solutions, or could it be used for ongoing APM initiatives?

TRAC: It is really for both. If you look at the recommendations, some of them are about how to get started with APM and how to create your APM strategy, but a lot of the recommendations are also about how to make your current initiative more effective. It is not only from a technology perspective; we talk about organizational aspects as well.

APM: What is the biggest challenge a user faces when selecting an APM tool?

TRAC: That is a great question, and that is why TRAC built this APM Spectrum. There is a lot of confusion in the market. There are a lot of vendors in the market that have a very similar message. They talk about the same things, while sometimes doing completely different things.

One thing the vendors did to themselves: they jumped on some of the hot topics around APM to raise awareness about who they are, and they confused the heck out of the market. All of a sudden they all look the same. We get a lot of requests from end-users to explain what exactly specific vendors do around APM.

We are trying to reduce all the confusion in the market, and show that APM is not one single market — it is a concept around managing the delivery of applications to end-users. There are at least nine submarkets of APM, and each has its own unique buying requirements. So we are showing people that one size does not fit all in APM. For the majority of end-users, there is no such thing as an APM solution that will take care of all of your needs.

Also, if you look at different use cases, these solutions are not as effective in every use case — cloud, Big Data, mobility. Even though you have a good APM product, you might find that your solution is not as effective when your environment starts changing and you start managing different use cases. So it is really more about finding the right mix of APM capabilities.

It is not about which approach to APM is better, but it is about who you are as an end-user and what you are using APM for.

APM: Did you find that most organizations currently use a single APM solution or multiple tools?

TRAC: Our survey data shows that 71% of respondents are using more than one APM tool. And we did not survey only Fortune 500 and huge companies that are using everything under the sun for APM. Close to 50% of our survey respondents were actually SMB companies.

APM: According to the APM Spectrum, time to value is the key selection criterion for evaluating APM vendors. What is a good vs. an unacceptable time to value that most APM users would experience?

TRAC: I think that is one of the key stories of the study because it shows how the APM market has changed over the last couple of years. APM products used to take 3-4 months to deploy, with a couple of people working on it full-time, and a lot of consulting hours. For that reason, APM was not very appealing to many organizations. But there has been a major shift in the market where deployment went from four months to a day or two. We are now seeing a day or two, sometimes a week, from the point you start deploying the solution to where you start seeing value in the data coming back. But two or three months, even one month, is definitely too long.

Read Q&A Part Two: TRAC Research Talks About the APM Spectrum
