
Q&A Part Two: Insight from Forrester on APM

Pete Goldin
Editor and Publisher
APMdigest

In Part Two of APMdigest's exclusive interview, Jean-Pierre Garbani, Vice President, Principal Analyst at Forrester, provides valuable insight into the present state and future of APM.

Click here to start with Part One of APMdigest's interview with Forrester's JP Garbani

APM: What is the most significant recent advancement that has transformed APM?

JPG: The ability to trace individual transactions is the biggest progress that has been made. Starting around 2006, products appeared on the market with the level of granularity needed to trace individual transactions – products like OpTier or Correlsense – and a number of others have since joined them. These solutions come with built-in transaction tracing, and that was a significant advancement because there is no other way to bring all the components together.

If you have 10,000 servers, there is no way you can find anything. You can monitor everything, but if the end user calls and says something does not work, how are you going to find which of these 10,000 servers is actually causing the problem? It sounds simple: you just monitor each server, and the one that is not doing everything right is the one causing the problem. In actuality it is much more subtle than that, because you could have a server that does not seem to be overloaded but actually is overloaded for what you are asking it to do. It becomes extremely complex, far beyond the capabilities of the human brain. So the greatest progress we have made in APM is the ability to identify which five servers out of 10,000 are involved in a particular transaction.
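To illustrate the idea JPG describes – not any particular vendor's implementation – here is a minimal sketch of correlation-ID-based transaction tracing in Python. All server names, transaction IDs, and data are hypothetical; a real tracer would propagate the ID in request headers via an agent rather than build lists in memory.

```python
import random

random.seed(7)

# 10,000 hypothetical servers.
servers = [f"srv-{i:05d}" for i in range(10_000)]

def record_spans(txn_id, path):
    """Emit one span per server the transaction touches.
    In a real tracer, an agent injects the transaction ID into each
    request hop so every span can be attributed to the end-user transaction."""
    return [(txn_id, srv) for srv in path]

# One end-user transaction traverses just five of the 10,000 servers.
path = random.sample(servers, 5)
spans = record_spans("txn-user-1", path)

# Background noise: thousands of other transactions on other servers.
for i in range(5_000):
    spans.extend(record_spans(f"txn-bg-{i}", random.sample(servers, 3)))

def servers_for(spans, txn_id):
    """The core of transaction tracing: filter spans by transaction ID
    to recover exactly the servers involved in that one transaction."""
    return sorted({srv for tid, srv in spans if tid == txn_id})

involved = servers_for(spans, "txn-user-1")
print(involved)  # exactly the five servers on the transaction's path
```

Without the correlation ID, the only alternative is inspecting all 10,000 servers and guessing which anomaly matters – the problem JPG says is beyond the human brain.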

APM: Would you say APM is the realization of the Business Service Management (BSM) concept?

JPG: Yes, it is what we thought BSM would be in 2004. Back then, we were all thinking that dependency discovery would be incorporated into monitoring products, so we were already thinking of what APM is today. What derailed the movement toward APM? First, we very quickly realized that application dependency discovery, as defined by the CMDB, is not fast enough and not detailed enough.

The second thing that happened was ITIL. All of a sudden everyone was looking at streamlining processes, and all the vendors decided it was more lucrative to build CMDBs and build ITIL processes around them, rather than improve the monitoring solutions. It is only in the last two years that APM has come to life. It is still not perfect, but we are getting there.

APM: What is APM technology missing right now that would enable true BSM?

JPG: Today APM has the ability to collect data from multiple sources, and the capability to understand the map of the transactions. Now we need more intelligence, to be able to interpret the data from different sources in a meaningful way.

All the vendors are trying to bring some level of analytics to APM. I have seen interesting solutions from HP, which recently made a lot of progress in this area. Netuitive has also made great progress. Netuitive is an example I recommend to many clients because they are independent; they can bring together all this data from different sources. I have also seen progress made by companies like IBM, BMC and Quest Software. I think there is certainly more progress to be made. Maybe it is a matter of another jump in technology capabilities, such as processor speed. Maybe two years down the road we will have faster processors that will let us create even more complex algorithms.
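The "more intelligence" JPG asks for amounts to interpreting each metric against its own history rather than against fixed absolute thresholds. As a hedged sketch of that idea (the metric names and numbers below are invented, and real analytics products use far more sophisticated models), a simple z-score baseline over data from different sources:

```python
from statistics import mean, stdev

# Hypothetical metric histories from three different sources
# (application agent, database monitor, message queue).
baseline = {
    "app-latency-ms": [120, 118, 125, 122, 119, 121],
    "db-queries-per-s": [300, 310, 295, 305, 298, 302],
    "queue-depth": [4, 5, 3, 4, 5, 4],
}
current = {"app-latency-ms": 190, "db-queries-per-s": 301, "queue-depth": 14}

def anomalies(baseline, current, threshold=3.0):
    """Flag metrics whose current value deviates sharply from their own
    history. A queue depth of 14 is tiny in absolute terms, but against a
    baseline of 3-5 it is the server that is 'overloaded for what you are
    asking it to do'."""
    flagged = []
    for name, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        z = (current[name] - mu) / sigma if sigma else 0.0
        if abs(z) > threshold:
            flagged.append(name)
    return flagged

print(anomalies(baseline, current))
```

Here latency and queue depth are flagged while database throughput, which is within its normal band, is not – the kind of cross-source interpretation the analytics vendors are chasing.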

APM: In your APM Market Overview you talked about the convergence of APM and BTM (business transaction management). Do you foresee continued convergence of APM with other technologies?

JPG: Sure. You have to look at what the enterprise will need tomorrow. What is the next move? The next move may be totally abstracting your infrastructure in the cloud, for example. Maybe you want to manage capacity or the financial aspect of your service delivery. Can you use the data you collected from all these transactions and feed it into capacity management that will tell you what to provision in the next year or so? Will it also feed financial management so you can understand how much that will cost, and do a cost value analysis? If you mature in terms of APM, you have control over incident and problem management. That hurdle is behind you. What is the next hurdle?
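Feeding transaction volumes into capacity management, as JPG suggests, can be as simple as projecting the observed trend forward and converting it into servers to provision. The sketch below assumes idealized linear growth and an invented per-server capacity figure; it is an illustration of the data flow, not a capacity-planning method:

```python
# Hypothetical monthly transaction volumes collected by APM over a year.
months = list(range(1, 13))
volumes = [1000 + 150 * m for m in months]  # steady growth, for illustration

def linear_forecast(xs, ys, x_future):
    """Least-squares trend line over the history, projected to x_future."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return intercept + slope * x_future

# Project volume a year out (month 24) and translate it into provisioning,
# given an assumed capacity of 500 transactions per server.
projected = linear_forecast(months, volumes, 24)
per_server_capacity = 500
servers_needed = -(-int(projected) // per_server_capacity)  # ceiling division
print(projected, servers_needed)
```

The same projected volume could feed financial management: multiply servers_needed by a cost per server and you have the cost-value analysis JPG mentions.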

APM: How do you see IT-as-a-Service and cloud impacting APM in the future?

JPG: The problem becomes different because the infrastructure itself is abstract. Some of your APM efforts are simplified because you don't need to monitor the servers themselves. But you still need to monitor the performance of components in the cloud, so it changes some of the dynamics. You can still use your capabilities to monitor the code, the capacity, the size of the VMs that you put into the cloud.

Another question to consider: If everyone is in the cloud, do we have enough capacity in the Internet? Are we going to have a shortage of bandwidth to accommodate all the information back and forth? I think that is a valid question about APM. There is a finite capacity in everything.

APM: Do you have any other predictions for the future of APM?

JPG: Looking at the condition of IT in midsize enterprises – midsize being a broad range, anything with more than 1,500 employees and fewer than 20,000 – there is still a lot to be done to bring them to the right level in terms of APM. There is no shortage of work in the APM market for the next few years, in my opinion.

Click here to read Part One of APMdigest's interview with Forrester's JP Garbani

ABOUT Jean-Pierre Garbani

J.P. Garbani came to Forrester through the acquisition of Giga Information Group, where he was the research director of the computing infrastructure group. J.P. started his IT career in early 1968 as a software engineer working on the automation of nuclear power plants in France. J.P. then joined Bull General Electric in Paris (subsequently Honeywell Bull), where he was a designer and project leader of very large network infrastructures in France, Scandinavia, and the US. At Bull, J.P. occupied several positions in engineering, marketing, and sales. J.P. moved to the US in 1984 and filled several engineering and marketing positions with Bull Information Systems. In 1994, J.P. created Epitome Technology Corporation, a middleware software company focused on manufacturing execution systems. Prior to joining Giga, J.P. worked as an IT management consultant for several large financial institutions in the US.
