Q&A Part Two: Insight from Forrester on APM
February 28, 2012

In Part Two of APMdigest's exclusive interview, Jean-Pierre Garbani, Vice President, Principal Analyst at Forrester, provides valuable insight into the present state and future of APM.

Click here to start with Part One of APMdigest's interview with Forrester's JP Garbani

APM: What is the most significant recent advancement that has transformed APM?

JPG: The ability to trace individual transactions is the biggest progress that has been made. Starting around 2006, products began to appear on the market with the level of granularity needed to trace individual transactions – products like OpTier or Correlsense. Now a number of other products have joined them. These solutions come with built-in transaction tracing, and that was a really significant advancement because there is no other way to bring all the components together.

If you have 10,000 servers, there is no way you can find anything. You can monitor everything, but if the end user calls and says something does not work, how are you going to find which of these 10,000 servers is the one actually causing the problem? It sounds simple: you just monitor each server, and the one that is not doing everything right is the one causing the problem. In actuality it is much more subtle than that, because you could have a server that does not seem to be overloaded but actually is overloaded for what you are asking it to do. It becomes extremely complex and is far beyond the capabilities of the human brain. So the greatest progress we have made in APM is the ability to identify which five servers out of 10,000 are involved in a particular transaction.
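To make the correlation idea concrete, here is a minimal sketch (not from any particular vendor's product) of how a shared transaction ID narrows thousands of servers down to the handful that actually touched one request. The record format, server names and latencies are hypothetical, purely for illustration:

from collections import defaultdict

# Each record is (transaction_id, server, latency_ms), as instrumentation
# on every tier might report it. The data is invented for this example.
records = [
    ("txn-42", "web-0113", 12),
    ("txn-42", "app-0877", 35),
    ("txn-42", "db-0031", 410),   # the slow hop
    ("txn-77", "web-0245", 9),
    ("txn-77", "app-0877", 28),
]

# Group every hop by its transaction ID.
by_txn = defaultdict(list)
for txn_id, server, latency_ms in records:
    by_txn[txn_id].append((server, latency_ms))

def servers_for(txn_id):
    """Return the servers involved in one transaction, slowest hop first."""
    return sorted(by_txn[txn_id], key=lambda hop: hop[1], reverse=True)

print(servers_for("txn-42"))
# [('db-0031', 410), ('app-0877', 35), ('web-0113', 12)]

Instead of watching 10,000 servers individually, the question becomes "which hops belong to this transaction, and which one is slow" – the shift Garbani describes.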

APM: Would you say APM is the realization of the Business Service Management (BSM) concept?

JPG: Yes, it is what we thought BSM would be in 2004. Back then, we were all thinking that dependency discovery would actually be incorporated into monitoring products. So we were already thinking of what APM is today. Two things derailed the movement towards APM. First, we very quickly realized that application dependency discovery, as it is defined by the CMDB, is not fast enough and not detailed enough.

The second thing that happened was ITIL. All of a sudden everyone was looking at streamlining processes, and all the vendors decided that it was more lucrative to build CMDBs and build ITIL processes around them, rather than improve the monitoring solutions. It is only in the last two years that APM has come into real life. It is still not perfect, but we are getting there.

APM: What is APM technology missing right now that would enable true BSM?

JPG: Today APM has the ability to collect data from multiple sources, and the capability to understand the map of the transactions. Now we need more intelligence: the ability to interpret the data from different sources in a meaningful way.
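As a rough illustration of what that added intelligence might look like (a toy sketch, not any vendor's algorithm), consider learning a simple per-metric baseline from history and flagging whichever data source deviates from it. The metric names, values and threshold below are hypothetical:

from statistics import mean, stdev

# Historical samples gathered from different sources (application, database,
# network). All numbers are invented for this example.
history = {
    "app.response_ms": [120, 118, 125, 122, 119, 121],
    "db.cpu_pct":      [35, 38, 36, 40, 37, 39],
    "net.retransmits": [2, 1, 3, 2, 2, 1],
}

latest = {"app.response_ms": 480, "db.cpu_pct": 41, "net.retransmits": 2}

def anomalies(history, latest, k=3.0):
    """Flag metrics whose latest sample is more than k standard deviations
    from their historical mean."""
    flagged = {}
    for name, samples in history.items():
        mu, sigma = mean(samples), stdev(samples)
        if sigma and abs(latest[name] - mu) > k * sigma:
            flagged[name] = latest[name]
    return flagged

print(anomalies(history, latest))   # {'app.response_ms': 480}

Real analytics engines use far richer models than a mean-plus-three-sigma check, but the principle is the same: the interpretation, not the collection, is where the intelligence lives.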

All the vendors are trying to bring some level of analytics to APM. I have seen interesting solutions from HP, which recently made a lot of progress in this area. Netuitive has also made great progress. Netuitive is an example that I recommend to many clients because they are independent. They have the capability to bring together all this data from different sources. I have also seen progress made by companies like IBM, BMC and Quest Software. I think there is certainly more progress to be made. Maybe it is a matter of another jump in technology capabilities, such as processor speed. Maybe two years down the road we will have faster processors that will let us create even more complex algorithms.

APM: In your APM Market Overview you talked about the convergence of APM and BTM (business transaction management). Do you foresee continued convergence of APM with other technologies?

JPG: Sure. You have to look at what the enterprise will need tomorrow. What is the next move? The next move may be totally abstracting your infrastructure in the cloud, for example. Maybe you want to manage capacity or the financial aspect of your service delivery. Can you use the data you collected from all these transactions and feed it into capacity management that will tell you what to provision in the next year or so? Will it also feed financial management so you can understand how much that will cost, and do a cost-value analysis? If you mature in terms of APM, you have control over incident and problem management. That hurdle is behind you. What is the next hurdle?
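As a hedged sketch of the capacity-management feed Garbani mentions, one could fit a simple trend to historical transaction volumes and project it a year ahead. The monthly volumes and the 30% headroom factor below are invented for illustration; real capacity planning would account for seasonality, per-tier costs, and more:

# Monthly transaction volumes, in millions (hypothetical data).
monthly_volumes = [1.00, 1.04, 1.09, 1.15, 1.18, 1.25]

n = len(monthly_volumes)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(monthly_volumes) / n

# Least-squares slope of the volume trend.
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, monthly_volumes)) \
        / sum((x - x_mean) ** 2 for x in xs)

months_ahead = 12
projected = y_mean + slope * (n - 1 + months_ahead - x_mean)
headroom = 1.3   # provision 30% above the projection

print(f"Projected monthly volume in {months_ahead} months: {projected:.2f}M")
print(f"Capacity to provision (with headroom): {projected * headroom:.2f}M")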

APM: How do you see IT-as-a-Service and cloud impacting APM in the future?

JPG: The problem becomes different because the infrastructure itself is abstract. Some of your APM efforts are simplified because you don't need to monitor the servers themselves. But you still need to monitor the performance of components in the cloud, so it changes some of the dynamics. You can still use your capabilities to monitor the code, the capacity, the size of the VMs that you put into the cloud.

Another question to consider: If everyone is in the cloud, do we have enough capacity in the Internet? Are we going to have a shortage of bandwidth to accommodate all the information back and forth? I think that is a valid question about APM. There is a finite capacity in everything.

APM: Do you have any other predictions for the future of APM?

JPG: Looking at the condition of IT in midsize enterprises – midsize being a broad range, anything with more than 1,500 employees and fewer than 20,000 – there is still a lot to be done to bring them to the right level in terms of APM. There will be no shortage of demand in the APM market for the next few years, in my opinion.

Click here to read Part One of APMdigest's interview with Forrester's JP Garbani

ABOUT Jean-Pierre Garbani

J.P. Garbani came to Forrester through the acquisition of Giga Information Group, where he was the research director of the computing infrastructure group. J.P. started his IT career in early 1968 as a software engineer working on the automation of nuclear power plants in France. J.P. then joined Bull General Electric in Paris (subsequently Honeywell Bull), where he was a designer and project leader of very large network infrastructures in France, Scandinavia, and the US. At Bull, J.P. occupied several positions in engineering, marketing, and sales. J.P. moved to the US in 1984 and filled several engineering and marketing positions with Bull Information Systems. In 1994, J.P. created Epitome Technology Corporation, a middleware software company focused on manufacturing execution systems. Prior to joining Giga, J.P. worked as an IT management consultant for several large financial institutions in the US.
