In Part Two of APMdigest's exclusive interview, Will Cappelli, Gartner Research VP in Enterprise Management, discusses the past and future of APM, and its key components such as analytics and end-user experience monitoring.
APM: How has APM technology changed in the last couple of years?
A couple of significant changes. One has been the recognition that APM is what we call a multidimensional problem. So you need a collection of different technologies that are looking at the application from different perspectives in order to create a fully rounded picture of what is going on. In the past, many of these different technologies were seen as being competitors of one another. But now I think enterprise buyers recognize that they complement one another.
Second, I would say what has changed most significantly over the last few years is the focus on end-user experience monitoring, especially the drive to capture the real user experience as opposed to capturing some proxy of that user experience. Utilizing synthetic transactions is still seen as valuable in a supplementary way. But the main event around end-user experience monitoring is being able to capture what is actually going on when a user is accessing the system.
The other critical component in all of this is the increasing importance accorded to analytics. As application architectures become more distributed, as they become more dynamic, the ability to see what is happening in the application becomes limited. So you want tools that are able to learn, almost on-the-fly, the relationships among the different variables that describe the states of different components within the application, and then learn the behavior of this dynamic system.
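The kind of on-the-fly relationship learning described above can be illustrated with a small sketch. This is not any vendor's actual algorithm — just a minimal, assumed example of maintaining a streaming (Welford-style) Pearson correlation between two hypothetical component metrics, so the tool's view of how the metrics relate updates as each new sample arrives:

```python
import math

class OnlineCorrelation:
    """Streaming Pearson correlation between two metrics.

    Updates incrementally per sample (Welford-style), so the
    relationship between the two variables can be tracked
    "almost on-the-fly" without storing the full history.
    """

    def __init__(self):
        self.n = 0
        self.mean_x = 0.0
        self.mean_y = 0.0
        self.m2_x = 0.0   # running sum of squared deviations of x
        self.m2_y = 0.0   # running sum of squared deviations of y
        self.c_xy = 0.0   # running co-moment of x and y

    def update(self, x, y):
        self.n += 1
        dx = x - self.mean_x
        dy = y - self.mean_y
        self.mean_x += dx / self.n
        self.mean_y += dy / self.n
        self.m2_x += dx * (x - self.mean_x)
        self.m2_y += dy * (y - self.mean_y)
        self.c_xy += dx * (y - self.mean_y)

    def correlation(self):
        if self.m2_x == 0.0 or self.m2_y == 0.0:
            return 0.0
        return self.c_xy / math.sqrt(self.m2_x * self.m2_y)

# Hypothetical example: app-server latency tracking database latency.
oc = OnlineCorrelation()
for db_ms, app_ms in [(10, 22), (12, 26), (15, 32), (11, 24), (18, 38)]:
    oc.update(db_ms, app_ms)
```

In a real APM analytics layer this would run across many metric pairs at once, with the strongest learned relationships feeding a behavioral model of the application.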
In the future, we expect many of the other elements that constitute APM to play an increasingly subsidiary role, while the center of APM becomes the rich capture of end-user experience data, supplemented by a very powerful analytics capability that draws data from whatever sources are available at the time to learn the causal patterns that describe the application's behavior.
APM: Is the market focusing more on analytics because it is a key component of APM?
APM is one of the factors that has driven increased attention to analytics. Other critical factors are the complexity and dynamism of the virtual environment, trying to deal with some of the complexities of monitoring cloud-based applications, and the ongoing increase in the complexity of IT overall. All these factors are working together.
But APM is one of the most important use cases because you have multiple data sources which need to be looked at simultaneously. Each of the data collection technologies generates vast quantities of data. And then you also have the issue that although the data sets are large, they are not redundant. You cannot just sample a small part of these datasets and figure you've gotten the message of the whole. You really need to be looking at large segments of those datasets in order to learn the lesson they are trying to teach you. That means some kind of automated capability that will allow you to discover the patterns inherent in those data sets.
APM: You mentioned the large datasets. How do you solve that old problem of performance monitoring systems delivering too much information?
You are hitting on a fundamental point. The data volumes are exploding, and they are much more difficult to manage by themselves. In the old days, you would have a person sitting in the NOC looking at a screen, trying to decide whether events were green, yellow, or red. Those days are rapidly going away. You need an aggregating, simplifying pattern discovery capability that will overlay the data and help you make sense of it all.
There are some key, straightforward statistics that you want to present in their simplicity because they are meaningful in and of themselves. There is a discipline, an art, to creating a meaningful higher-level health index that can be presented to the executive, or even to the IT operations professional, with a minimum of explanation. Here is where analytical technologies play a significant role: they allow you to extract meaningful and intuitive graphs that describe the relationships among the different variables that impact the system.
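As a hedged sketch of what such a higher-level health index might look like, the example below rolls several raw metrics into a single 0–100 score. The metric names, target values, and weights are all invented for illustration; a real index would be tuned to the business:

```python
def health_index(observed, targets, weights):
    """Collapse raw metrics into a single 0-100 health score.

    observed: current metric values (lower is better for each).
    targets:  "healthy" reference values for the same metrics.
    weights:  relative importance of each metric (hypothetical).

    Each metric scores target/observed, capped at 1.0, so a metric
    at or better than its target contributes a full score.
    """
    total_weight = sum(weights.values())
    score = 0.0
    for name, weight in weights.items():
        ratio = min(targets[name] / observed[name], 1.0)
        score += (weight / total_weight) * ratio
    return round(100.0 * score, 1)

# Hypothetical metrics: p95 latency and error rate, latency weighted 2:1.
targets = {"latency_ms": 200.0, "error_rate": 0.01}
weights = {"latency_ms": 2.0, "error_rate": 1.0}
current = {"latency_ms": 400.0, "error_rate": 0.01}
```

With latency at twice its target and errors on target, this example yields a health score of roughly 67 — a single number an executive can read without explanation, while the underlying metrics remain available for drill-down.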
APM: The Magic Quadrant talks about the five functionalities of APM, including end-user experience monitoring and analytics, as well as runtime application architecture discovery, modeling and display; user-defined transaction profiling; and component deep-dive monitoring. Are these functionalities based on the vendor offerings?
These five functionalities represent more or less the conceptual model that enterprise buyers have in their heads. I think that, in fact, the vendors came to support that model kicking and screaming. Then many tried to focus the APM problem on one of the dimensions or another. If you go back and look at the various head-to-head competitions and marketing arguments that took place even as recently as two years ago, you see the vendors pushing one of the five functional areas as being the key to APM. I think it was only the persistent demand on the part of enterprise buyers for all five capabilities that drove vendors to populate their portfolios in a way that adequately reflects those five functionalities.
APM: What is missing in APM today?
There are a couple of key areas. It all comes down to the fact that this generation of APM technology has emerged to deal with traditional web-based applications. This points to where the gaps are right now.
First of all, there is the fact that the Internet is becoming more complex. It is much more difficult to see what is happening within the edge of the Internet unless you are actually there monitoring the edge of the Internet. And at the same time, the impact of what is happening at the edge of the Internet on the user's perception of the application's performance is growing. So the need for getting into the edge of the Internet and seeing what is going on there has increased. The way we can summarize that is to say these tools need to get a lot better at monitoring Web 2.0 applications and applications being accessed over mobile devices.
Second, you also have the issue of all those legacy applications that are not web-based. We are finding many enterprises looking at the more traditional legacy applications and becoming frustrated because current APM tools don't handle those environments that well. That includes not only traditional big client environments like SAP or PeopleSoft, but also some of the Citrix-based environments as well. They are not handled by current technologies with the same degree of thoroughness that web-based applications are.
The last area where the vendor community's treatment is patchy is vertical industry applications. These technologies do quite well with in-house developed applications built in Java and .NET, but when it comes to that whole realm of off-the-shelf, industry-specific applications – banking applications, healthcare applications – APM technologies still fall short.
APM: Are companies utilizing APM side by side with their own in-house monitoring for the industry niche apps?
What ends up happening is that you have a great inequality in the degree to which applications are monitored and managed. The in-house developed applications are well monitored, while the packaged apps are usually monitored using some fairly low-level functionality offered by the vendors of the application packages themselves.
APM: You predicted that analytics and end-user experience monitoring will become even more important. Do you have any other predictions about major changes to come for APM?
A couple of other key changes will happen. I think we will see APM becoming increasingly embedded into our overall lifecycle approach to application management. This is an old topic but it has really been revitalized over the last year and a half or so. Sometimes you hear the term “devops”, which comes from the cloud community, but it is basically nothing more than a reawakening of interest in the overall application lifecycle. So we see APM and application development coming together into a single lifecycle approach. And many of the technologies that are used in production are being ported over to the development side in order to create a consistent view of the application across its lifecycle.
Second, we anticipate APM will become increasingly embedded into an automation cycle, where it will be used for dynamic provisioning, dynamic infrastructure configuration, and to create feedback loops. So if there is a performance problem picked up by the APM system, that will feed back into your data center automation system, which may reprovision some resources so that the application starts to perform well again.
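The feedback loop described here can be sketched in a few lines. Everything below is illustrative and assumed — the threshold, the sizing rule, and the callables standing in for the APM system and the data center automation layer are not from any particular product:

```python
def remediation_pass(read_p95_latency_ms, scale_out,
                     threshold_ms=500.0, max_extra_replicas=10):
    """One pass of a hypothetical APM-to-automation feedback loop.

    read_p95_latency_ms: callable returning the current p95 latency
        as measured by the APM system (name is illustrative).
    scale_out: callable asking the automation layer for N extra
        replicas of the affected service.

    Returns the number of replicas requested (0 if healthy).
    """
    latency = read_p95_latency_ms()
    if latency > threshold_ms:
        # Size the response to how far past the threshold we are, capped.
        extra = min(int(latency // threshold_ms), max_extra_replicas)
        scale_out(extra)
        return extra
    return 0

# Hypothetical run: latency of 1200 ms against a 500 ms threshold
# triggers a request for additional capacity.
requests = []
remediation_pass(lambda: 1200.0, requests.append)
```

In practice this loop would run continuously, and the "scale out" action could equally be a reconfiguration or a traffic-shifting step; the point is simply that APM output becomes an input to the automation system rather than ending at a dashboard.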
Third, we think there will be increased integration between the monitoring and management of business applications and the monitoring and management of what are now considered to be more network services, such as VoIP and IP video. We anticipate that organizations will use more and more of the same stack to monitor both types of services.
Finally, I think we will see more integration between APM and overall business process monitoring, where applications and business processes will become more entangled over time and hence will need to be managed in conjunction.
APM: The Magic Quadrant mentioned some statistics that show a 10-15% rise in APM adoption. Do you see that continuing to rise?
Barring economic apocalypse, I think it is safe to predict that the adoption rate will continue over the next 4 or 5 years. I think it is difficult to see beyond that because of the general changing nature of IT itself. The relationship between users and the IT environment will change so significantly over the next 10 years that it may be hard to identify something as an application in 10 years.