eG Innovations 2018 Predictions: Greater Things Expected From APM
January 11, 2018

Vinod Mohan
eG Innovations

Application performance monitoring (APM) has become a must-have technology for IT organizations. In today’s era of digital transformation, distributed computing and cloud-native services, APM tools enable IT organizations to measure the real experience of users, trace business transactions to identify slowdowns and deliver the code-level visibility needed for optimizing the performance of applications.

In 2018, the requirements and expectations placed on APM solutions will increase in the following ways:

Application and infrastructure monitoring will need to converge

Today, application performance monitoring tools focus mainly on code-level performance. These tools are effective when performance bottlenecks are in the application code (e.g., an inefficient method call, a poorly designed database query, or a slow external web service). At the same time, the performance of applications also depends on the IT infrastructure they operate on. Performance bottlenecks in any of the infrastructure tiers – database servers, the virtualization platform supporting the application, the storage tier, or infrastructure services such as Active Directory and file access – result in application slowdowns. For example, a storage bottleneck can cause database queries to be slow, which in turn affects the user experience.

Until now, IT organizations have been using separate tools for application and infrastructure monitoring. This forces performance diagnosis across application and infrastructure tiers to be performed manually. Not only is this process slow, it also requires domain experts to be involved in troubleshooting.

Prediction: Application managers and IT teams will realize that they need contextual visibility into how an infrastructure problem affects application performance. 2018 will see an increasing demand for converged monitoring tools that include both application and infrastructure monitoring capabilities. These tools will be expected to cross-correlate across application and infrastructure tiers automatically to pinpoint the cause of application slowness: is it in the application code, or is it due to one of the infrastructure tiers, and why?
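As a rough illustration of what such cross-correlation could look like, here is a minimal sketch that ranks infrastructure metrics by how strongly they correlate with application response time over the same time window. All metric names and values are hypothetical, not any vendor's API; real tools would use far richer, topology-aware analysis.

```python
# Minimal sketch: rank infrastructure tiers by correlation with app latency.
# All metric names and sample values are hypothetical illustrations.
from statistics import correlation  # Python 3.10+

# One sample per minute over the same window (hypothetical values).
app_response_ms = [120, 135, 150, 310, 620, 640, 180, 140]

infra_metrics = {
    "db_query_time_ms":         [40, 45, 50, 210, 480, 500, 60, 50],
    "storage_latency_ms":       [5, 6, 6, 45, 90, 95, 8, 6],
    "hypervisor_cpu_ready_pct": [2, 2, 3, 3, 2, 3, 2, 2],
}

# Pearson correlation of each infrastructure metric with app latency.
# A strong correlate is the first place to look, not proof of causation.
ranked = sorted(
    ((correlation(series, app_response_ms), name)
     for name, series in infra_metrics.items()),
    reverse=True,
)
for r, name in ranked:
    print(f"{name}: r = {r:.2f}")
```

In this toy data, database query time and storage latency track the latency spike closely, pointing diagnosis toward the storage tier rather than the application code.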

Going beyond user experience monitoring will be important

Today, there is a lot of emphasis on user experience monitoring. IT organizations are deploying synthetic and real user experience monitoring tools to measure user experience and report on compliance with SLAs.

While undoubtedly important, user experience monitoring focuses only on the interactive portion of a web application. In many applications, the processing required to fulfill a user request happens in the background, asynchronously. Failure or slowness of these non-interactive processing tasks can affect web application performance even more than problems in the interactive tasks.

Prediction: IT organizations will demand that monitoring tools be capable of monitoring the non-interactive processing tasks associated with their web applications. Since there may not be common interfaces for monitoring such non-interactive tasks, monitoring tools will need to be extensible so IT administrators can add custom monitoring for their applications.
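To make the idea of an extensible, custom check concrete, the sketch below monitors a hypothetical background batch job by reading a status file and alerting when its queue backs up or it stops completing on schedule. The file path, status fields and thresholds are all assumptions for illustration.

```python
# Minimal sketch of a custom check for a non-interactive task: alert if a
# (hypothetical) background job's queue backs up or its runs go stale.
import json
import time
from pathlib import Path

STATUS_FILE = Path("/var/run/order-export/status.json")  # hypothetical path
MAX_QUEUE_DEPTH = 500        # assumed threshold
MAX_STALENESS_SEC = 900      # assumed: the job should succeed every 15 min

def check_background_job() -> list[str]:
    """Return a list of alert messages; an empty list means healthy."""
    status = json.loads(STATUS_FILE.read_text())
    alerts = []
    if status["queue_depth"] > MAX_QUEUE_DEPTH:
        alerts.append(f"Queue backlog: {status['queue_depth']} items")
    staleness = time.time() - status["last_success_epoch"]
    if staleness > MAX_STALENESS_SEC:
        alerts.append(f"No successful run for {staleness / 60:.0f} minutes")
    return alerts

if __name__ == "__main__":
    for alert in check_background_job():
        print("ALERT:", alert)
```

A monitoring tool that lets administrators plug in scripts like this can surface background-processing failures that user experience monitoring alone would miss.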

Domain expertise will become necessary

Increasingly, APM tools will be measured on the basis of the return on investment they offer. When a problem occurs, organizations will ask whether the APM tool alerted them to it and whether it pinpointed the cause. Machine learning, pattern recognition, auto-correlation and other analytics capabilities are important; however, without the right metrics being collected, even the most intelligent monitoring tool may not be successful.

Most APM tools today focus on code-level visibility alone. The application is instrumented in a way that is transparent to it – e.g., through byte code instrumentation – and the metrics collected are not application-specific. But for diagnosis to be accurate, application-specific monitoring is necessary. For example, an SAP application may experience slowness because the limit of work processes configured at the SAP server level has been reached. Likewise, SharePoint slowness can occur because growth of the content database is slowing query processing. Such application-specific insights are what make accurate and timely problem diagnosis possible.
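In essence, an application-specific check often reduces to comparing a domain KPI against a configured limit. The sketch below illustrates this with the SAP work process example; how the KPI would actually be fetched (e.g., through SAP's management interfaces) is stubbed out, and all names and numbers are assumptions.

```python
# Minimal sketch of a domain-specific KPI check: usage vs. configured limit.
# The fetch function is a stub; a real monitor would query the application
# through its own management interface.

WARN_AT = 0.80   # assumed: warn at 80% of the configured limit
CRIT_AT = 0.95   # assumed: critical at 95%

def fetch_work_process_usage() -> tuple[int, int]:
    """Stub: return (in_use, configured_limit) for SAP work processes."""
    return 19, 20  # hypothetical values

def evaluate(in_use: int, limit: int) -> str:
    ratio = in_use / limit
    if ratio >= CRIT_AT:
        return f"CRITICAL: {in_use}/{limit} work processes in use"
    if ratio >= WARN_AT:
        return f"WARNING: {in_use}/{limit} work processes in use"
    return f"OK: {in_use}/{limit} work processes in use"

print(evaluate(*fetch_work_process_usage()))
```

The hard part, and where domain expertise matters, is knowing which KPIs to collect and what limits apply for each application.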

Prediction: 2018 will see application owners demand more application domain-specific KPIs from monitoring solutions. This will force APM vendors to embed domain expertise into their tools to help administrators quickly and easily troubleshoot problems with business-critical packaged applications such as SAP, SharePoint, PeopleSoft, Siebel and so on.

APM must focus on reporting

Until now, APM tools have been used mainly for real-time problem analysis and diagnosis. IT teams analyze user experience from different locations to identify network problems that affect application performance. They analyze application request flow topology graphs to see the time spent processing a request at each tier and to isolate the problematic tier.

But historical performance analytics have not received much attention. APM tools collect volumes of data, and the addition of infrastructure monitoring to these tools will increase the amount of data stored even further. IT organizations will realize that operational analytics based on analysis of historical data can be extremely valuable – for application and infrastructure optimization, right-sizing, capacity planning and so on.

Prediction: In 2018, IT organizations will demand monitoring tools with built-in capabilities that deliver actionable insights based on analysis of the collected data. Prescriptive analytics that spell out the actions IT organizations need to take to get more out of their IT and application investments will be required. APM tools will also be expected to incorporate predictive capabilities that forewarn of impending problems. Presenting this analysis in a simple-to-understand format, so that even non-experts can interpret and act on the advice provided, will be important.
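One simple form of such predictive analysis is extrapolating a resource's growth trend to estimate when it will hit capacity. The sketch below fits a straight line to hypothetical daily disk-usage samples using the standard library; production tools would apply far more robust models, but the principle is the same.

```python
# Minimal sketch: linear-trend forecast of days until a disk fills up.
# The usage samples and capacity are hypothetical.
from statistics import linear_regression  # Python 3.10+

capacity_gb = 500.0
daily_usage_gb = [310, 316, 321, 329, 334, 342, 347]  # one sample per day

days = list(range(len(daily_usage_gb)))
slope, intercept = linear_regression(days, daily_usage_gb)

if slope > 0:
    days_until_full = (capacity_gb - intercept) / slope - days[-1]
    print(f"Growing ~{slope:.1f} GB/day; full in ~{days_until_full:.0f} days")
else:
    print("No growth trend detected")
```

Turning output like this into a plain-language recommendation ("add storage within three weeks") is exactly the kind of prescriptive, non-expert-friendly analysis predicted above.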

Vinod Mohan is Senior Manager, Product Marketing, at eG Innovations