eG Innovations 2018 Predictions: Greater Things Expected From APM
January 11, 2018

Vinod Mohan
eG Innovations


Application performance monitoring (APM) has become a must-have technology for IT organizations. In today’s era of digital transformation, distributed computing and cloud-native services, APM tools enable IT organizations to measure the real experience of users, trace business transactions to identify slowdowns and deliver the code-level visibility needed for optimizing the performance of applications.

2018 will see the requirements and expectations from APM solutions increase in the following ways:

Application and infrastructure monitoring will need to converge


Today, application performance monitoring tools focus mainly on code-level performance. These tools are effective when performance bottlenecks are in the application code itself (e.g., inefficient method calls, poorly designed database queries, or a slow external web service). At the same time, application performance also depends on the IT infrastructure the application runs on. A bottleneck in any infrastructure tier – the database servers, the virtualization platform supporting the application, the storage tier, or infrastructure services such as Active Directory and file access – results in application slowdowns. For example, a storage bottleneck can make database queries slow, which in turn degrades the user experience.

Until now, IT organizations have used separate tools for application and infrastructure monitoring. This forces performance diagnosis across application and infrastructure tiers to be performed manually. Not only is this process slow, it also requires domain experts to be pulled in for troubleshooting.

Prediction: Application managers and IT teams will realize that they need contextual visibility into how an infrastructure problem affects application performance. 2018 will see an increasing demand for converged monitoring tools that include both application and infrastructure monitoring capabilities. These tools will be expected to cross-correlate across application and infrastructure tiers automatically to pinpoint the cause of application slowness: is it in the application code, or is it due to one of the infrastructure tiers, and why?
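
To make the idea concrete, the short Python sketch below shows one way a converged tool might cross-correlate tiers: it ranks infrastructure metrics by how strongly they track application response time over the same interval. The metric names and sample values are hypothetical, and real products correlate far richer telemetry with topology awareness; this only illustrates the principle.

from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical per-minute samples collected during a slowdown.
app_response_ms = [210, 230, 480, 900, 870, 410, 220]
infra_metrics = {
    "db_query_time_ms":     [12, 14, 95, 180, 170, 80, 13],
    "storage_latency_ms":   [2, 2, 30, 55, 52, 25, 2],
    "hypervisor_cpu_ready": [1, 1, 2, 2, 2, 1, 1],
}

# Rank infrastructure tiers by how closely their metrics track the slowdown.
for name, series in sorted(infra_metrics.items(),
                           key=lambda kv: pearson(app_response_ms, kv[1]),
                           reverse=True):
    print(f"{name}: r = {pearson(app_response_ms, series):.2f}")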

Going beyond user experience monitoring will be important

Today, there is a lot of emphasis on user experience monitoring. IT organizations are deploying synthetic and real user monitoring tools to measure user experience and report on compliance with SLAs.

While undoubtedly important, user experience monitoring focuses only on the interactive portion of a web application. In many applications, much of the processing required to fulfill a user request happens in the background, asynchronously. Failure or slowness of these non-interactive processing tasks can affect web application performance even more than problems in the interactive tasks.

Prediction: IT organizations will demand that monitoring tools be capable of monitoring the non-interactive processing tasks associated with their web applications. Since there may not be common interfaces for monitoring such non-interactive tasks, monitoring tools will need to be extensible so IT administrators can add custom monitoring for their applications.
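
As a simple illustration of such extensibility, here is a minimal Python sketch of a custom check for a non-interactive task. It assumes, purely for the example, that the application drops pending work items as files into a queue directory; the path and threshold are hypothetical. A monitoring tool could run a script like this on a schedule and turn its output into a metric and an alert.

import os
import sys
import time

QUEUE_DIR = "/var/myapp/outbox"   # hypothetical location of queued work items
MAX_AGE_SECONDS = 15 * 60         # flag items that have waited more than 15 minutes

def check_queue(queue_dir, max_age):
    """Return (pending item count, count of items older than max_age)."""
    now = time.time()
    pending = [os.path.join(queue_dir, f) for f in os.listdir(queue_dir)]
    stale = [p for p in pending if now - os.path.getmtime(p) > max_age]
    return len(pending), len(stale)

if __name__ == "__main__":
    try:
        pending, stale = check_queue(QUEUE_DIR, MAX_AGE_SECONDS)
    except FileNotFoundError:
        print("CRITICAL: queue directory not found")
        sys.exit(2)
    print(f"queue_depth={pending} stale_items={stale}")
    sys.exit(1 if stale else 0)   # non-zero exit tells the scheduler something is wrong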

Domain expertise will become necessary

Increasingly, APM tools will be measured on the return on investment they offer. When a problem occurs, organizations will ask whether the APM tool alerted them to it and whether it pinpointed the cause. Machine learning, pattern recognition, auto-correlation and other analytics capabilities are important; however, without the right metrics being collected, even the most intelligent monitoring tool may not succeed.

Most APM tools today focus on code-level visibility alone. The application is instrumented for monitoring in a way that is transparent to its code – e.g., through bytecode instrumentation. The metrics collected this way are not application-specific, yet accurate diagnosis requires application-specific monitoring. For example, an SAP application may experience slowness because the limit of work processes configured at the SAP server level has been reached. Likewise, SharePoint slowness can occur because growth of the content database is slowing query processing. Such application-specific insights are essential for accurate and timely problem diagnosis.

Prediction: 2018 will see application owners demand more application domain-specific KPIs from monitoring solutions. This will force APM vendors to embed domain expertise into their tools to help administrators quickly and easily troubleshoot problems with business-critical packaged applications such as SAP, SharePoint, PeopleSoft, Siebel and so on.
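
A minimal sketch of what such domain-specific checks could look like is below. The collection functions are hypothetical stubs that return canned numbers; in practice they would query the application's own management interfaces (SAP work process statistics, the SharePoint content database, and so on). The thresholds are illustrative, not recommendations.

def collect_sap_work_processes():
    # Hypothetical stub: busy vs. configured dialog work processes on one instance.
    return {"busy": 19, "configured": 20}

def collect_content_db_size_gb():
    # Hypothetical stub: current size of a SharePoint content database in GB.
    return 182.0

def evaluate_kpis():
    findings = []

    wp = collect_sap_work_processes()
    utilization = wp["busy"] / wp["configured"]
    if utilization >= 0.9:    # illustrative threshold
        findings.append(f"SAP work processes {utilization:.0%} used "
                        f"({wp['busy']}/{wp['configured']}) - user requests may queue")

    db_size = collect_content_db_size_gb()
    if db_size >= 175:        # illustrative soft limit in GB
        findings.append(f"SharePoint content DB at {db_size:.0f} GB - "
                        "query processing is likely slowing down")

    return findings

if __name__ == "__main__":
    for finding in evaluate_kpis():
        print("WARNING:", finding)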

APM must focus on reporting

Until now, APM tools have mainly been used for real-time problem analysis and diagnosis. IT teams analyze user experience from different locations to point to network problems that affect application performance. They analyze application request flow topology graphs to see the time a request spends at each tier and to highlight the problematic tier.

But historical performance analytics have not received much attention. APM tools collect volumes of data, and the addition of infrastructure monitoring will only increase the amount stored. IT organizations will realize that operational analytics based on this historical data can be extremely valuable – for application and infrastructure optimization, right-sizing, capacity planning and so on.

Prediction: In 2018, IT organizations will demand that monitoring tools offer built-in capabilities that deliver actionable insights from the collected data. Prescriptive analytics that spell out the actions IT organizations need to take to get more out of their IT and application investments will be required. APM tools will also be expected to incorporate predictive capabilities that forewarn of impending problems. Presenting this analysis in a simple-to-understand format, so that even non-experts can interpret and act on the advice, will be important.
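
To illustrate the predictive piece in the simplest possible terms, the sketch below fits a straight-line trend to historical samples of one capacity metric (hypothetical daily disk usage figures) and projects when it will hit a limit. Production-grade analytics use far more robust models that handle seasonality and changepoints; this only shows the direction of travel.

def linear_fit(ys):
    """Least-squares slope and intercept for values sampled at x = 0, 1, 2, ..."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

disk_usage_gb = [510, 514, 519, 522, 528, 531, 537, 541]   # hypothetical daily samples
limit_gb = 600                                             # hypothetical capacity limit

slope, intercept = linear_fit(disk_usage_gb)
if slope <= 0:
    print("No growth trend detected.")
else:
    current = intercept + slope * (len(disk_usage_gb) - 1)
    days_to_limit = (limit_gb - current) / slope
    print(f"Growing ~{slope:.1f} GB/day; projected to reach {limit_gb} GB "
          f"in about {days_to_limit:.0f} days.")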

Vinod Mohan is Senior Manager, Product Marketing, at eG Innovations