eG Innovations 2018 Predictions: Greater Things Expected From APM
January 11, 2018

Vinod Mohan
eG Innovations

Application performance monitoring (APM) has become a must-have technology for IT organizations. In today’s era of digital transformation, distributed computing and cloud-native services, APM tools enable IT organizations to measure the real experience of users, trace business transactions to identify slowdowns and deliver the code-level visibility needed for optimizing the performance of applications.

2018 will see the requirements and expectations from APM solutions increase in the following ways:

Application and infrastructure monitoring will need to converge

Today, application performance monitoring tools focus mainly on code-level performance. These tools are effective when performance bottlenecks are in the application code (e.g., an inefficient method call, a poorly designed database query, or a slow external web service). At the same time, the performance of applications also depends on the IT infrastructure they operate on. Performance bottlenecks in any of the infrastructure tiers – database servers, the virtualization platform supporting the application, the storage tier, or infrastructure services such as Active Directory and file access – result in application slowdowns. For example, a storage bottleneck can slow database queries, which in turn degrades the user experience.
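
To make the code-level case concrete, consider the classic N+1 query pattern – one database round trip per record instead of a single aggregated query. The sketch below (plain Python using the standard library's sqlite3 module; the table and function names are invented for illustration) shows the kind of inefficiency an APM tool would flag:

```python
import sqlite3

# Build a small in-memory database purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
conn.executemany("INSERT INTO orders (customer_id) VALUES (?)",
                 [(i % 10,) for i in range(1000)])

def order_counts_slow(customer_ids):
    # Code-level bottleneck: one query per customer (the N+1 pattern).
    return {cid: conn.execute(
        "SELECT COUNT(*) FROM orders WHERE customer_id = ?", (cid,)
    ).fetchone()[0] for cid in customer_ids}

def order_counts_fast(customer_ids):
    # Same result from a single aggregated query.
    placeholders = ",".join("?" for _ in customer_ids)
    rows = conn.execute(
        f"SELECT customer_id, COUNT(*) FROM orders "
        f"WHERE customer_id IN ({placeholders}) GROUP BY customer_id",
        list(customer_ids)).fetchall()
    return dict(rows)

assert order_counts_slow(range(10)) == order_counts_fast(range(10))
```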

Until now, IT organizations have used separate tools for application and infrastructure monitoring. This forces performance diagnosis across application and infrastructure tiers to be performed manually. Not only is this process slow and labor-intensive, it also requires domain experts to be involved in troubleshooting.

Prediction: Application managers and IT teams will realize that they need contextual visibility into how an infrastructure problem affects application performance. 2018 will see an increasing demand for converged monitoring tools that include both application and infrastructure monitoring capabilities. These tools will be expected to cross-correlate across application and infrastructure tiers automatically to pinpoint the cause of application slowness: is it in the application code, or is it due to one of the infrastructure tiers, and why?
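
One simplified way to picture such cross-correlation – this is an illustrative sketch with invented metric names and sample values, not any vendor's actual algorithm – is to rank infrastructure metrics by how strongly they co-vary with application response time over the same time window:

```python
from math import sqrt

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length series.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-minute samples collected by a converged monitor.
app_response_ms = [210, 225, 560, 580, 230, 215, 610, 595]
infra_metrics = {
    "db_disk_latency_ms": [4, 5, 38, 41, 5, 4, 44, 40],
    "host_cpu_percent":   [35, 40, 42, 38, 36, 41, 39, 37],
    "ad_auth_time_ms":    [12, 11, 13, 12, 14, 11, 12, 13],
}

# Rank infrastructure metrics by how strongly they track the slowdown.
ranked = sorted(infra_metrics.items(),
                key=lambda kv: abs(pearson(app_response_ms, kv[1])),
                reverse=True)
for name, series in ranked:
    print(f"{name}: r = {pearson(app_response_ms, series):+.2f}")
```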

Going beyond user experience monitoring will be important

Today, there is a lot of emphasis on user experience monitoring. IT organizations are deploying synthetic and real user monitoring tools to measure user experience and report on compliance with SLAs.
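
On the synthetic side, a monitor is essentially a scripted probe run on a schedule from one or more locations. A minimal sketch (Python standard library only; the SLA threshold and URL are placeholders) might look like this:

```python
import time
import urllib.request

SLA_SECONDS = 2.0  # hypothetical response-time SLA

def synthetic_check(url: str) -> dict:
    """Issue a scripted request and measure availability and response time."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            status = resp.status
    except Exception as exc:
        return {"url": url, "ok": False, "error": str(exc)}
    elapsed = time.monotonic() - start
    return {"url": url, "ok": status == 200 and elapsed <= SLA_SECONDS,
            "status": status, "seconds": round(elapsed, 3)}

print(synthetic_check("https://example.com/"))
```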

While undoubtedly important, user experience monitoring focuses only on the interactive portion of a web application. In many applications, the processing required to fulfill a user request happens in the background, asynchronously. Failure or slowness of these non-interactive processing tasks can affect web application performance even more than the interactive tasks do.

Prediction: IT organizations will demand that monitoring tools be capable of monitoring the non-interactive processing tasks associated with their web applications. Since there may not be common interfaces for monitoring such non-interactive tasks, monitoring tools will need to be extensible so IT administrators can add custom monitoring for their applications.
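
What such an extension looks like depends entirely on the application, but as a hypothetical example – the job-queue table and thresholds below are invented – a custom monitor might report backlog depth and the age of the oldest unprocessed task:

```python
import sqlite3
import time

# Hypothetical job queue backing a web application's background processing.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, "
             "status TEXT, enqueued_at REAL)")
now = time.time()
conn.executemany("INSERT INTO jobs (status, enqueued_at) VALUES (?, ?)",
                 [("pending", now - 900), ("pending", now - 30),
                  ("done", now - 3600)])

def check_background_queue(max_depth=100, max_age_seconds=600):
    """Custom monitor: alert if the backlog is deep or jobs sit too long."""
    depth, oldest = conn.execute(
        "SELECT COUNT(*), MIN(enqueued_at) FROM jobs WHERE status = 'pending'"
    ).fetchone()
    age = time.time() - oldest if oldest else 0.0
    return {"pending_jobs": depth, "oldest_pending_seconds": round(age),
            "alert": depth > max_depth or age > max_age_seconds}

print(check_background_queue())
```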

Domain expertise will become necessary

Increasingly, APM tools will be measured on the basis of the return on investment they offer. When a problem occurs, organizations will analyze whether the APM tool alerted them to it and whether it pinpointed the cause. The use of machine learning, pattern recognition, auto-correlation and other analytics capabilities is important; however, without the right metrics being collected, even the most intelligent monitoring tool may not be successful.

Most APM tools today focus on code-level visibility alone. The application is instrumented for monitoring in a way that is transparent to the application – e.g., through byte code instrumentation. The metrics collected are not application-specific, yet for diagnosis to be accurate and timely, application-specific insights are necessary. For example, an SAP application may slow down because the limit on work processes configured at the SAP server level has been reached. Likewise, SharePoint slowness can occur because growth of the content database is slowing query processing.
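
The distinction can be shown in miniature. In the sketch below, a Python decorator stands in for byte code instrumentation (the function name and workload are invented for illustration): it can time any call without touching the application, but the metric it produces says nothing about application-level causes such as an exhausted work-process limit:

```python
import functools
import time

def instrumented(fn):
    """Transparent timing wrapper, loosely analogous to byte code
    instrumentation: the application code is unchanged, and the metric
    collected (elapsed time) is generic, not application-specific."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{fn.__name__} took {elapsed_ms:.1f} ms")
    return wrapper

@instrumented
def render_report():
    time.sleep(0.05)  # stand-in for real work

render_report()
# The wrapper can say render_report was slow, but not *why* – e.g. that an
# application-level limit (such as SAP's configured work processes) was hit.
# That diagnosis needs application-specific metrics collected separately.
```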

Prediction: 2018 will see application owners demand more application domain-specific KPIs from monitoring solutions. This will force APM vendors to embed domain expertise into their tools to help administrators quickly and easily troubleshoot problems with business-critical packaged applications such as SAP, SharePoint, PeopleSoft, Siebel and so on.

APM must focus on reporting and analytics

Until now, APM tools have mainly been used for real-time problem analysis and diagnosis. IT teams analyze user experience from different locations to pinpoint network problems that affect application performance. They analyze application request flow topology graphs to see the time spent processing a request at each tier and to identify the problematic tier.
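
The arithmetic behind such a topology view is straightforward. With invented span data, the sketch below aggregates the time a single request spent in each tier and surfaces where the request actually went slow:

```python
# Hypothetical spans from one distributed request trace: (tier, milliseconds).
spans = [
    ("web frontend", 40), ("app server", 120),
    ("app server", 95), ("database", 870), ("database", 15),
]

# Aggregate time per tier and flag where the request spent its time.
per_tier = {}
for tier, ms in spans:
    per_tier[tier] = per_tier.get(tier, 0) + ms

total = sum(per_tier.values())
for tier, ms in sorted(per_tier.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{tier:>12}: {ms:5d} ms ({100 * ms / total:.0f}% of request)")
```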

But historical performance analytics have not received much attention. APM tools collect volumes of data, and the addition of infrastructure monitoring will only increase the amount of data stored. IT organizations will realize that operational analytics based on historical data can be extremely valuable – for application and infrastructure optimization, right-sizing, capacity planning and so on.

Prediction: In 2018, IT organizations will demand monitoring tools with built-in capabilities that deliver actionable insights from the collected data. Prescriptive analytics that spell out the actions IT organizations need to take to get more out of their IT and application investments will be required. APM tools will also be expected to incorporate predictive capabilities that forewarn of impending problems. Presenting this analysis in a simple, easy-to-understand format – so even non-experts can interpret and act on the advice – will be important.
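
As a minimal illustration of the predictive piece – a naive least-squares trend over invented disk-usage samples, far simpler than what a production tool would do – historical data can forewarn of capacity exhaustion well before it happens:

```python
def linear_forecast(samples, horizon):
    """Ordinary least-squares trend line over (day, value) samples,
    extrapolated `horizon` days past the last sample."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    last_x = samples[-1][0]
    return slope * (last_x + horizon) + intercept

# Hypothetical daily disk-usage samples (day index, percent full).
disk_usage = [(0, 62), (7, 65), (14, 69), (21, 72), (28, 76)]

projected = linear_forecast(disk_usage, horizon=60)
if projected >= 90:
    print(f"Warning: disk trend crosses 90% within 60 days "
          f"(naive linear projection: {projected:.0f}%)")
```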

Vinod Mohan is Senior Manager, Product Marketing, at eG Innovations