5 Predictions for Application Performance Management in 2016
January 04, 2016

Srinivas Ramanathan
eG Innovations


One can safely say that Application Performance Management (APM) will grow even further in importance in 2016 as businesses turn to application software to run their key internal and external processes. But we can also expect some shifts in focus among APM purchasers and software vendors:

1. End User Experience Monitoring

End User Experience Monitoring is important, but it is not the only thing that APM must focus on and be measured by. Because so many key internal business processes are run by software – e.g., day-end reconciliation, backend order fulfillment, chargeback and inventory tracking – a failure or slowdown of these services is business-affecting. So far, end-user response time has been regarded as the defining measure of an online business's performance and thus the foundational APM requirement. But the performance of key business processes ultimately affects user experience as well, so tracking these processes proactively and detecting issues before users are affected will grow as a primary requirement.
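As a rough illustration of what proactive tracking of a backend business process might look like, here is a minimal Python sketch that checks whether a batch job has completed within its service-level window and raises an alert before downstream users notice. The job name, SLA threshold, and alert action are illustrative assumptions, not features of any particular APM product.

```python
# Minimal sketch: proactively checking a backend batch job against its SLA,
# so a slowdown is flagged before downstream users are affected.
# Job name, SLA, and alert action are illustrative assumptions.
from datetime import datetime, timedelta


def check_batch_job(last_completed: datetime, sla: timedelta, job_name: str) -> bool:
    """Return True if the job finished within its SLA window; otherwise alert."""
    elapsed = datetime.utcnow() - last_completed
    if elapsed > sla:
        # In a real deployment this would page the operations team or open a ticket.
        print(f"ALERT: {job_name} last completed {elapsed} ago (SLA: {sla})")
        return False
    return True


if __name__ == "__main__":
    # Example: day-end reconciliation expected to complete at least once every 24 hours.
    check_batch_job(
        last_completed=datetime.utcnow() - timedelta(hours=30),
        sla=timedelta(hours=24),
        job_name="day-end reconciliation",
    )
```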

2. Transaction Tracing

Transaction tracing is important for rapid diagnosis of application performance problems, but it is not sufficient by itself for successful APM. Transaction tracing – i.e., the ability to follow a transaction through all its processing stages and determine which stage is responsible for slowdowns – is a key part of APM, so much so that in recent years transaction tracing and APM have become virtually synonymous. But it is not the only requirement. For example, if the backend database slows down, every transaction trace will point to slow database queries, which does not by itself reveal where the problem lies. Automating root cause analysis falls outside the purview of transaction tracing, yet it is a critical function, so it must be addressed either separately or, better, holistically.
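To make the idea concrete, the sketch below shows the essence of transaction tracing: timing each processing stage of a request and reporting which stage dominates the response time. The stage names and simulated delays are illustrative assumptions; real APM agents instrument these boundaries automatically through bytecode or library hooks rather than explicit code.

```python
# Minimal sketch of transaction tracing: time each processing stage of a
# request and report which stage dominates the response time.
# Stage names and sleep() delays stand in for real work and are illustrative only.
import time
from contextlib import contextmanager

stage_timings = {}


@contextmanager
def trace_stage(name):
    start = time.perf_counter()
    try:
        yield
    finally:
        stage_timings[name] = time.perf_counter() - start


def handle_transaction():
    with trace_stage("web tier"):
        time.sleep(0.02)   # simulated request parsing / templating
    with trace_stage("app tier"):
        time.sleep(0.05)   # simulated business logic
    with trace_stage("database"):
        time.sleep(0.30)   # simulated slow query


if __name__ == "__main__":
    handle_transaction()
    for stage, elapsed in stage_timings.items():
        print(f"{stage}: {elapsed * 1000:.0f} ms")
    slowest = max(stage_timings, key=stage_timings.get)
    print(f"Slowest stage: {slowest}")
```

Note that this kind of trace only localizes the slowdown to a stage; as argued above, if the database tier itself is degraded, every trace will blame database queries, and deeper diagnosis of that tier is still needed.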

3. Deep-Dive Visibility

Transaction visibility must be augmented with deep-dive visibility into every tier of the underlying infrastructure. Troubleshooting performance issues requires extensive expertise in each tier of the infrastructure, and enabling diagnosis with minimal human intervention requires a great deal of automation. APM tools must therefore augment user experience monitoring and transaction tracing with in-depth insights and domain expertise for every layer and tier of the infrastructure. They should also be easy to set up and use, to remove potential barriers to adoption.

4. Virtualization and Cloud

APM tools must become virtualization- and cloud-aware. Virtualization and cloud computing cannot be treated as yet another infrastructure silo: performance issues in the virtualization or cloud tier affect application performance. Hence, APM tools must discover and correlate virtualization performance with that of the individual application component tiers.

5. Collaborative Management

Organizations will move from silo management to collaborative management. Given the number of tiers an application cuts across, it will no longer be practical for individual administrators to focus only on the tiers they operate and control. For the application to support the business well, the entire application operations team must function as a cohesive unit. Application performance issues will be correlated across the different tiers of the infrastructure so that problems can be resolved quickly, which requires the unified, correlated visibility that APM tools will provide. Development and operations will standardize on the same tool sets, so problems detected by operations can be rapidly remediated by the exact development or operations group where a performance issue originates.

Srinivas Ramanathan is CEO and Founder of eG Innovations.

