5 Predictions for Application Performance Management in 2016

Srinivas Ramanathan

One can safely say that Application Performance Management (APM) will grow even further in importance in 2016 as businesses turn to application software to run their key internal and external processes. But we can also expect some shifts in the focus of APM buyers and software vendors in 2016:

1. End User Experience Monitoring

End User Experience Monitoring is important, but it is not the only thing APM must focus on and be measured by. Because so many key internal business processes are run by software (e.g., day-end reconciliation, backend order fulfillment, chargeback, and inventory tracking), a failure or slowdown of these services directly affects the business. Until now, end-user response time has been regarded as the defining measure of an online business's performance and thus the foundational APM requirement. But the performance of key business processes ultimately affects user experience too, so tracking these processes proactively and detecting issues before users are affected will grow into a primary requirement.
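
As a rough illustration of what monitoring such a process can look like, here is a minimal Python sketch that checks whether a day-end reconciliation job completed successfully before its deadline. The status file, its fields, and the 06:00 deadline are all hypothetical.

```python
# A minimal sketch of proactive business-process monitoring: verify that a
# nightly reconciliation job finished successfully and on time, and raise an
# alert before any user or downstream process notices. The status-file
# format, field names, and 06:00 deadline are hypothetical.
import json
from datetime import datetime, time

SLA_DEADLINE = time(6, 0)  # reconciliation must complete by 06:00

def check_reconciliation(status_file="job_status.json"):
    """Return (ok, message) for the day-end reconciliation job."""
    try:
        with open(status_file) as f:
            status = json.load(f)
    except FileNotFoundError:
        return False, "no status file found: job may never have started"

    if status.get("state") != "success":
        return False, f"job ended in state '{status.get('state')}'"

    finished = datetime.fromisoformat(status["finished_at"])
    if finished.time() > SLA_DEADLINE:
        return False, f"job finished late, at {finished.time()}"

    return True, "reconciliation completed within SLA"

ok, message = check_reconciliation()
if not ok:
    print(f"ALERT: {message}")  # in practice, notify the operations team
```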

2. Transaction Tracing

Transaction tracing is important for rapid diagnosis of application performance problems, but it is not sufficient by itself for successful APM. Transaction tracing, i.e., the ability to follow a transaction through all of its processing stages and determine which stage is responsible for a slowdown, is such a central part of APM that in recent years the two have become virtually synonymous. But while transaction tracing is key to good APM, it is not the only requirement. For example, if the backend database slows down, every transaction will show slow database queries, which does little to pinpoint where the underlying problem lies. Automating root cause analysis is outside the purview of transaction tracing, yet it is a critical function, so it must be addressed either separately or, better, holistically.
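
To make the idea concrete, the following minimal Python sketch times each processing stage of a transaction and reports the slowest one; the stage names and simulated delays are illustrative, not any particular vendor's tracing API.

```python
# A minimal sketch of per-stage transaction tracing: time each stage a
# transaction passes through and report the slowest. Stage names and the
# simulated work (time.sleep) are purely illustrative.
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def stage(name):
    """Record the wall-clock time spent in one processing stage."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

# One transaction flowing through three tiers:
with stage("web"):
    time.sleep(0.01)   # request routing and rendering
with stage("app"):
    time.sleep(0.02)   # business logic
with stage("database"):
    time.sleep(0.15)   # the query every transaction waits on

slowest = max(timings, key=timings.get)
print(f"slowest stage: {slowest} ({timings[slowest] * 1000:.0f} ms)")
```

Note that this only localizes the slow stage. If the database tier itself is degraded, every trace will point at "database", and diagnosing which query, lock, or host is responsible requires the deeper root cause analysis described above.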

3. Deep-Dive Visibility

Transaction visibility must be augmented with deep-dive visibility into every tier of the underlying infrastructure. Troubleshooting performance issues requires extensive expertise in each tier of the infrastructure, and making diagnosis easy, with minimal human intervention, requires a great deal of automation. APM tools must therefore augment user experience monitoring and transaction tracing with in-depth insights and built-in domain expertise for every layer and tier of the infrastructure. These tools should also be easy to set up and use, to remove potential barriers to adoption.
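
As an example of the kind of tier-level telemetry involved, the sketch below collects basic host metrics using the psutil package (assumed to be installed); a real APM agent would layer tier-specific counters, such as JVM heap usage or database buffer cache hit ratios, on top of these.

```python
# A minimal sketch of deep-dive host metrics for a single tier, using the
# psutil library. A real APM agent would add tier-specific counters (JVM
# heap, buffer cache hit ratio, queue depths) on top of these basics.
import psutil

def collect_host_metrics():
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),    # sampled over 1 s
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }

for name, value in collect_host_metrics().items():
    # 90% is an arbitrary illustrative threshold, not a recommendation
    flag = "  <-- investigate" if value > 90 else ""
    print(f"{name}: {value:.1f}%{flag}")
```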

4. Virtualization and Cloud

APM tools must become virtualization- and cloud-aware. Virtualization and cloud computing cannot be treated as just another infrastructure silo: performance issues in the virtualization or cloud tier directly affect application performance. Hence, APM tools must discover and correlate virtualization performance with that of the individual application component tiers.
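
A toy example of such correlation: if application latency spikes at the same time the hypervisor reports high CPU-ready time, the suspect is the virtualization tier rather than the application code. The data series and thresholds below are fabricated for illustration.

```python
# A toy correlation of a hypervisor-level metric (CPU-ready %) with an
# application-level metric (response latency). The per-minute samples are
# fabricated; the thresholds are arbitrary and for illustration only.
vm_cpu_ready_pct = [1.0, 1.2, 0.9, 18.5, 22.0, 2.1]  # virtualization tier
app_latency_ms   = [120, 130, 125, 980, 1100, 140]   # application tier

for minute, (ready, latency) in enumerate(zip(vm_cpu_ready_pct, app_latency_ms)):
    if latency > 500 and ready > 10:
        print(f"minute {minute}: app slow ({latency} ms) while CPU-ready is "
              f"{ready}% -> suspect the virtualization tier, not the app")
```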

5. Collaborative Management

Organizations will move from silo management to collaborative management. Given the number of tiers an application cuts across, it will no longer be practical for individual administrators to focus only on the tiers they operate and control. For the application to support the business well, the entire application operations team must function as a cohesive unit. Application performance issues will be correlated across the different tiers of the infrastructure so that problems can be resolved quickly; this requires unified, correlated visibility into the entire infrastructure, which APM tools will provide. Development and operations will standardize on the same tool sets, so that problems detected by operations can be rapidly remediated by the exact team responsible for the tier where a performance issue originates.
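
One simple mechanical form this cross-tier correlation can take: group alerts that fire across tiers within a short window into a single incident, and look first at the deepest dependency. The tier names, alerts, timestamps, and window in this sketch are all hypothetical.

```python
# A minimal sketch of cross-tier alert correlation: alerts that fire within
# a short window are treated as one incident, and the deepest dependency
# (here, storage) is examined first as the probable origin. All tier names,
# alerts, timestamps, and the 10-second window are hypothetical.
DEPENDENCY_DEPTH = {"web": 0, "app": 1, "database": 2, "storage": 3}

alerts = [
    {"tier": "web",      "time": 100, "msg": "response time > 5 s"},
    {"tier": "database", "time": 98,  "msg": "query latency high"},
    {"tier": "storage",  "time": 95,  "msg": "disk latency spike"},
]

WINDOW = 10  # seconds: alerts this close together likely share one cause
first = min(a["time"] for a in alerts)
incident = [a for a in alerts if a["time"] - first <= WINDOW]
incident.sort(key=lambda a: DEPENDENCY_DEPTH[a["tier"]], reverse=True)

print("probable origin:", incident[0]["tier"], "-", incident[0]["msg"])
for a in incident[1:]:
    print("likely symptom: ", a["tier"], "-", a["msg"])
```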

Srinivas Ramanathan is CEO and Founder of eG Innovations.

The Latest

In MEAN TIME TO INSIGHT Episode 23, Shamus McGillicuddy, VP of Research, Network Infrastructure and Operations, at EMA discusses the NetOps labor shortage ... 

Technology management is evolving, and in turn, so is the scope of FinOps. The FinOps Foundation recently updated their mission statement from "advancing the people who manage the value of cloud" to "advancing the people who manage the value of technology." This seemingly small change solidifies a larger evolution: FinOps practitioners have organically expanded to be focused on more than just cloud cost optimization. Today, FinOps teams are largely — and quickly — expanding their job descriptions, evolving into a critical function for managing the full value of technology ...

Enterprises are under pressure to scale AI quickly. Yet despite considerable investment, adoption continues to stall. One of the most overlooked reasons is vendor sprawl ... In reality, no organization deliberately sets out to create sprawling vendor ecosystems. More often, complexity accumulates over time through well-intentioned initiatives, such as enterprise-wide digital transformation efforts, point solutions, or decentralized sourcing strategies ...

Nearly every conversation about AI eventually circles back to compute. GPUs dominate the headlines while cloud platforms compete for workloads and model benchmarks drive investment decisions. But underneath that noise, a quieter infrastructure challenge is taking shape. The real bottleneck in enterprise AI is not processing power, it is the ability to store, manage and retrieve the relentless volumes of data that AI systems generate, consume and multiply ...

The 2026 Observability Survey from Grafana Labs paints a vivid picture of an industry maturing fast, where AI is welcomed with careful conditions, SaaS economics are reshaping spending decisions, complexity remains a defining challenge, and open standards continue to underpin it all ...

The observability industry has an evolving relationship with AI. We're not skeptics, but it's clear that trust in AI must be earned ... In Grafana Labs' annual Observability Survey, 92% said they see real value in AI surfacing anomalies before they cause downtime. Another 91% endorsed AI for forecasting and root cause analysis. So while the demand is there, customers need it to be trustworthy, as the survey also found that the practitioners most enthusiastic about AI are also the most insistent on explainability ...

In the modern enterprise, the conversation around AI has moved past skepticism toward a stage of active adoption. According to our 2026 State of IT Trends Report: The Human Side of Autonomous AI, nearly 90% of IT professionals view AI as a net positive, and this optimism is well-founded. We are seeing agentic AI move beyond simple automation to actively streamlining complex data insights and eliminating the manual toil that has long hindered innovation. However, as we integrate these autonomous agents into our ecosystems, the fundamental DNA of the IT role is evolving ...

AI workloads require an enormous amount of computing power ... What's also becoming abundantly clear is just how quickly AI's computing needs are leading to enterprise systems failure. According to Cockroach Labs' State of AI Infrastructure 2026 report, enterprise systems are much closer to failure than their organizations realize. The report ... suggests AI scale could cause widespread failures in as little as one year — making it a clear risk for business performance and reliability.

The quietest week your engineering team has ever had might also be its best. No alarms going off. No escalations. No frantic Teams or Slack threads at 2 a.m. Everything humming along exactly as it should. And somewhere in a leadership meeting, someone looks at the metrics dashboard, sees a flat line of incidents and says: "Seems like things are pretty calm over there. Do we really need all those people?" ... I've spent many years in engineering, and this pattern keeps repeating ...

The gap is widening between what teams spend on observability tools and the value they receive amid surging data volumes and budget pressures, according to The Breaking Point for Observability Leaders, a report from Imply ...

5 Predictions for Application Performance Management in 2016

Srinivas Ramanathan

One can safely say that Application Performance Management (APM) will grow even further in importance in 2016 as businesses turn to application software to operate their key internal and external processes. But we can also expect some changes in the focus of APM purchasers and software vendors in 2016:

1. End User Experience Monitoring

End User Experience Monitoring is important but not necessarily the only thing that APM must focus on and be measured by. Because so many key internal businesses processes are run by software – e.g., day-end reconciliation, backend order fulfilment, chargeback and inventory tracking, etc., – failure or slowdown of these services is business-affecting. So far, end-user response time has been regarded as the defining measure of an online businesses' performance and thus the foundational APM requirement. But the performance of key business processes will ultimately affect user experience, so tracking these processes proactively and detecting issues before users are affected will grow as a primary requirement.

2. Transaction Tracing

Transaction tracing is important for rapid application performance problem diagnosis, but is not sufficient by itself for successful APM. Transaction tracing – i.e., the ability to watch a transaction through all its processing stages and determining which stage is responsible for slowdowns – is a key part of APM, so much so that in the recent years, transaction tracing and APM are virtually synonymous. While transaction tracing is a key for good APM, it is not the only requirement for APM. For example, if there is a slowdown of the backend database, all transactions will highlight slowness for database queries, but this is not very helpful in determining where a problem lies. Automating route cause analysis is outside of the purview of transaction tracing but it is a critical function, so must be addressed either separately, or better, holistically.

3. Deep-Dive Visibility

Transaction visibility must be augmented with deep-dive visibility into every tier of the underlying infrastructure. Troubleshooting performance issues requires extensive expertise about each and every tier of the infrastructure. Enabling performance diagnosis to be accomplished easily and with minimal human intervention requires a great deal of automation. APM tools must augment user experience monitoring and transaction tracing with in-depth insights and domain expertise inside every layer and every tier of the infrastructure. Additionally, these tools should be easy to set up and use, to help remove potential barriers to adoption.

4. Virtualization and Cloud

APM tools must become virtualization and cloud-aware. Virtualization and cloud computing cannot be looked at as yet another infrastructure silo. Performance issues in the virtualization or cloud computing tier affects application performance. Hence, APM tools must discover and correlate virtualization performance with that of the individual application component tiers.

5. Collaborative Management

Organizations will move to collaborative management from silo management. Given the number of tiers that an application cuts across, it will no longer be practical to have individual administrators focus on just the tiers of the infrastructure they operate and control. For the application has to support the business well, the entire application operations team must function as a cohesive unit. Application performance issues will be correlated across the different tiers of the infrastructure so that problems can be resolved quickly. This requires unified and correlated visibility into the entire infrastructure, which APM tools will provide. Development and operations will standardize on the same tool sets so problems detected by operations can be rapidly remediated by the exact development or operations area from which a performance issue is originating.

Srinivas Ramanathan is CEO and Founder of eG Innovations.

The Latest

In MEAN TIME TO INSIGHT Episode 23, Shamus McGillicuddy, VP of Research, Network Infrastructure and Operations, at EMA discusses the NetOps labor shortage ... 

Technology management is evolving, and in turn, so is the scope of FinOps. The FinOps Foundation recently updated their mission statement from "advancing the people who manage the value of cloud" to "advancing the people who manage the value of technology." This seemingly small change solidifies a larger evolution: FinOps practitioners have organically expanded to be focused on more than just cloud cost optimization. Today, FinOps teams are largely — and quickly — expanding their job descriptions, evolving into a critical function for managing the full value of technology ...

Enterprises are under pressure to scale AI quickly. Yet despite considerable investment, adoption continues to stall. One of the most overlooked reasons is vendor sprawl ... In reality, no organization deliberately sets out to create sprawling vendor ecosystems. More often, complexity accumulates over time through well-intentioned initiatives, such as enterprise-wide digital transformation efforts, point solutions, or decentralized sourcing strategies ...

Nearly every conversation about AI eventually circles back to compute. GPUs dominate the headlines while cloud platforms compete for workloads and model benchmarks drive investment decisions. But underneath that noise, a quieter infrastructure challenge is taking shape. The real bottleneck in enterprise AI is not processing power, it is the ability to store, manage and retrieve the relentless volumes of data that AI systems generate, consume and multiply ...

The 2026 Observability Survey from Grafana Labs paints a vivid picture of an industry maturing fast, where AI is welcomed with careful conditions, SaaS economics are reshaping spending decisions, complexity remains a defining challenge, and open standards continue to underpin it all ...

The observability industry has an evolving relationship with AI. We're not skeptics, but it's clear that trust in AI must be earned ... In Grafana Labs' annual Observability Survey, 92% said they see real value in AI surfacing anomalies before they cause downtime. Another 91% endorsed AI for forecasting and root cause analysis. So while the demand is there, customers need it to be trustworthy, as the survey also found that the practitioners most enthusiastic about AI are also the most insistent on explainability ...

In the modern enterprise, the conversation around AI has moved past skepticism toward a stage of active adoption. According to our 2026 State of IT Trends Report: The Human Side of Autonomous AI, nearly 90% of IT professionals view AI as a net positive, and this optimism is well-founded. We are seeing agentic AI move beyond simple automation to actively streamlining complex data insights and eliminating the manual toil that has long hindered innovation. However, as we integrate these autonomous agents into our ecosystems, the fundamental DNA of the IT role is evolving ...

AI workloads require an enormous amount of computing power ... What's also becoming abundantly clear is just how quickly AI's computing needs are leading to enterprise systems failure. According to Cockroach Labs' State of AI Infrastructure 2026 report, enterprise systems are much closer to failure than their organizations realize. The report ... suggests AI scale could cause widespread failures in as little as one year — making it a clear risk for business performance and reliability.

The quietest week your engineering team has ever had might also be its best. No alarms going off. No escalations. No frantic Teams or Slack threads at 2 a.m. Everything humming along exactly as it should. And somewhere in a leadership meeting, someone looks at the metrics dashboard, sees a flat line of incidents and says: "Seems like things are pretty calm over there. Do we really need all those people?" ... I've spent many years in engineering, and this pattern keeps repeating ...

The gap is widening between what teams spend on observability tools and the value they receive amid surging data volumes and budget pressures, according to The Breaking Point for Observability Leaders, a report from Imply ...