
Another Look At Gartner's 5 Dimensions of APM

Helping IT Operate at the New Speed of Business

APMdigest followers will already have read the article on Gartner's 5 Dimensions of APM. While that article examines the advantages of single- or multi-vendor sourcing for the Application Performance Management (APM) tools that address these different dimensions, we'd like to look at this matter from a different angle: What are the important issues and goals to consider when evaluating a suite of APM solutions -- from one or more vendors -- to ensure that your APM solution will help IT operate at the new speed of business?

Consider Gartner's 5 dimensions of APM again:

1. End-user experience monitoring

The ability to capture end-to-end application performance data is critical, but few of today's apps are straight-line affairs. A web-based storefront, for instance, may present a user with ads or catalog information from sources that are outside of the storefront owner's own infrastructure. A traditional experience monitoring tool might look at how quickly the website interacts with the back-end sales applications. However, the speed of that transaction is only one part -- and a relatively late part -- of the user's experience.

If a problem outside of the storefront owner's infrastructure is delaying the delivery of third-party catalog content -- and causing the entire web page to load slowly -- the user may never get to the point of clicking the "Place my Order" button.

Today's businesses need APM tools that can monitor all aspects of the user experience. You may have no control over the third-party servers pushing content to your site, but you need to know how those servers affect the end user experience.

It also helps if your APM tools can enable you to make changes on the fly if the network links or external servers are compromising the overall experience you want to provide your users.
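To make the idea concrete, here is a minimal sketch of the kind of check an end-user experience monitor performs: given per-resource load timings (as a browser agent might report them), flag slow third-party content that the site owner does not control. All hostnames, timings, and the threshold are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical resource-timing records a monitoring agent might report:
# (URL, load duration in ms), covering first- and third-party content.
FIRST_PARTY = "shop.example.com"
resources = [
    ("https://shop.example.com/index.html", 120),
    ("https://shop.example.com/api/cart", 80),
    ("https://ads.thirdparty.net/banner.js", 950),
    ("https://catalog.partner.io/items.json", 640),
]

def slowest_third_party(records, first_party_host, threshold_ms=500):
    """Flag third-party resources whose load time exceeds a threshold."""
    offenders = []
    for url, duration in records:
        host = urlparse(url).netloc
        if host != first_party_host and duration > threshold_ms:
            offenders.append((host, duration))
    # Worst offenders first, so operators see the biggest delay at the top
    return sorted(offenders, key=lambda r: r[1], reverse=True)

print(slowest_third_party(resources, FIRST_PARTY))
# [('ads.thirdparty.net', 950), ('catalog.partner.io', 640)]
```

A real monitor would gather these timings from browser instrumentation rather than a static list, but the principle is the same: the slow ad server, not your own storefront, may be what the user actually experiences.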

2. Run-time application architecture discovery, modeling, and display

The environments in which today's applications execute are increasingly complex. With distributed networks, virtual machines, web services, and service-oriented architectures (and more), discovering, modeling, and displaying all the components that contribute to application performance is a challenge. You need tools that can provide real-time insight into all aspects of your application delivery infrastructure.

For efficiency's sake, IT organizations should be able to visualize this complete infrastructure on the same console that provides insight into the end-user experience. In a world of real-time business, IT teams need to be able to interact with all aspects of an APM solution quickly, efficiently, and effectively.  
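At its core, runtime discovery produces a dependency graph of the delivery infrastructure. The sketch below, with entirely hypothetical component names, shows the kind of model such a tool maintains and how it can answer "what does this user-facing service depend on?"

```python
# Hypothetical runtime-discovered topology: each component maps to the
# components it depends on (edges an APM agent might observe).
topology = {
    "storefront": ["app-server", "cdn"],
    "app-server": ["orders-db", "catalog-svc"],
    "catalog-svc": ["catalog-db"],
    "cdn": [],
    "orders-db": [],
    "catalog-db": [],
}

def dependencies_of(component, graph):
    """Walk the discovered graph to list everything a component relies on."""
    seen, stack = set(), [component]
    while stack:
        node = stack.pop()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return sorted(seen)

print(dependencies_of("storefront", topology))
# ['app-server', 'catalog-db', 'catalog-svc', 'cdn', 'orders-db']
```

The same graph that drives the topology display can drive impact analysis: when one node degrades, the tool knows which user-facing services sit upstream of it.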

3. User-defined transaction profiling

User-defined transaction profiling is not just about tracing events as they occur among components or as they move across the paths discovered in the second dimension. What's important here is to understand whether events are occurring when, where, and as efficiently as you want them to occur.

Real-time IT organizations need APM tools for tracing events along an application path in the context of defined KPIs. To achieve that, these tools need to interact very efficiently with the APM tools you use for end-user experience monitoring and run-time application architecture discovery, modeling, and display. This ensures efficient information reuse; more importantly, frictionless interaction between these tools minimizes latency in the system. In a real-time, performance-oriented world, latency is to be avoided.
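Tracing "in the context of defined KPIs" can be as simple as comparing each hop's measured latency against a budget the business has set for it. A minimal sketch, with hypothetical tier names, timings, and budgets:

```python
# Hypothetical per-hop timings (ms) for one traced transaction, alongside
# the KPI budget each hop is expected to meet.
trace = [
    ("web-tier", 40),
    ("app-tier", 210),
    ("db-tier", 35),
]
kpi_budget_ms = {"web-tier": 100, "app-tier": 150, "db-tier": 50}

def kpi_violations(hops, budgets):
    """Return (hop, measured_ms, budget_ms) for hops that miss their KPI."""
    return [(hop, ms, budgets[hop]) for hop, ms in hops if ms > budgets[hop]]

print(kpi_violations(trace, kpi_budget_ms))
# [('app-tier', 210, 150)]
```

The value is not the arithmetic, which is trivial, but that the trace data and the KPI definitions live close enough together for this comparison to happen in real time rather than in an after-the-fact report.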

4. Component deep-dive monitoring in application context

The critical consideration related to deep-dive monitoring is how well the tools you use work together. Six best-of-breed component monitoring tools presenting information on six different consoles would be absurd. Relying on a single manager of managers (MOM), though, to create the appearance of an integrated monitoring solution may simply mask the inefficiencies inherent in trying to rely on six different monitoring tools.

If you decide not to use a single tool to provide deep-dive monitoring of your entire business infrastructure, be sure that your systems integrator (SI) integrates the different tools you have selected with low-latency, real-time responsiveness in mind. Moreover, be sure that all the information captured by the tools can be used in real time by the other components within the APM suite.
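One common way such an integration works is a thin adapter layer that maps each tool's native event format into a single shared schema the rest of the suite understands. The sketch below is purely illustrative; the tool names, field names, and schema are all hypothetical.

```python
# Hypothetical raw events from two different monitoring tools, each with
# its own field names; a thin adapter maps both into one shared schema.
def normalize(event, tool):
    """Map a tool-specific event into the suite's common format."""
    if tool == "net-monitor":
        return {"component": event["iface"], "metric": "latency_ms",
                "value": event["rtt"]}
    if tool == "db-monitor":
        return {"component": event["instance"], "metric": "latency_ms",
                "value": event["query_time_ms"]}
    raise ValueError(f"unknown tool: {tool}")

events = [
    normalize({"iface": "wan-link-1", "rtt": 42}, "net-monitor"),
    normalize({"instance": "orders-db", "query_time_ms": 87}, "db-monitor"),
]
print(events)
```

However the adapters are built, the test of the integration is the one stated above: can every downstream component of the suite consume these normalized events in real time, without a batch translation step in between?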

5. Analytics

If your data is modeled correctly -- and the important word here is "if" -- you can use sophisticated analytical tools to discover all kinds of opportunities to improve application performance or the user's experience of your application. The important consideration is the data model itself. All the tools we have just discussed must be able to contribute data easily to a performance management database (PMDB). If they cannot, you then have to invest in further complexity to deploy additional tools to transform data from one solution so that it becomes useful to other tools -- and that is highly inefficient.   
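What "modeled correctly" buys you is that analytics become simple queries over one consistent store. A minimal sketch of a PMDB as a list of records in a common schema, with one such query; the components and values are hypothetical.

```python
from collections import defaultdict

# Hypothetical normalized records, as different tools might write them to
# a shared performance management database (PMDB): one common schema.
pmdb = [
    {"component": "storefront", "metric": "latency_ms", "value": 120},
    {"component": "storefront", "metric": "latency_ms", "value": 180},
    {"component": "orders-db",  "metric": "latency_ms", "value": 40},
]

def average_by_component(records, metric):
    """A simple analytic query the shared data model makes trivial."""
    buckets = defaultdict(list)
    for r in records:
        if r["metric"] == metric:
            buckets[r["component"]].append(r["value"])
    return {c: sum(v) / len(v) for c, v in buckets.items()}

print(average_by_component(pmdb, "latency_ms"))
# {'storefront': 150.0, 'orders-db': 40.0}
```

If the tools cannot write to a schema like this directly, every such query first requires a transformation pipeline per tool, which is exactly the added complexity the paragraph above warns about.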

Ultimately, it is important to consider the world in which your applications exist. Business is increasingly moving to a real-time model. It requires real-time responsiveness. Batch-oriented APM tools that are designed to support a break-fix mentality and aimed at infrastructure running exclusively on a corporate network over which IT has complete control -- these won't help you in the world we live in.

Your APM tools must provide real-time, transaction-oriented support. They must contribute to real-time responsiveness, driven by the needs of the business and focused on the quality of the user experience of your applications -- both inside and beyond the firewall.

About Raj Sabhlok and Suvish Viswanathan

Raj Sabhlok is the President of ManageEngine. Suvish Viswanathan is an APM Research Analyst at ManageEngine. ManageEngine is a division of Zoho Corp. and maker of a globally renowned suite of cost-effective network, systems, security, and applications management software solutions.

Related Links:

www.manageengine.com

Gartner's 5 Dimensions of APM
