
Another Look At Gartner's 5 Dimensions of APM

Helping IT Operate at the New Speed of Business

APMdigest followers will already have read the article on Gartner's 5 Dimensions of APM. While that article examines the advantages of single- or multi-vendor sourcing for the Application Performance Management (APM) tools that address these different dimensions, we'd like to look at this matter from a different angle: What are the important issues and goals to consider when evaluating a suite of APM solutions -- from one or more vendors -- to ensure that your APM solution will help IT operate at the new speed of business?

Consider Gartner's 5 dimensions of APM again:

1. End-user experience monitoring

The ability to capture end-to-end application performance data is critical, but few of today's apps are straight-line affairs. A web-based storefront, for instance, may present a user with ads or catalog information from sources that are outside of the storefront owner's own infrastructure. A traditional experience monitoring tool might look at how quickly the website interacts with the back-end sales applications. However, the speed of that transaction is only one part -- and a relatively late part -- of the user's experience.

If a problem outside of the storefront owner's infrastructure is delaying the delivery of third-party catalog content -- and causing the entire web page to load slowly -- the user may never get to the point of clicking the "Place my Order" button.

Today's businesses need APM tools that can monitor all aspects of the user experience. You may have no control over the third-party servers pushing content to your site, but you need to know how those servers affect the end user experience.

It also helps if your APM tools can enable you to make changes on the fly if the network links or external servers are compromising the overall experience you want to provide your users.
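To illustrate, a minimal synthetic check might time each content source on a page separately, so a slow third-party server stands out even when your own back end is fast. The sketch below is hypothetical Python: the source names and threshold are invented, and the fetches are simulated callables rather than real HTTP calls.

```python
import time

# Hypothetical content sources for one storefront page. In a real monitor
# these would be HTTP fetches; here each source is simulated by a callable.
def fetch_backend():          # first-party sales application
    return "catalog page"

def fetch_third_party_ads():  # external ad server, outside your control
    time.sleep(0.05)          # simulate a slow external response
    return "ad banner"

def time_sources(sources, slow_threshold_s=0.02):
    """Time each source and flag those exceeding the threshold."""
    report = {}
    for name, fetch in sources.items():
        start = time.perf_counter()
        fetch()
        elapsed = time.perf_counter() - start
        report[name] = {"seconds": elapsed, "slow": elapsed > slow_threshold_s}
    return report

report = time_sources({
    "backend": fetch_backend,
    "third_party_ads": fetch_third_party_ads,
})
for name, stats in report.items():
    print(name, "SLOW" if stats["slow"] else "ok")
```

Per-source timings like these make it clear when the user's wait is caused by content you don't host, which is exactly the visibility this dimension calls for.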

2. Run-time application architecture discovery, modeling, and display

The environments in which today's applications execute are increasingly complex. With distributed networks, virtualized machines, web services and service-oriented architectures (and more), discovering, modeling, and displaying all the components that contribute to application performance is a challenge. You need tools that can provide real-time insight into all aspects of your application delivery infrastructure.

For efficiency's sake, IT organizations should be able to visualize this complete infrastructure on the same console that provides insight into the end-user experience. In a world of real-time business, IT teams need to be able to interact with all aspects of an APM solution quickly, efficiently, and effectively.  
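As a sketch of the modeling side, discovered components can be held as a simple dependency graph and walked to list everything that contributes to an application's delivery. The component names below are invented for illustration; a real tool would populate this map from its discovery data.

```python
# A minimal sketch of modeling discovered application components as a
# dependency graph. Names are hypothetical.
dependencies = {
    "storefront": ["web_tier", "ad_service"],
    "web_tier": ["app_server"],
    "app_server": ["orders_db", "catalog_service"],
    "ad_service": [],          # third-party, outside your infrastructure
    "orders_db": [],
    "catalog_service": [],
}

def components_behind(app, graph):
    """Walk the graph to list every component the app depends on."""
    seen, stack = set(), [app]
    while stack:
        node = stack.pop()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return sorted(seen)

print(components_behind("storefront", dependencies))
# → ['ad_service', 'app_server', 'catalog_service', 'orders_db', 'web_tier']
```

A model in this shape is what lets a console display the full delivery path for an application, third-party pieces included, alongside the end-user experience data.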

3. User-defined transaction profiling

User-defined transaction profiling is not just about tracing events as they occur among components or as they move across the paths discovered in the second dimension. What's important here is to understand whether events are occurring when, where, and as efficiently as you want them to occur.

Real-time IT organizations need APM tools for tracing events along an application path in the context of defined KPIs. To achieve that, these tools need to interact very efficiently with the APM tools you use for end user experience monitoring and run-time application architecture discovery, modeling, and display. This ensures efficient information reuse; more importantly, frictionless interaction between these tools minimizes latency in the system. In a real-time, performance-oriented world, latency is to be avoided.
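A minimal sketch of that idea: compare each observed step of a traced transaction against a user-defined KPI threshold and surface the breaches. The step names and thresholds below are hypothetical; a real profiler would receive these timings from instrumentation along the path.

```python
# Hypothetical user-defined KPIs: maximum acceptable latency per step, in ms.
kpis_ms = {"login": 200, "add_to_cart": 150, "checkout": 500}

def profile_transaction(observed_ms, kpis):
    """Return the steps whose observed latency breached their KPI."""
    return {step: (ms, kpis[step])
            for step, ms in observed_ms.items()
            if step in kpis and ms > kpis[step]}

# Observed latencies for one traced transaction (invented sample values).
breaches = profile_transaction(
    {"login": 120, "add_to_cart": 310, "checkout": 480}, kpis_ms)
print(breaches)  # → {'add_to_cart': (310, 150)}
```

The point of the sketch is the shape of the check, not the numbers: profiling is tracing plus judgment against targets you defined, which is why it has to share data freely with the other dimensions' tools.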

4. Component deep-dive monitoring in application context

The critical consideration related to deep-dive monitoring is how well the tools you use work together. Six best-of-breed component monitoring tools presenting information on six different consoles would be absurd. Relying on a single manager of managers (MOM), though, to create the appearance of an integrated monitoring solution may simply mask the inefficiencies inherent in trying to rely on six different monitoring tools.

If you decide not to use a single tool to provide deep-dive monitoring of your entire business infrastructure, be sure that your systems integrator (SI) integrates the different tools you have selected with low-latency, real-time responsiveness in mind. Moreover, be sure that all the information captured by the tools can be used in real time by the other components within the APM suite.
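One way to picture that sharing requirement is a common event schema that every component tool's output gets mapped onto, so any tool's data is immediately usable by the rest of the suite. The tool names and field names below are invented for illustration.

```python
# A minimal sketch of normalizing events from several component-level
# monitoring tools into one shared record format. All names are hypothetical.
def normalize(tool, raw):
    """Map a tool-specific event dict onto the suite's common schema."""
    mappings = {
        "db_monitor":  {"elapsed": "latency_ms", "object": "component"},
        "jvm_monitor": {"millis":  "latency_ms", "vm":     "component"},
    }
    common = {"source": tool}
    for src_key, dst_key in mappings[tool].items():
        common[dst_key] = raw[src_key]
    return common

events = [
    normalize("db_monitor",  {"elapsed": 42, "object": "orders_db"}),
    normalize("jvm_monitor", {"millis": 9,  "vm": "app_server"}),
]
print(events)
```

Whether the mapping lives in an integration layer or in each tool, the test is the same: every event is consumable by every other component of the suite without a batch transformation step in between.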

5. Analytics

If your data is modeled correctly -- and the important word here is "if" -- you can use sophisticated analytical tools to discover all kinds of opportunities to improve application performance or the user's experience of your application. The important consideration is the data model itself. All the tools we have just discussed must be able to contribute data easily to a performance management database (PMDB). If they cannot, you then have to invest in further complexity to deploy additional tools to transform data from one solution so that it becomes useful to other tools -- and that is highly inefficient.   
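A toy illustration of the PMDB idea: when every tool contributes records in one schema, analysis becomes a straightforward query over the shared store rather than a data-transformation project. The sample records and the ranking function below are invented for illustration.

```python
# A minimal sketch of a shared performance management database (PMDB):
# every tool writes records in one schema. Sample records are hypothetical.
pmdb = [
    {"component": "web_tier",   "latency_ms": 35},
    {"component": "orders_db",  "latency_ms": 120},
    {"component": "ad_service", "latency_ms": 300},
    {"component": "orders_db",  "latency_ms": 80},
]

def slowest(records, top=2):
    """Rank components by their worst observed latency."""
    worst = {}
    for rec in records:
        c = rec["component"]
        worst[c] = max(worst.get(c, 0), rec["latency_ms"])
    return sorted(worst.items(), key=lambda kv: kv[1], reverse=True)[:top]

print(slowest(pmdb))  # → [('ad_service', 300), ('orders_db', 120)]
```

Real analytics are far richer than a worst-latency ranking, but the prerequisite is the same: a data model all five dimensions' tools can write to directly.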

Ultimately, it is important to consider the world in which your applications exist. Business is increasingly moving to a real-time model. It requires real-time responsiveness. Batch-oriented APM tools that are designed to support a break-fix mentality and aimed at infrastructure running exclusively on a corporate network over which IT has complete control -- these won't help you in the world we live in.

Your APM tools must provide real-time, transaction-oriented support. They must contribute to real-time responsiveness, driven by the needs of the business and focused on the quality of the user experience of your applications -- both inside and beyond the firewall.

About Raj Sabhlok and Suvish Viswanathan

Raj Sabhlok is the President of ManageEngine. Suvish Viswanathan is an APM Research Analyst at ManageEngine. ManageEngine is a division of Zoho Corp. and the maker of a globally renowned suite of cost-effective network, systems, security, and applications management software solutions.

Related Links:

www.manageengine.com

Gartner's 5 Dimensions of APM
