APM and ITOA: Clearing Up the Confusion

Guy Warren

I was reading a discussion on a social media site about Application Performance Management, and realized that there is a lot of confusion about what Application Performance Monitoring, Application Performance Management (APM) and IT Operational Analytics (ITOA) actually are.

Judging by the words alone, you would believe that Application Performance Monitoring is focused on watching data and checking it for a particular condition or state. Application Performance Management suggests a wider field: one that certainly includes monitoring the application, but also managing other aspects of the IT estate. The degree to which complex analytics are used is unclear. IT Operational Analytics could potentially be seen as a subset of Application Performance Management, although APM's focus on applications might make it narrower in scope than ITOA.

To help clarify this rather muddy set of terms, we use two models which we find clearer, more logical and less ambiguous than the APM and ITOA definitions.

The Monitoring Maturity Model

The first model we call the Monitoring Maturity Model, because it is a layered model in which the higher levels are generally built on data collected from the lower levels. The model is:

1. Infrastructure Monitoring: Collecting data on the servers, operating systems, network and storage, and setting rule-based alerts to catch potential problems.

2. Basic Application Monitoring: Interrogating the operating system to capture, and alert on, data about the processes running on the servers. This would include CPU and memory utilization, disk I/O, network I/O, etc. (a minimal sketch of this level follows the list).

3. Advanced Application Monitoring: Installing a tailored agent on the server that captures data specific to the application it is monitoring. This can be "inside the app" data or "outside the app" data, the latter being useful for off-the-shelf software products and middleware.

4. Flow Monitoring: Capturing data about the information passing between applications, and monitoring and reporting on those data flows. This would include volumes per second, volumes per counterparty, latency, etc.

5. Business and IT Analysis: Analyzing both business data and "machine" data from levels 1 and 2 to understand the business activity and the behavior of the IT estate.
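
To make level 2 concrete, here is a minimal sketch in Python of that kind of rule-based process monitoring. It assumes the third-party psutil library for interrogating the operating system (the model itself does not name any tool), and the thresholds are purely illustrative:

import psutil

CPU_ALERT_PCT = 80.0  # illustrative: flag processes using over 80% of one CPU
MEM_ALERT_PCT = 25.0  # illustrative: flag processes using over 25% of physical memory

def check_processes():
    # process_iter pre-fetches the requested attributes into proc.info.
    # The first cpu_percent() reading for a process is 0.0; a real monitor
    # would sample on an interval so that later readings are meaningful.
    for proc in psutil.process_iter(["pid", "name", "cpu_percent", "memory_percent"]):
        cpu = proc.info["cpu_percent"] or 0.0
        mem = proc.info["memory_percent"] or 0.0
        if cpu > CPU_ALERT_PCT:
            print(f"ALERT {proc.info['name']} (pid {proc.info['pid']}): CPU {cpu:.1f}%")
        if mem > MEM_ALERT_PCT:
            print(f"ALERT {proc.info['name']} (pid {proc.info['pid']}): memory {mem:.1f}%")

if __name__ == "__main__":
    check_processes()

Levels 3 and 4 differ mainly in the data source: an agent inside or beside the application, or a tap on the flows between applications, rather than the operating system's view of a process.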

Monitoring vs Analytics

The second model separates monitoring from analytics. There is no hard definition dividing them, so we break the analysis into three types:

1. Detect: Rule-based detection of an alert condition. This is generally what people mean when they talk about Monitoring (a sketch contrasting all three types follows this list).

2. Analyze: Collecting lots of data, including data which did not trigger a rule in Detect, and analyzing it to discover more insight. This may be as simple as trends, or as complex as machine learning and time-series, pattern-based anomaly detection. It would also include techniques like Bayesian network causal analysis.

3. Predict: Using current and historic data to try to predict future or "what if" scenarios. Again, this can be as simple as extrapolation, or as complex as comparing the current state to empirically derived behavioral data, such as you might have gathered in a performance lab when stress testing an application.
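
To show how the three types differ in practice, here is a minimal Python sketch over a single, hypothetical metric series (only the standard library is used). The fixed threshold, the z-score cutoff and the least-squares trend line are illustrative stand-ins for the simple end of Detect, Analyze and Predict respectively:

from statistics import mean, stdev

# Hypothetical samples of one metric, e.g. messages per second between two apps.
series = [102, 98, 105, 110, 97, 103, 160, 108, 112, 115]

# 1. Detect: a fixed, rule-based threshold. This is what "Monitoring" usually means.
THRESHOLD = 150
alerts = [(i, v) for i, v in enumerate(series) if v > THRESHOLD]

# 2. Analyze: look at all the data, not just rule breaches. A z-score flags
# points that deviate from the series' own behavior; real tools might use
# machine learning or pattern-based anomaly detection instead.
mu, sigma = mean(series), stdev(series)
anomalies = [(i, v) for i, v in enumerate(series) if abs(v - mu) / sigma > 2]

# 3. Predict: extrapolate from history. A least-squares trend line is the
# simplest "what if" model; comparing current state to lab-derived behavioral
# data would sit at the complex end of the spectrum.
n = len(series)
x_mean = mean(range(n))
slope = (sum((x - x_mean) * (y - mu) for x, y in enumerate(series))
         / sum((x - x_mean) ** 2 for x in range(n)))
next_value = mu + slope * (n - x_mean)  # forecast for the next sample

print("detect:", alerts)       # [(6, 160)]
print("analyze:", anomalies)   # [(6, 160)]
print("predict:", round(next_value, 1))

The point is the escalation, not the specific techniques: Detect only fires when a pre-written rule is breached, Analyze finds the same outlier from the data's own behavior without any rule, and Predict says nothing about individual points at all, only about where the series is heading.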

Whichever way you model your IT estate and the behavior of your applications, it is necessary to have a clear, shared language so that everyone is talking about the same thing.

Guy Warren is CEO of ITRS Group.
