Q&A Part One: EMA Talks About Advanced Performance Analytics

Pete Goldin
APMdigest

In APMdigest's exclusive interview, Dennis Drogseth, VP of Research at Enterprise Management Associates (EMA), talks about the APA market, and the recently released EMA Radar for Advanced Performance Analytics (APA) Use Cases: Q4 2012.

APM: Let's start by defining APA.

DD: APA is about Big Data – huge volumes of data coming out of what I would call the service performance management space, tools that have evolved to manage the performance of applications and other services. APA assimilates that Big Data in near real-time, and uses a lot of advanced heuristics to look at problems in predictive, or at least innovative, ways.

APM: Are the capabilities to analyze Big Data in real-time and produce predictive results the three main defining characteristics that differentiate APA from traditional analytics?

DD: That's true, although I would add other potential attributes such as “self-learning” and “discovering the unobvious.”

But I can hear BI analysts saying that BI tools are evolving to deliver in real-time. All 22 vendors in the APA Radar Report came out of performance management; they did not come out of data warehousing. So the DNA is different. It is really, in part, a heritage statement – one that requires Big Data, heuristics, and some real-time analytic value-add, either predictive or otherwise strong.

And APA is not limited to real-time either. Some of these solutions have very strong historical analytics. One even has its own internal OLAP cube. This is why a lot of analysts so far haven’t looked at APA. It is more of a biological thing – sort of how species evolve – than it is a lovely little mathematical definition.
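To make the "self-learning" idea above concrete, here is a minimal sketch of the kind of baseline-learning heuristic Drogseth is describing: the detector learns normal behavior from the metric stream itself rather than from a static threshold. The smoothing factor and tolerance are illustrative assumptions, not values from any specific APA product.

```python
def anomalies(values, alpha=0.3, tolerance=0.5):
    """Flag indices where a metric deviates from a self-learned baseline.

    The baseline is an exponentially weighted moving average (EWMA) that
    adapts as new samples arrive -- a simple stand-in for the
    "self-learning" behavior advanced analytics tools advertise.
    """
    avg = values[0]          # seed the learned baseline with the first sample
    flagged = []
    for i, v in enumerate(values):
        # Compare against the baseline learned from *prior* samples only.
        if avg and abs(v - avg) / avg > tolerance:
            flagged.append(i)
        # Then fold the new sample into the baseline.
        avg = alpha * v + (1 - alpha) * avg
    return flagged

# A response-time series with one spike: only the spike is flagged.
print(anomalies([100, 102, 98, 101, 250, 99]))  # -> [4]
```

Note the absence of a hand-set threshold on the raw values: the spike at index 4 is anomalous only relative to what the stream itself established as normal, which is the property that distinguishes this style of analytics from fixed-threshold monitoring.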

APM: Do you consider APA a subset of Application Performance Management (APM), or a totally separate market?

DD: I would consider it separate but not totally separate. In APA, “A” stands for “advanced” not “application.” I am not saying there isn’t a strong overlap, but the two are not the same. You could make a case that APA is more accurately a subset of “service” performance.

The way I would define APM is certainly smaller than the span of APA. APA is more sprawling and more unruly than APM, in some respects. But there are a lot of APM capabilities that are not APA, such as basic monitoring. Maybe the best way to summarize is that I see APA as a child of APM and service management that will grow up to be bigger than they are in the future.

APM: It seems to me that you would almost have to have APA for APM, to make APM work today, to deal with Big Data and the other issues.

DD: To be competitive, yes. Not all of the 22 vendors in the Radar Report would claim to be APM, but for the ones who would, one of the factors that makes them more competitive is some APA capabilities. Yes, I would say it is a competitive differentiator for APM. But it is not limited to APM.

APM: Do users always buy APA separately or does it come with an APM solution?

DD: The goal of the radar is to show that APA can come in many different forms. In some cases, like Netuitive, it is primarily an overlay, and that general approach — to leverage APA by assimilating many different pre-existing data sources — is growing more and more. But in most cases, APA is part of a suite of solutions, many or most of which do some of their own monitoring, or can at least collect data directly.
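The "overlay" pattern mentioned above can be sketched in a few lines: the overlay collects nothing itself, but normalizes records already gathered by other tools into one common, time-ordered stream. The two source record formats below are hypothetical, invented purely to illustrate the normalization step.

```python
def normalize_apm(rec):
    # Hypothetical record from an APM tool: {"ts": ..., "app": ..., "resp_ms": ...}
    return {"time": rec["ts"], "source": "apm",
            "entity": rec["app"], "value": rec["resp_ms"]}

def normalize_infra(rec):
    # Hypothetical record from an infrastructure monitor:
    # {"when": ..., "host": ..., "cpu_pct": ...}
    return {"time": rec["when"], "source": "infra",
            "entity": rec["host"], "value": rec["cpu_pct"]}

def assimilate(apm_records, infra_records):
    """Merge heterogeneous pre-existing feeds into one time-ordered stream."""
    merged = ([normalize_apm(r) for r in apm_records] +
              [normalize_infra(r) for r in infra_records])
    return sorted(merged, key=lambda r: r["time"])

stream = assimilate(
    [{"ts": 2, "app": "checkout", "resp_ms": 340}],
    [{"when": 1, "host": "web01", "cpu_pct": 87}],
)
print([r["source"] for r in stream])  # -> ['infra', 'apm']
```

The design point is that all the analytic value lives above the normalization layer, which is why an overlay can sit on top of many vendors' existing monitoring tools instead of replacing them.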

APM: In your recent blog on APMdigest, you said “By Q4 of last year I realized that the industry was at an APA turning point.” What was the turning point?

DD: The turning point for me was when both IBM and HP introduced Netuitive-like functionality in Q4 of 2011. They introduced analytic overlays that would feed off third-party sources as well as their own solutions. And of course you could argue the same was true when ProactiveNet was acquired by BMC.

To tell you the truth, I have been watching Netuitive, along with other APA innovators, for years, and I have been waiting for the industry to move more in that direction. And in Q4 of last year I saw that the ship was beginning to sail – or at least leaving the dock.

APM: What has caused this new drive toward APA?

DD: That's a good question. What are the drivers? The need for more cross-domain capabilities, for one. If you think about how performance management has evolved, it began with a lot of point-solution tools. Niche tools. But unfortunately they were targeted at very narrow spans, and were sometimes device-specific. You can no longer run an IT organization based on a lot of siloed tools that only look at one domain in isolation.

The other driver is the increasing pressure for IT to become more efficient and deliver value as well as cost efficiencies to the business, which includes a much more enlightened summary of what is going on than was available in the past.

One of the factors that has sort of doomed the BSM acronym was its association with long, protracted, costly deployments that would take years to evolve. That is not how IT organizations can function anymore. So another driver is the need for much more dynamic, self-aware, self-learning capabilities.

Yet another driver for APA has been the need to manage more eclectic environments – thanks to Cloud computing. Cloud is often a mosaic of service provider infrastructures and internal IT infrastructures – Cloud and non-Cloud. How do you bring that all together and understand that from an effective, service-centric point of view?

Q&A Part Two: EMA Talks About Advanced Performance Analytics

Related Links:

EMA Releases New Radar Report on Advanced Performance Analytics

APMdigest Sponsors Featured in New EMA Radar Report on Advanced Performance Analytics

EMA's Dennis Drogseth Publishes New Novel

Click here to download the EMA Radar Report on Advanced Performance Analytics

View the EMA on-demand webinar: Advanced Performance Analytics (APA) Radar Report: Big Data with a New, Real-time Context

