Gartner Q&A: Jonah Kowall Talks About APM - Part 2

Pete Goldin
APMdigest

In Part 2 of APMdigest's exclusive interview, Jonah Kowall, Research Vice President, IT Operations Management at Gartner, discusses Gartner's 2013 Magic Quadrant for Application Performance Monitoring (APM), complexity in today's product offerings, and the market's move to simplify APM.

Start with Part 1 of the interview

APM: In Gartner's 2013 Magic Quadrant for APM, complexity seems to be cited as a caution for many vendors.

JK: This also goes back to Software-as-a-Service (SaaS). If you simplify deployment, and you simplify the administration, that helps the complexity problem.

The other piece is obviously the usability of the product itself. A lot of these products require multiple user interfaces, multiple tools, especially when you look at some of the legacy solutions. That manifests itself in usability complexity, which limits who can use them, how they use the data, and the training required to get value from the solution.

Vendors keep adding too many features, and too many functions, far too many knobs and dials to all of these solutions. They listen to customer requests without considering the impact the requests have on the potential buyers or the broader customer base. This results in users being overwhelmed, and users having to become experts in order to use these tools, which is not ideal, of course.

Another piece that affects complexity is the use of analytics. How can the solution coach the user down a path to accomplish what they want to do? That is a combination of analyzing the data coming into the solution and learning what users typically do in their workflows. That can reduce how much time it takes the user to get the answer they are looking for. Analytics tools today often just present data or answer simplified questions, rather than providing insight and moving the tool toward being proactive.
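The idea of learning typical workflows can be illustrated with a toy sketch (the action names and data here are hypothetical, not from any vendor's product): tally which action users most often take after each step, then suggest that as the likely next step.

```python
from collections import Counter, defaultdict

def learn_next_steps(sessions):
    """For each action, find which action users most often take next."""
    transitions = defaultdict(Counter)
    for session in sessions:
        for current, nxt in zip(session, session[1:]):
            transitions[current][nxt] += 1
    # Map each action to its single most common successor
    return {action: counts.most_common(1)[0][0]
            for action, counts in transitions.items()}

# Hypothetical usage logs: each list is one user's click path
sessions = [
    ["alert", "topology", "trace"],
    ["alert", "topology", "logs"],
    ["alert", "trace"],
    ["dashboard", "alert", "topology"],
]

suggestions = learn_next_steps(sessions)
print(suggestions["alert"])  # most users go from an alert to the topology view
```

A real product would of course weigh far more signal than raw click counts, but even this simple frequency model shows how a tool could nudge a non-expert toward the path that usually leads to an answer.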

APM: Yet the report says you expect the product offerings to be simplified by the end of 2014 - to a point where only vendors who have simplified APM will be competitive. Does this mean all these vendors are aggressively overhauling and simplifying their portfolios right now?

JK: Yes, almost every vendor is making simplicity one of the key goals in terms of redesigns and revamps of their APM tools, from the big vendors to many of the smaller vendors that had this mantra from the beginning. Those players with a focus on usability and simplicity have been taking market share, which makes everyone else pay attention and emulate them.

Some vendors had to change gears because of what is happening in the market. They realize that people want simple, straightforward tools that don't have 100% of the functionality but can accomplish the basic needs for less money with less complexity. That is clearly what we are seeing happen, not just in APM but in monitoring as a whole.

I would say simplicity is not the norm today, but when you look at priorities across the well-established players in the market, simplicity is definitely something that they are all striving towards.

APM: What do you see as the greatest weakness of the APM market? What are the vendors missing?

JK: Buyers would like to have one monitoring tool. And that does not mean five tools that are integrated together – that means one tool.

Today, monitoring is divided into two categories: availability and performance. First, there is a lot of availability monitoring that happens, whether it is ensuring that the infrastructure is functional, or doing synthetic testing to make sure the application is functional. APM is obviously on the performance side. There is the whole performance element, whether it is APM or Network Performance Management (NPM), looking at the network or the applications and at the performance and latency of the actual application execution. Enterprises want to buy availability and performance together, meaning one tool, but it does not exist today. That is definitely where vendors are missing the mark.
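The availability/performance split Kowall describes can be sketched in a few lines (the field names and thresholds are hypothetical, purely for illustration): an availability check answers "is it up?", a performance check answers "is it fast enough?", and the unified tool buyers want would have to fold both into one status.

```python
def evaluate(check):
    """Combine an availability signal and a performance signal into one status."""
    if not check["up"]:                        # availability: synthetic probe failed
        return "DOWN"
    if check["latency_ms"] > check["slo_ms"]:  # performance: up, but too slow
        return "DEGRADED"
    return "OK"

# Hypothetical probe results a single unified tool would merge into one view,
# instead of splitting them across an availability product and an APM product
checks = [
    {"name": "web", "up": True,  "latency_ms": 120, "slo_ms": 500},
    {"name": "api", "up": True,  "latency_ms": 900, "slo_ms": 500},
    {"name": "db",  "up": False, "latency_ms": 0,   "slo_ms": 100},
]

for c in checks:
    print(c["name"], evaluate(c))  # web OK, api DEGRADED, db DOWN
```

The point of the sketch is the data model, not the checks themselves: when one tool sees both signals, "up but slow" and "down" are answers on the same dashboard rather than in two products.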

APM: Why do you think this is happening?

JK: Part of the issue is, if someone raises venture capital today to start a new company and solve the biggest problems, they would use that money to build an APM product rather than build an availability product. All the new companies out there innovating tend to be in the APM space because there is still innovation that needs to happen there as applications continue to change drastically.

The availability space is mostly commoditized, however, and no one has really disrupted it from a technology perspective. This perpetuates the problem: these small specialist vendors get bought by large vendors that offer multiple tools, and then they become yet another tool that needs to plug into a broader portfolio of technologies.

I think the biggest point that is being missed by the larger vendors is this idea of a single tool that can do multiple tasks. That is clearly what users want; it just does not exist. Users have to buy at least two, if not more, different tools today.

APM: There are companies that offer both availability and performance tools, or APM and NPM tools for example, but you are saying they need to put them together in one unified tool?

JK: One single product, versus being part of a suite. A vendor might have specific tools that monitor availability of the network and the servers. Some vendors have storage tools and virtualization tools, and then they also offer APM tools that are separate. Some of them even offer NPM tools that are separate. They may offer a console that rolls up information or alerts from all their tools. But generally speaking, these are all separate tools from separate acquisitions that work differently, look different, and are built on completely different technologies.

When a user implements a suite of these tools to cover monitoring, they have to deal with about five or six different technologies with different databases, different platforms, and different UIs. It becomes very complex and difficult to manage. That is why users are rebelling against that large suite of tools, because they just don't have time to manage and maintain it. It is not a good use of time.

APM: Speaking of NPM, in your 2014 Magic Quadrant for Network Performance Monitoring and Diagnostics (NPMD), released in March, you consider it a strength to offer both APM and NPMD together. Do you foresee these tools becoming more integrated?

JK: APM and NPMD target different buyers. Today, the needs of the network professional are different from the needs of APM buyers. If Software-Defined Networking (SDN) continues to evolve, and the network team starts to merge with other groups in the organization, and is no longer considered a silo – as it is today in almost every organization – then I think the tools will follow suit. But today there are clearly different buyers for these tools, for different use cases with different expectations.

Gartner Q&A: Jonah Kowall Talks About APM - Part 3

In Part 3, Jonah Kowall discusses the changing and volatile APM market in 2014 and beyond.

Related Links:

Gartner Q&A: Jonah Kowall Talks About APM - Part 1

