
APM for Development - Unified Monitoring for IT Ops

Scott Hollis

Ensuring application performance is a never-ending task that involves multiple products, features, and best practices. No single process, feature, or product does everything. A good place to start is pre-production and production monitoring with both an Application Performance Management (APM) tool and a Unified Monitoring tool.

The APM tool traces and instruments your application and application server activity, and often the end-user experience via synthetic transactions. The development team and DevOps folks need this.

The Unified Monitoring tool will monitor the supporting infrastructure. The IT Ops team needs this. DevOps likes it too because it helps make IT Ops more effective, which in turn helps assure application delivery.

More Cost Effective

APM tools do not specialize in infrastructure monitoring the way unified monitoring solutions do, and unified monitoring solutions do not provide the application monitoring depth and diagnostics that APM tools do. On top of that, the different audiences need different information.

The best approach is to buy APM for the most critical applications. Most organizations use APM for only 10% to 15% of their applications; it is too expensive to buy for everything. For the second-tier applications that need some monitoring, they use the unified monitoring solution. It is much less expensive, and if you select one with synthetic transaction capability, you get "good enough" end-user experience monitoring to know whether the application is performing well.
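To make that "good enough" synthetic monitoring concrete, here is a minimal sketch, not taken from the article and not any vendor's actual product: it times a scripted probe (for example, an HTTP GET of a login page) and classifies the user experience against latency thresholds. The function name, return shape, and threshold values are all illustrative assumptions.

```python
import time

def run_synthetic_check(probe, warn_ms=500, crit_ms=2000):
    """Run one synthetic transaction and classify the result.

    probe: a zero-argument callable performing the scripted
    transaction; it should raise an exception on failure.
    warn_ms / crit_ms are hypothetical thresholds, not values
    from the article.
    """
    start = time.perf_counter()
    try:
        probe()
    except Exception as exc:
        # The transaction failed outright: the service is down.
        return {"status": "down", "error": str(exc)}
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms >= crit_ms:
        status = "critical"   # users are clearly impacted
    elif elapsed_ms >= warn_ms:
        status = "at_risk"    # degraded, but not yet an outage
    else:
        status = "ok"
    return {"status": status, "elapsed_ms": elapsed_ms}
```

A scheduler would run such a check every few minutes and alert on any status other than "ok" — which is roughly the distinction the article draws between "at risk", "down", and "somewhere in between".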

Service-Centric is Key

When it comes to unified monitoring, it is important to understand that most unified monitoring vendors provide only endpoint monitoring. With endpoint monitoring alone, it is impossible to provide highly accurate root-cause isolation, to identify which service or application is impacted, or to gauge the extent of the impact. Is the service merely at risk, without affecting application delivery yet? Is it down? Or is it somewhere in between?

Be sure the unified monitoring vendor is service-centric and models the relationships between components, and that it identifies the root cause, the service or application impacted, and the extent of the impact. This can save hours during an outage.

Better yet, by flagging services that are at risk, such a tool helps you proactively identify and address issues before service or application delivery is impacted.
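The service-centric idea above can be sketched in a few lines. This is a hypothetical illustration, not any product's actual data model: each service lists the infrastructure components it depends on, so a component failure can be translated into which services are impacted and to what extent. All names, and the model itself, are invented for the example.

```python
# Hypothetical service model: each service maps to the
# components it depends on. Names are illustrative only.
SERVICE_MODEL = {
    "online-banking": ["web-vm-1", "app-vm-1", "db-cluster"],
    "mobile-api":     ["app-vm-1", "db-cluster"],
    "intranet":       ["web-vm-2"],
}

def impacted_services(failed_components, model=SERVICE_MODEL):
    """Map failing components to the services they impact.

    Returns {service: "down" | "at_risk"}: "down" if every
    dependency of the service has failed, "at_risk" if only
    some have.
    """
    failed = set(failed_components)
    impact = {}
    for service, deps in model.items():
        hit = failed.intersection(deps)
        if not hit:
            continue  # no dependency of this service failed
        impact[service] = "down" if set(deps) <= failed else "at_risk"
    return impact
```

With such a model, a single alert on "db-cluster" is immediately reported as "online-banking and mobile-api at risk" rather than as an anonymous endpoint failure — which is the root-cause and impact context the article argues endpoint monitoring alone cannot provide.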

Scott Hollis is Director of Product Marketing for Zenoss.
