Ensuring application performance is a never-ending task that involves multiple products, features, and best practices. No single process, feature, or product does everything. A good place to start is pre-production and production monitoring with both an Application Performance Management (APM) tool and a unified monitoring tool.
The APM tool traces and instruments your application and application server activity, and often the end-user experience via synthetic transactions. The development team and DevOps folks need this.
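To make tracing and instrumentation concrete, here is a minimal sketch using the open source OpenTelemetry SDK for Python. The service name, span names, and attribute are illustrative assumptions; a commercial APM agent typically instruments common frameworks automatically and exports spans to its own backend rather than to the console.

# A minimal tracing sketch with the OpenTelemetry Python SDK.
# Span and attribute names here are illustrative, not tied to any product.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Wire up a tracer that prints finished spans to stdout; a real APM agent
# would export to the vendor's collector instead.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout-service")

def place_order(order_id: str) -> None:
    # Each traced operation becomes a span carrying timing and metadata.
    with tracer.start_as_current_span("place_order") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("charge_card"):
            pass  # the payment call would go here

place_order("A-1001")

Run once, this prints two nested spans with start and end timestamps, which is the raw material an APM tool aggregates into transaction traces and latency breakdowns.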
The unified monitoring tool monitors the supporting infrastructure. The IT Ops team needs this. DevOps likes it too, because it makes IT Ops more effective, which in turn helps assure application delivery.
More Cost-Effective
APM tools do not specialize in infrastructure monitoring the way unified monitoring solutions do, and unified monitoring solutions do not provide the application monitoring depth and diagnostics that APM tools do. On top of that, the different audiences need different information.
The best approach is to buy APM for the most critical applications. Most organizations use APM for 10% to 15% of their applications; it is too expensive to buy for everything. For the second-tier applications that need some monitoring, they use the unified monitoring solution. It is much less expensive, and if you select one with synthetic transaction capability, you get "good enough" end-user experience monitoring to know whether the application is performing well.
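As an illustration of what "good enough" synthetic monitoring can look like, here is a minimal sketch of a synthetic transaction check in Python. The URL and the two-second threshold are hypothetical; real synthetic monitoring tools run scripted, multi-step user journeys from multiple locations on a schedule.

# A minimal synthetic-transaction sketch: exercise a key URL and flag
# slow or failed responses. URL and threshold below are hypothetical.
import time
import urllib.request

CHECK_URL = "https://app.example.com/login"  # hypothetical endpoint
THRESHOLD_SECONDS = 2.0                      # hypothetical latency SLA

def check_once() -> None:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(CHECK_URL, timeout=10) as resp:
            elapsed = time.monotonic() - start
            if resp.status != 200:
                print(f"FAIL: HTTP {resp.status}")
            elif elapsed > THRESHOLD_SECONDS:
                print(f"SLOW: {elapsed:.2f}s (threshold {THRESHOLD_SECONDS}s)")
            else:
                print(f"OK: {elapsed:.2f}s")
    except Exception as exc:
        # DNS failures, timeouts, and HTTP errors all land here.
        print(f"DOWN: {exc}")

check_once()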
Service-Centric is Key
When it comes to unified monitoring, it is important to understand that most unified monitoring vendors provide only endpoint monitoring. With endpoint monitoring alone, it is impossible to provide highly accurate root-cause isolation. It also cannot identify which service or application is impacted, or the extent of the impact: is the service merely at risk without application delivery being affected yet, is it down, or is it somewhere in between?
Be sure the unified monitoring vendor is service-centric and models the relationships between components, and that it identifies the root cause, the service or application impacted, and the extent of the impact. This can save hours during an outage.
Better yet, by identifying when services are at risk, such a tool helps you proactively find and address issues before service or application delivery is impacted, as the sketch below illustrates.
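To make the service-centric idea concrete, here is a minimal sketch, in Python, of a model that maps a service to the components it depends on and classifies the impact of component failures as down, at risk, or healthy. The service, tiers, and host names are hypothetical assumptions; a real service-centric product builds and maintains this model automatically.

# A minimal service-impact sketch: map infrastructure components to the
# services that depend on them, so a component event can be translated
# into service impact. All names and tiers below are hypothetical.
SERVICE_MODEL = {
    "online-banking": {
        "web": ["web01", "web02"],  # redundant tier
        "database": ["db01"],       # single point of failure
    },
}

def service_impact(service: str, failed: set[str]) -> str:
    """Classify impact: 'down' if any tier has lost all members,
    'at risk' if a redundant tier has lost some members, else 'healthy'."""
    status = "healthy"
    for tier, members in SERVICE_MODEL[service].items():
        alive = [m for m in members if m not in failed]
        if not alive:
            return "down"       # whole tier lost: delivery is impacted
        if len(alive) < len(members):
            status = "at risk"  # redundancy reduced but still delivering
    return status

print(service_impact("online-banking", {"web02"}))  # at risk
print(service_impact("online-banking", {"db01"}))   # down
print(service_impact("online-banking", set()))      # healthy

The "at risk" case is the proactive win: losing one redundant web server does not impact users yet, so an operator who sees that state can restore redundancy before the outage happens.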
Scott Hollis is Director of Product Marketing for Zenoss.