APMdigest followers will already have read the article on Gartner's 5 Dimensions of APM. While that article examines the advantages of single- or multi-vendor sourcing for the Application Performance Management (APM) tools that address these different dimensions, we'd like to look at this matter from a different angle: What are the important issues and goals to consider when evaluating a suite of APM solutions -- from one or more vendors -- to ensure that your APM solution will help IT operate at the new speed of business?
Consider Gartner's 5 dimensions of APM again:
1. End-user experience monitoring
The ability to capture end-to-end application performance data is critical, but few of today's apps are straight-line affairs. A web-based storefront, for instance, may present a user with ads or catalog information from sources that are outside of the storefront owner's own infrastructure. A traditional experience monitoring tool might look at how quickly the website interacts with the back-end sales applications. However, the speed of that transaction is only one part -- and a relatively late part -- of the user's experience.
If a problem outside of the vendor's infrastructure is delaying the delivery of third-party catalog content -- and causing the entire web page to load slowly -- the user may never get to the point of clicking the "Place my Order" button.
Today's businesses need APM tools that can monitor all aspects of the user experience. You may have no control over the third-party servers pushing content to your site, but you need to know how those servers affect the end user experience.
It also helps if your APM tools can enable you to make changes on the fly if the network links or external servers are compromising the overall experience you want to provide your users.
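A minimal sketch of the idea behind this kind of monitoring: time the delivery of the storefront page and each third-party resource it embeds, so a slow external server shows up explicitly rather than being buried in overall page-load time. The URLs and resource names here are hypothetical placeholders, not real endpoints.

```python
import time
import urllib.request

def time_fetch(url, timeout=10):
    """Return elapsed seconds to fetch a URL, or None on failure."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()
    except OSError:
        return None
    return time.monotonic() - start

# The storefront page plus the third-party sources it embeds (hypothetical).
resources = {
    "storefront": "https://shop.example.com/",
    "catalog-cdn": "https://catalog.example-partner.com/feed.json",
    "ad-server": "https://ads.example-adnetwork.com/slot/1",
}

for name, url in resources.items():
    elapsed = time_fetch(url)
    status = f"{elapsed:.2f}s" if elapsed is not None else "FAILED"
    print(f"{name:12s} {status}")
```

A real APM agent would capture this from the browser side as well, but even this crude server-side probe separates "our site is slow" from "our ad network is slow."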
2. Run-time application architecture discovery, modeling, and display
The environments in which today's applications execute are increasingly complex. With distributed networks, virtualized machines, web services and service-oriented architectures (and more), discovering, modeling, and displaying all the components that contribute to application performance is a challenge. You need tools that can provide real-time insight into all aspects of your application delivery infrastructure.
For efficiency's sake, IT organizations should be able to visualize this complete infrastructure on the same console that provides insight into the end-user experience. In a world of real-time business, IT teams need to be able to interact with all aspects of an APM solution quickly, efficiently, and effectively.
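To make the discovery-and-modeling idea concrete, here is a minimal sketch of a discovered topology as a dependency map, with a helper that answers the question visualization tools answer graphically: "if this component degrades, what else is affected?" The component names are invented for illustration.

```python
# Hypothetical discovered topology: each component maps to what it depends on.
topology = {
    "web-frontend": ["app-server-1", "app-server-2", "catalog-cdn"],
    "app-server-1": ["orders-db", "payment-gateway"],
    "app-server-2": ["orders-db", "payment-gateway"],
    "orders-db": [],
    "payment-gateway": [],
    "catalog-cdn": [],
}

def impacted_by(component, topo):
    """Return every component whose delivery path includes `component`."""
    impacted = set()
    changed = True
    while changed:  # propagate impact upward until nothing new is added
        changed = False
        for node, deps in topo.items():
            if node in impacted:
                continue
            if component in deps or impacted & set(deps):
                impacted.add(node)
                changed = True
    return impacted

print(sorted(impacted_by("orders-db", topology)))
# Both app servers depend on orders-db, and the frontend depends on them.
```

A real discovery tool builds this map automatically and keeps it current; the value is that user-experience data and this topology live in the same model, so an alert can be traced to affected users in one step.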
3. User-defined transaction profiling
User-defined transaction profiling is not just about tracing events as they occur among components or as they move across the paths discovered in the second dimension. What's important here is to understand whether events are occurring when, where, and as efficiently as you want them to occur.
Real-time IT organizations need APM tools that trace events along an application path in the context of defined KPIs. To achieve that, these tools need to interact very efficiently with the APM tools you use for end-user experience monitoring and for run-time application architecture discovery, modeling, and display. This ensures efficient information reuse; more importantly, frictionless interaction between these tools minimizes latency in the system. In a real-time, performance-oriented world, latency is to be avoided.
4. Component deep-dive monitoring in application context
The critical consideration related to deep-dive monitoring is how well the tools you use work together. Six best-of-breed component monitoring tools presenting information on six different consoles would be absurd. Relying on a single manager of managers (MOM), though, to create the appearance of an integrated monitoring solution may simply mask the inefficiencies inherent in trying to rely on six different monitoring tools.
If you decide not to use a single tool to provide deep-dive monitoring of your entire business infrastructure, be sure that your SI integrates the different tools you have selected with low-latency, real-time responsiveness in mind. Moreover, be sure that all the information captured by the tools can be used in real time by the other components within the APM suite.
5. Application performance analytics
If your data is modeled correctly -- and the important word here is "if" -- you can use sophisticated analytical tools to discover all kinds of opportunities to improve application performance or the user's experience of your application. The important consideration is the data model itself. All the tools we have just discussed must be able to contribute data easily to a performance management database (PMDB). If they cannot, you then have to invest in further complexity to deploy additional tools to transform data from one solution so that it becomes useful to other tools -- and that is highly inefficient.
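What "contributing data easily to a PMDB" means in practice is normalizing each tool's output into one shared schema. A minimal sketch, assuming two hypothetical tools that emit metrics in different shapes:

```python
from datetime import datetime, timezone

def from_tool_a(raw):
    """Tool A (hypothetical) emits: {"ts": epoch_seconds, "metric": ..., "val": ...}"""
    return {
        "timestamp": datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
        "metric": raw["metric"],
        "value": float(raw["val"]),
        "source": "tool-a",
    }

def from_tool_b(raw):
    """Tool B (hypothetical) emits: {"time": ISO-8601 string, "name": ..., "reading": ...}"""
    return {
        "timestamp": datetime.fromisoformat(raw["time"]),
        "metric": raw["name"],
        "value": float(raw["reading"]),
        "source": "tool-b",
    }

# Normalized records land in one store with one schema.
pmdb = [
    from_tool_a({"ts": 1700000000, "metric": "response_ms", "val": 412}),
    from_tool_b({"time": "2023-11-14T22:13:25+00:00",
                 "name": "response_ms", "reading": 398.5}),
]

# Once normalized, a single query spans every tool's data.
avg = sum(r["value"] for r in pmdb if r["metric"] == "response_ms") / 2
print(f"avg response_ms across tools: {avg}")
```

When tools cannot emit into a shared model like this, each cross-tool question requires a bespoke transformation step -- the inefficiency the paragraph above warns about.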
Ultimately, it is important to consider the world in which your applications exist. Business is increasingly moving to a real-time model. It requires real-time responsiveness. Batch-oriented APM tools that are designed to support a break-fix mentality and aimed at infrastructure running exclusively on a corporate network over which IT has complete control -- these won't help you in the world we live in.
Your APM tools must provide real-time, transaction-oriented support. They must contribute to real-time responsiveness, driven by the needs of business and focused on the quality of the user experience of your applications -- both inside and beyond the firewall.
About Raj Sabhlok and Suvish Viswanathan
Raj Sabhlok is the President of ManageEngine. Suvish Viswanathan is an APM Research Analyst at ManageEngine. ManageEngine is a division of Zoho Corp. and maker of a globally renowned suite of cost-effective network, systems, security, and applications management software solutions.