APMdigest followers will already have read the article on Gartner's 5 Dimensions of APM. While that article examines the advantages of single- or multi-vendor sourcing for the Application Performance Management (APM) tools that address these different dimensions, we'd like to look at this matter from a different angle: What are the important issues and goals to consider when evaluating a suite of APM solutions -- from one or more vendors -- to ensure that your APM solution will help IT operate at the new speed of business?
Consider Gartner's 5 dimensions of APM again:
1. End-user experience monitoring
The ability to capture end-to-end application performance data is critical, but few of today's apps are straight-line affairs. A web-based storefront, for instance, may present a user with ads or catalog information from sources that are outside of the storefront owner's own infrastructure. A traditional experience monitoring tool might look at how quickly the website interacts with the back-end sales applications. However, the speed of that transaction is only one part -- and a relatively late part -- of the user's experience.
If a problem outside of the vendor's infrastructure is delaying the delivery of third-party catalog content -- and causing the entire web page to load slowly -- the user may never get to the point of clicking the "Place my Order" button.
Today's businesses need APM tools that can monitor all aspects of the user experience. You may have no control over the third-party servers pushing content to your site, but you need to know how those servers affect the end user experience.
It also helps if your APM tools enable you to make changes on the fly when network links or external servers are compromising the overall experience you want to provide your users.
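By way of illustration, here is a minimal sketch of a synthetic check that times each content source on a page separately, so a slow third-party server stands out from a slow back end. The URLs and the threshold are hypothetical placeholders, not real endpoints or recommended KPIs:

```python
# Minimal sketch: time each content source on a page independently, so a
# slow third-party server is visible even when your own back end is fast.
# All URLs and the threshold below are hypothetical placeholders.
import time
import urllib.request

PAGE_RESOURCES = {
    "storefront (first-party)": "https://shop.example.com/",
    "catalog feed (third-party)": "https://catalog.partner-example.com/feed",
    "ad server (third-party)": "https://ads.adnetwork-example.com/slot",
}

SLOW_THRESHOLD_SECONDS = 2.0  # hypothetical load-time KPI

def time_resource(url: str) -> float:
    """Return the seconds taken to fetch one resource end to end."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    return time.monotonic() - start

for name, url in PAGE_RESOURCES.items():
    try:
        elapsed = time_resource(url)
        flag = "SLOW" if elapsed > SLOW_THRESHOLD_SECONDS else "ok"
        print(f"{name}: {elapsed:.2f}s [{flag}]")
    except OSError as error:
        # A failed third-party fetch delays or breaks the whole page.
        print(f"{name}: FAILED ({error})")
```

A check like this reports per-source timings, which is exactly the visibility you need when the slow component is one you do not own.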
2. Run-time application architecture discovery, modeling, and display
The environments in which today's applications execute are increasingly complex. With distributed networks, virtualized machines, web services, service-oriented architectures, and more, discovering, modeling, and displaying all the components that contribute to application performance is a challenge. You need tools that can provide real-time insight into every aspect of your application delivery infrastructure.
For efficiency's sake, IT organizations should be able to visualize this complete infrastructure on the same console that provides insight into the end-user experience. In a world of real-time business, IT teams need to be able to interact with all aspects of an APM solution quickly, efficiently, and effectively.
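To see why the model matters as much as the display, consider a minimal sketch in which a discovered topology is held as a directed graph; the same structure that drives a topology view can then answer questions like "what does this user-facing service depend on?" The component names are invented for illustration, not the output of any particular discovery tool:

```python
# Minimal sketch: a discovered application topology as a directed graph.
# Edges map each component to the components it calls, as discovery might
# report them. All component names are hypothetical.
from collections import defaultdict

dependencies = defaultdict(list, {
    "web-storefront": ["checkout-service", "catalog-service"],
    "checkout-service": ["payments-gateway", "orders-db"],
    "catalog-service": ["catalog-db", "third-party-catalog-feed"],
})

def downstream(component: str) -> set[str]:
    """Return every component reachable from the given one."""
    seen, stack = set(), [component]
    while stack:
        current = stack.pop()
        for child in dependencies[current]:
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

# Everything that can affect the storefront's performance:
print(sorted(downstream("web-storefront")))
```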
3. User-defined transaction profiling
User-defined transaction profiling is not just about tracing events as they occur among components or as they move across the paths discovered in the second dimension. What's important here is to understand whether events are occurring when, where, and as efficiently as you want them to occur.
Real-time IT organizations need APM tools that trace events along an application path in the context of defined KPIs. To achieve that, these tools must interact efficiently with the APM tools you use for end-user experience monitoring and for run-time application architecture discovery, modeling, and display. This ensures efficient information reuse; more importantly, frictionless interaction between these tools minimizes latency in the system. In a real-time, performance-oriented world, latency is to be avoided.
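Here is a minimal sketch of what "tracing in the context of KPIs" means: each recorded hop of one transaction is judged against a per-component latency budget, so the trace tells you not just what happened but whether it happened where and as fast as expected. The components and budgets are hypothetical:

```python
# Minimal sketch: check one traced transaction's hops against per-component
# KPI budgets. Component names and budgets are hypothetical.
KPI_BUDGET_MS = {             # per-hop latency budgets
    "web-storefront": 200,
    "checkout-service": 300,
    "payments-gateway": 500,
}

# One traced transaction: (component, observed latency in ms).
trace = [
    ("web-storefront", 180),
    ("checkout-service", 290),
    ("payments-gateway", 730),
]

for component, latency_ms in trace:
    budget = KPI_BUDGET_MS.get(component)
    if budget is None:
        print(f"{component}: no KPI defined -- unexpected hop?")
    elif latency_ms > budget:
        print(f"{component}: {latency_ms}ms exceeds {budget}ms KPI")
    else:
        print(f"{component}: within KPI")
```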
4. Component deep-dive monitoring in application context
The critical consideration in deep-dive monitoring is how well the tools you use work together. Six best-of-breed component monitoring tools presenting information on six different consoles would be absurd. Relying on a single manager of managers (MOM), though, to create the appearance of an integrated monitoring solution may simply mask the inefficiencies inherent in trying to rely on six different monitoring tools.
If you decide not to use a single tool to provide deep-dive monitoring of your entire business infrastructure, be sure that your SI integrates the different tools you have selected with low-latency, real-time responsiveness in mind. Moreover, be sure that all the information captured by the tools can be used in real time by the other components within the APM suite.
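One common integration pattern, sketched below under invented field names and payloads, is a set of adapters that translate each tool's native output into one shared event shape, so the rest of the APM suite consumes a single real-time stream rather than six console formats:

```python
# Minimal sketch: adapters normalizing two monitoring tools' payloads into
# one shared event shape. Field names and payloads are invented examples.
from dataclasses import dataclass

@dataclass
class MetricEvent:
    source: str       # which monitoring tool produced it
    component: str    # which piece of infrastructure it describes
    metric: str
    value: float

def from_db_monitor(payload: dict) -> MetricEvent:
    return MetricEvent("db-monitor", payload["instance"],
                       payload["stat"], payload["reading"])

def from_jvm_monitor(payload: dict) -> MetricEvent:
    return MetricEvent("jvm-monitor", payload["app"],
                       payload["gauge"], payload["val"])

events = [
    from_db_monitor({"instance": "orders-db", "stat": "query_ms",
                     "reading": 42.0}),
    from_jvm_monitor({"app": "checkout-service", "gauge": "heap_used_mb",
                      "val": 612.0}),
]
for event in events:
    print(event)  # one shape, ready for a shared console or a PMDB
```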
5. Analytics
If your data is modeled correctly -- and the important word here is "if" -- you can use sophisticated analytical tools to discover all kinds of opportunities to improve application performance or the user's experience of your application. The key consideration is the data model itself. All the tools we have just discussed must be able to contribute data easily to a performance management database (PMDB). If they cannot, you add complexity by deploying additional tools to transform one solution's data into a form the others can use -- and that is highly inefficient.
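The payoff of a shared model is that analytics collapses to a simple query. The sketch below stands in for a PMDB with an in-memory list of uniform records and computes a 95th-percentile load time; the values are invented for illustration:

```python
# Minimal sketch: once every tool writes the same record shape into a PMDB
# (here an in-memory list), analytics becomes a one-liner query.
# All sample values are invented.
from statistics import quantiles

pmdb = [  # (component, metric, value) records contributed by many tools
    ("web-storefront", "page_load_ms", v)
    for v in (420, 460, 510, 480, 2900, 450, 3100, 430)
]

def p95(records, component, metric):
    values = [v for c, m, v in records if c == component and m == metric]
    return quantiles(values, n=20)[-1]  # 95th percentile

print(f"p95 page load: {p95(pmdb, 'web-storefront', 'page_load_ms'):.0f}ms")
```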
Ultimately, it is important to consider the world in which your applications exist. Business is increasingly moving to a real-time model. It requires real-time responsiveness. Batch-oriented APM tools that are designed to support a break-fix mentality and aimed at infrastructure running exclusively on a corporate network over which IT has complete control -- these won't help you in the world we live in.
Your APM tools must provide real-time, transaction-oriented support. They must contribute to real-time responsiveness, driven by the needs of the business and focused on the quality of the user experience of your applications -- both inside and beyond the firewall.
About Raj Sabhlok and Suvish Viswanathan
Raj Sabhlok is the President of ManageEngine. Suvish Viswanathan is an APM Research Analyst at ManageEngine. ManageEngine is a division of Zoho Corp. and maker of a globally renowned suite of cost-effective network, systems, security, and applications management software solutions.