
Application Performance Monitoring Cheat Sheet

Phee-Lip
BHP

A brief introduction to Application Performance Monitoring (APM), broken down into a few key points:

1. It is different from conventional infrastructure monitoring, which primarily captures and reports hardware performance such as CPU and memory; APM also covers more modern infrastructure technology, such as containers.

2. APM tells you how the application, which sits on top of the infrastructure, is performing by going deep into the code level. It can monitor microservices and applications written in different programming languages (see the sketch after this list).

3. More recently (actually, not so recently), it has expanded to include user experience monitoring, which covers capturing the user journey, reporting errors, and measuring the performance of user-triggered activities (such as a click on a page), as user behavior and experience become ever more essential.

4. With the huge amount of data being collected, it is only natural that APM has become a big data platform for companies to gain insights into their operations and business. Hence the expansion into analytics!
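
To make point 2 a little more concrete, here is a minimal sketch of code-level instrumentation using the OpenTelemetry Python SDK (the opentelemetry-api and opentelemetry-sdk packages). The service name, span name, and order attribute are illustrative assumptions, not details from this article; a commercial APM agent typically performs this instrumentation automatically and exports the traces to its own backend rather than to the console.

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Wire up a tracer that prints finished spans to the console; an APM agent
# would export them to the vendor's backend instead.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout-service")  # hypothetical service name

def place_order(order_id: str) -> None:
    # Each business operation becomes a span, which is how an APM backend
    # shows code-level timing and errors for every request.
    with tracer.start_as_current_span("place_order") as span:
        span.set_attribute("order.id", order_id)
        # Calls to downstream microservices (inventory, payment, ...) would
        # each get their own child spans, producing an end-to-end trace.

place_order("A-1001")

Because OpenTelemetry has SDKs for many languages, the same pattern extends across polyglot microservices, which is where APM's cross-language, cross-service view comes from.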

A few important lessons which I have learned over the years:

1. Many organizations are still "stuck" at reporting service availability. This requires a fundamental mindset change, as the spotlight is now on application performance and service quality. These are critical aspects of digitization that no company can afford to neglect.

2. APM can pinpoint problems, but it can't fix them for you. At least not now; perhaps later with AI. It is not a silver bullet, and it underlines a very important point: organizations MUST HAVE system/domain expertise to maintain and improve the systems that are most critical to their business!

3. Not everything is created equal, so you don't need a full-fledged APM tool for every system. Focus on the most critical systems. That will not only save you money but also let you give undivided attention to the systems you care most deeply about.

4. It is hard to find one APM tool that is best in every aspect of its capabilities. You just have to decide which elements are most crucial to your success and find the best solutions for them. You may end up with a couple of tools, so it is worth looking at how to gain a cohesive view across them to form your master service performance dashboard. Some form of integration may be required (a sketch of one such integration follows this list).

5. Many organizations have a central monitoring team keeping eyes on dashboards 24x7. This is old school and ineffective. Natural language processing (NLP) is the future, with exception-based voice notifications and intelligent contextual queries that give a deep understanding of system health and performance, anytime, anywhere.
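
As a rough illustration of lessons 4 and 5, the sketch below assumes two hypothetical monitoring tools whose latest figures have already been pulled into plain dictionaries (real tools would be queried through their own APIs). It merges them into a single per-service view, the seed of a master service performance dashboard, and then raises notifications only on threshold breaches instead of relying on someone watching a screen 24x7. The service names and thresholds are made up for the example.

# Hypothetical snapshots from two different tools; in practice these would
# come from each tool's API or export feed.
tool_a_latency_ms = {"checkout": 480, "search": 120}      # e.g. an APM tool
tool_b_error_rate = {"checkout": 0.021, "search": 0.001}  # e.g. a log analytics tool

# Illustrative service-level thresholds for exception-based alerting.
THRESHOLDS = {"latency_ms": 400, "error_rate": 0.01}

def merged_view():
    # Combine both tools into one record per service: the cohesive view.
    services = set(tool_a_latency_ms) | set(tool_b_error_rate)
    return {
        s: {
            "latency_ms": tool_a_latency_ms.get(s),
            "error_rate": tool_b_error_rate.get(s),
        }
        for s in services
    }

def exceptions(view):
    # Yield only the services that breach a threshold; healthy services
    # generate no noise at all.
    for service, metrics in view.items():
        breaches = [
            f"{name}={value}"
            for name, value in metrics.items()
            if value is not None and value > THRESHOLDS[name]
        ]
        if breaches:
            yield service, breaches

for service, breaches in exceptions(merged_view()):
    # In practice this would feed a chat, paging, or voice notification channel.
    print(f"ALERT {service}: {', '.join(breaches)}")

The point is not the code itself but the shape of the flow: pull from whichever tools you have, normalize into one view per service, and only interrupt a human when something crosses a line.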

APM is a complex, multi-faceted discipline. It will continue to evolve, branching into other domains such as service automation (self-healing), service management and deep learning. Gartner has coined the term AIOps for these areas, which are heavily anchored on AI. Definitely a space to watch going forward!

Phee-Lip is Principal, APM Practice Lead, at BHP
