
Q&A: HP Talks About APM - Part One

Pete Goldin
APMdigest

In Part One of APMdigest's exclusive interview, Shane Pearson, Vice President, Product Marketing for HP Software, provides a unique insider's view into HP BSM and HP APM, and outlines the technology behind the tools.

APM: Describe HP's BSM product and how it relates to APM.

SP: BSM provides a comprehensive solution for managing business services across complex, dynamic environments, including traditional datacenters, virtual environments, mobile, and private, public and hybrid clouds. BSM monitors performance, availability and faults across the application, system, server, virtualization and network layers. It combines application and infrastructure information into a unified run-time service model, which uniquely keeps an up-to-date picture that reflects the dynamic, changing nature of your cloud-based services. HP BSM is composed of the following four main pillars:

Application Performance Management (APM) is one of the product suites within BSM, mainly focused on end-user experience (synthetic and real user monitoring), transaction monitoring, and deep-dive diagnostics of composite and packaged applications, going all the way from the end user into back-end systems such as mainframes.

Systems Management is mainly focused on performance and availability monitoring of servers, infrastructure and the virtualization stack. Here we have the depth and breadth of coverage to monitor any kind of server or VM.

Automated Network Management is focused on fault, availability, performance, change and configuration management of a broad array of network devices. The high points of HP's Automated Network Management Suite are its modularity, its ability to monitor service level compliance and its automation of many of a network engineer's daily tasks - i.e., it's scalable, it helps track actual vs. expected performance and it saves time.

Service Intelligence is a new suite of products we brought to market recently, focused on predictive analytics that help IT move from reactive to proactive to predictive. It also contains solutions for real-time capacity management of virtual and physical environments, and an enterprise reporting solution that provides cross-domain reports correlating end-user experience with the underlying infrastructure. The entire Service Intelligence suite is built on top of our industry-leading run-time service model.

APM: What are the main components that should be included in a run-time service model?

SP: The run-time service model provides an end-to-end view of the components that make up the business service. It should include the configuration items and key performance indicators for a business service, the end-user experience of the service, the application, and the dependent infrastructure of the service.
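
To make the idea concrete, here is a minimal sketch in Python of what such a model might track. This is an illustration of the concept only, not HP's actual schema; all class and field names are hypothetical.

    from dataclasses import dataclass, field

    # Hypothetical sketch of a run-time service model -- not HP's schema.
    # It tracks the components named above: configuration items, KPIs,
    # end-user experience, and dependent infrastructure (as a graph).

    @dataclass
    class ConfigurationItem:
        ci_id: str                # unique ID for the component
        ci_type: str              # e.g. "web_server", "database", "vm"
        depends_on: list = field(default_factory=list)  # downstream CI IDs

    @dataclass
    class BusinessService:
        name: str
        cis: dict = field(default_factory=dict)   # ci_id -> ConfigurationItem
        kpis: dict = field(default_factory=dict)  # KPI name -> latest value
        end_user_experience: dict = field(default_factory=dict)

        def add_ci(self, ci):
            self.cis[ci.ci_id] = ci

        def dependencies_of(self, ci_id):
            """Walk the graph to find everything a component relies on."""
            seen, stack = [], list(self.cis[ci_id].depends_on)
            while stack:
                dep = stack.pop()
                if dep not in seen:
                    seen.append(dep)
                    stack.extend(self.cis[dep].depends_on)
            return seen

    # Usage: a web tier that depends on a database VM.
    store = BusinessService(name="online-store")
    store.add_ci(ConfigurationItem("db-vm-1", "vm"))
    store.add_ci(ConfigurationItem("web-1", "web_server", ["db-vm-1"]))
    store.kpis["availability_pct"] = 99.95
    store.end_user_experience["avg_response_s"] = 1.2
    print(store.dependencies_of("web-1"))  # ['db-vm-1']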

APM: Why does today's dynamic IT environment require a run-time service model?

SP: Today's dynamic environments are constantly changing, and to have an up-to-date view of the map of this dynamic IT real estate, you need a model that keeps up with the change. When IT components are added or moved, you need that reflected in your service model. Having an up-to-date model allows for faster problem management and better decision making, since you have the latest information at your fingertips.

APM: How do you keep the service model up to date in a constantly changing hybrid environment?

SP: The run-time service model is updated on a near-real-time basis whenever a monitored component or its context changes in any way. The resulting dynamic, accurate, and up-to-date view of how infrastructure components relate to one another speeds diagnosis and eases the burden of maintaining complex static rules and mappings, freeing expert staff to work on more strategic projects.
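
As a rough illustration of the approach (again hypothetical, not HP's implementation), the model can be driven by discovery events rather than hand-edited mappings:

    # Hypothetical sketch: a discovery feed emits change events and the
    # service model is updated event-by-event, in near real time, instead
    # of through hand-maintained static mappings.

    model = {
        "web-1":   {"type": "web_server", "depends_on": ["db-vm-1"]},
        "db-vm-1": {"type": "vm", "depends_on": []},
    }

    def apply_change_event(model, event):
        """Apply one discovery event to the service model."""
        if event["action"] == "add":
            model[event["ci_id"]] = {"type": event["ci_type"],
                                     "depends_on": event.get("depends_on", [])}
        elif event["action"] == "remove":
            model.pop(event["ci_id"], None)
            for ci in model.values():            # drop dangling edges
                if event["ci_id"] in ci["depends_on"]:
                    ci["depends_on"].remove(event["ci_id"])

    # A database VM migrates: the old CI disappears, a new one appears,
    # and the web tier is repointed -- with no manual mapping edits.
    apply_change_event(model, {"action": "remove", "ci_id": "db-vm-1"})
    apply_change_event(model, {"action": "add", "ci_id": "db-vm-2",
                               "ci_type": "vm"})
    model["web-1"]["depends_on"].append("db-vm-2")
    print(model)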

APM: What is run-book automation and why is it important to APM?

SP: It is all about making IT better, and part of that is removing manual processes by automating and simplifying tasks. Run-book automation allows us to automatically open incidents in the help desk tool, enrich events with key information and resolve problems automatically, which improves IT efficiency and removes the human error that sometimes occurs during change.
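
A minimal sketch of that pattern, with hypothetical helper names (this is illustrative only, not an HP product API):

    # Hypothetical run-book automation sketch: enrich a monitoring event,
    # attempt a scripted remediation, and open a help-desk incident.

    def restart_web_server(event):
        """Stand-in remediation; a real run book would restart the daemon."""
        print("restarting web server on", event["host"])
        return True  # pretend the restart fixed the problem

    RUNBOOKS = {"web_server_down": restart_web_server}

    def open_incident(event):
        """Stand-in for a help-desk API call that creates a ticket."""
        print("incident:", event["host"], event["symptom"], event["status"])

    def handle_event(event):
        # 1. Enrich the event with context an operator would look up by
        #    hand (in practice, from the run-time service model).
        event["service"] = "online-store"
        # 2. Attempt the scripted remediation for this symptom, if any.
        runbook = RUNBOOKS.get(event["symptom"])
        event["status"] = ("auto-resolved" if runbook and runbook(event)
                           else "escalated")
        # 3. Record the incident either way, removing a manual,
        #    error-prone step.
        open_incident(event)

    handle_event({"host": "web-1", "symptom": "web_server_down"})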

APM: What does "a 360 degree view of application performance and availability" mean, in reality?

SP: HP's APM solution looks at end-user experience, transactions and detailed performance information, and relates this to the performance and availability of the dependent infrastructure. We combine all of this information into a single view, or "360 degree view," of the performance and availability of your application.

APM: How does HP monitor the end user experience?

SP: When we think about monitoring end-user or customer experience, we typically think of two different ways of capturing the application's performance and availability information.

One method is a synthetic approach, which allows you to check the health of the application without relying on users to generate traffic. This method allows you to check the application's performance and availability from different points of presence. It also allows you to establish a baseline of application performance for improved application monitoring.
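
A generic sketch of a synthetic probe (illustrative only, not HP's Business Process Monitor): a scripted check is replayed on a schedule, so availability and latency are measured even when no real users are on the application.

    import time
    import urllib.request

    def synthetic_probe(url, timeout=10.0):
        """Run one scripted check; record availability and response time."""
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                available = 200 <= resp.status < 400
        except OSError:
            available = False
        return {"url": url, "available": available,
                "latency_s": round(time.monotonic() - start, 3)}

    # Replaying the same probe at a fixed interval from each point of
    # presence builds the performance baseline mentioned above; later
    # samples are compared against it to spot degradation.
    samples = [synthetic_probe("https://example.com/") for _ in range(3)]
    baseline = sum(s["latency_s"] for s in samples) / len(samples)
    print(samples)
    print("baseline latency:", round(baseline, 3), "s")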

The second method is capturing real users' session data to determine the application's performance and availability. This allows you to understand how customers are actually using your applications, and provides detailed information about each user's session, which aids isolation and diagnosis. There may also be instances where you cannot use synthetic transactions to capture performance information.
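
By contrast, a real-user approach is passive. A simplified sketch (generic, not HP's Real User Monitor) might aggregate already-captured traffic records per session:

    from collections import defaultdict

    # Hypothetical captured records from real traffic:
    # (session_id, page, response_time_s, http_status)
    records = [
        ("sess-1", "/checkout", 0.42, 200),
        ("sess-1", "/confirm",  2.95, 200),
        ("sess-2", "/checkout", 0.38, 500),
    ]

    sessions = defaultdict(list)
    for session_id, page, response_time, status in records:
        sessions[session_id].append((page, response_time, status))

    # Per-session detail like this is what aids isolation and diagnosis:
    # you can see exactly which page, in which user's session, was slow
    # or failed -- something a synthetic script alone cannot show you.
    for sid, hits in sessions.items():
        slowest = max(hits, key=lambda h: h[1])
        errors = [h for h in hits if h[2] >= 500]
        print(sid, "slowest:", slowest, "errors:", errors)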

APM: So which method do you use to effectively monitor the customer's experience?

SP: The answer is both; each provides unique information about the customer experience.

By combining the real-user visibility available within the Real User Monitor (RUM) product with the consistency and proactive nature of synthetic transactions available within Business Process Monitor (BPM), you get complete coverage of your customer experience monitoring.

APM: What new APM capabilities will HP be introducing in 2012 or beyond?

SP: I cannot talk about futures, but I can talk about market trends in 2012. Some of the big trends will be managing complexity, such as mobile applications and cloud. Other areas include analytics, and offerings that automate tasks and help IT reduce costs.

Click here to read Part Two of APMdigest's interview with HP's Shane Pearson

Click here to read Part Three of APMdigest's interview with HP's Shane Pearson
