
Q&A: HP Talks About APM - Part Three

Pete Goldin
APMdigest

In Part Three of APMdigest's exclusive interview, Shane Pearson, Vice President, Product Marketing for HP Software, discusses predictive analytics and its importance to APM.

Click here to start with Part One of APMdigest's interview with HP's Shane Pearson

Click here to start with Part Two of APMdigest's interview with HP's Shane Pearson

APM: Why is predictive analytics such a hot topic in APM right now?

SP: As software vendors in this space, we have all done a good job of collecting data. We can monitor just about everything. But IT operators are overwhelmed by all the data being collected. What data is important? What data should they pay the most attention to? How can they make the best decisions with all this data? And with cloud, mobility, and virtualization, the complexity of managing data has skyrocketed.

IDC recently asserted that predictive analytics will go mainstream within IT in 2012, and here’s why: it points to operational complexity, virtualization, and the need for “optimization tools that can quickly discover, filter, correlate, remediate, and ideally prevent performance and availability slowdowns, outages, and other service-interrupting incidents” as the drivers of this growth. “IDC expects powerful performance management tools, based on sophisticated statistical analysis and modeling techniques, to emerge from niche status and become a recognized mainstream technology during the coming year.”

APM: In APMdigest's recent Q&A with Forrester's JP Garbani, he mentioned that HP has "recently made a lot of progress" in predictive analytics. What are the latest HP advancements in this area?

SP: In December 2011, HP released a predictive analytics tool called Service Health Analyzer (SHA), part of the Service Intelligence pillar within the BSM family. SHA is a run-time predictive analytics tool that gives organizations a more intelligent way to manage IT by analyzing abnormal service behavior and alerting IT managers to real service degradation before it impacts the business. Because it is built upon the Run-time Service Model, it can correlate the metrics that are behaving abnormally with the underlying topology. This information, along with advanced analytics and sophisticated algorithms, enables SHA to forecast future problems and prioritize those issues based upon business impact.
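The topology-aware prioritization Pearson describes can be sketched in a few lines. This is an illustrative approximation only, not HP's actual SHA implementation: given a hypothetical service model mapping configuration items (CIs) to their dependencies, walk upward from each anomalous CI to find the business services that rely on it, and rank anomalies by the total business weight they can impact.

```python
from collections import defaultdict, deque

def impacted_services(depends_on, business_weight, anomalous_cis):
    """Walk the dependency graph upward from each anomalous CI and
    accumulate the business weight of every node that relies on it.
    depends_on: {node: [nodes it depends on]}; returns anomalies
    sorted by descending business impact."""
    # Invert the edges: child -> parents that depend on it
    dependents = defaultdict(list)
    for parent, children in depends_on.items():
        for child in children:
            dependents[child].append(parent)

    scores = {}
    for ci in anomalous_cis:
        seen, queue, score = {ci}, deque([ci]), 0.0
        while queue:
            node = queue.popleft()
            score += business_weight.get(node, 0.0)
            for parent in dependents[node]:
                if parent not in seen:
                    seen.add(parent)
                    queue.append(parent)
        scores[ci] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy model: "checkout" depends on an app server, which depends on a
# database host that "reporting" also uses. All names are hypothetical.
model = {"checkout": ["app01"], "app01": ["db01"], "reporting": ["db01"]}
weights = {"checkout": 10.0, "reporting": 2.0}
ranked = impacted_services(model, weights, ["db01", "app01"])
# db01 touches both services, so it outranks app01
```

In this toy run the database anomaly is ranked first because it can impact both business services, which mirrors the "prioritize by business impact" idea above.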

In addition, SHA analyzes historical data to automatically create real thresholds. It then combines the hundreds of baseline breaches associated with a single service into one event. The event generated by SHA includes a list of the CIs involved in the anomaly, so you can take action to fix the problem before the service is impaired, with automated event-to-ticket closure handling remediation.
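The baselining-and-aggregation idea can also be sketched. Again, this is an assumed, simplified version of the technique, not HP's algorithm: learn a normal range per metric from history, flag samples outside it as baseline breaches, then fold every breach belonging to one service into a single event listing the CIs involved.

```python
import statistics

def learn_baseline(history, k=3.0):
    """Return a (low, high) normal range as mean +/- k standard deviations
    computed from historical samples."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    return mean - k * stdev, mean + k * stdev

def service_event(service, samples, baselines):
    """samples: {(ci, metric): current value}. Combine every baseline
    breach for the service into one event carrying the CIs involved."""
    breaching = []
    for (ci, metric), value in samples.items():
        low, high = baselines[(ci, metric)]
        if not (low <= value <= high):
            breaching.append(ci)
    if not breaching:
        return None
    return {"service": service, "cis": sorted(set(breaching))}

# Hypothetical latency history hovering around 100 ms
history = [100, 102, 98, 101, 99, 100, 103, 97]
bl = {("app01", "latency_ms"): learn_baseline(history),
      ("db01", "latency_ms"): learn_baseline(history)}
event = service_event("checkout",
                      {("app01", "latency_ms"): 180,   # breach
                       ("db01", "latency_ms"): 100},   # normal
                      bl)
```

Only the CI that actually breached its learned range appears in the single consolidated event, rather than one alert per raw threshold violation.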

With SHA, you can:

1. Anticipate real IT incidents ... before they occur

2. Prevent business impact

3. Remediate events by fusing analytics & automation

APM: Besides predictive analytics, are any other analytics needed to improve APM?

SP: The dynamic relationships in a complex IT environment mean that correlating and mapping physical, virtual, and cloud-based elements is beyond the reach of human judgment and spreadsheets. Analytics are required to provide the intelligence that powers agility and cost savings.

While virtualization and cloud deliver more agility to business owners and the ability to scale capacity with changing demand, these technologies add a layer of complexity that makes managing the infrastructure much more difficult. You need to understand how changes impact your applications and services.

Having visibility and insight into the performance of your applications, and knowing how those applications or services are tied to the underlying infrastructure, is absolutely crucial in today’s ever-changing, virtualized data centers. When issues occur, you need to understand what happened, why it happened, and how to fix them. Better yet, you need to be proactive and forecast issues.

This is where analytics can help.

APM: What analytics solutions does HP offer?

SP: HP solves the issues inherent in a dynamic, virtual environment with its Service Intelligence portfolio. HP Service Intelligence uses the information gathered from the Run-time Service Model (RTSM) to understand what happened at the business service level, and then analyzes that data to create actionable intelligence. With HP’s Service Intelligence, you’ll have the analytics to help you 1) analyze the past, 2) optimize the present, and 3) anticipate the future.

The products within HP’s Service Intelligence portfolio give IT executives and operations teams the ability to use the real-time service topology stored in the RTSM to:

- Anticipate service issues, prevent impact, and remediate quickly (Service Health Analyzer)

- Visualize, optimize, and plan performance in virtualized and cloud environments (Service Health Optimizer)

- Understand issues from a services view by leveraging cross-domain reporting (Service Health Reporter)

- Align IT to the business by tracking SLAs, KPIs, and business health (Service Level Management)

Each of these products provides the analytics tool set to help you understand how the availability and performance of your applications are tied to the underlying availability and performance of your infrastructure. Having this visibility can simplify the complexities of managing your applications and overall business services.

Click here to read Part One of APMdigest's interview with HP's Shane Pearson

Click here to read Part Two of APMdigest's interview with HP's Shane Pearson

ABOUT Shane Pearson

Shane Pearson, Vice President, Product Marketing for HP Software, is a product marketing professional with experience as a general manager and technologist at startups and Fortune 500 companies. In his current role, Pearson is responsible for managing the Operations Management, Cloud and SaaS product portfolios.

Prior to his role at HP, Pearson was Sr. VP and GM of NetWeaver Product Group at SAP. During his tenure at SAP, he was responsible for managing the worldwide NetWeaver business group including working across business operations, marketing, product management, and development. Pearson was also previously VP of Products at Gnip, a real-time social media data delivery provider, where he coordinated product development, marketing and sales. Additionally, Pearson served in various product management and marketing roles at BEA Systems, a provider of enterprise application infrastructure solutions acquired by Oracle in 2008. He holds a bachelor’s degree in industrial management and a master’s degree in management with concentrations in marketing and finance.

