IDC Prediction: Predictive Analytics Goes Mainstream in 2012

Operational complexity in virtualized, scale-out, and cloud environments, together with composite Web-based applications, will drive demand for automated analytic performance management and optimization tools that can quickly discover, filter, correlate, and remediate (and ideally prevent) performance and availability slowdowns, outages, and other service-interrupting incidents.

The need to rapidly sort through tens of thousands, or even hundreds of thousands, of monitored variables, alerts, and events to discover problems and pinpoint root causes far exceeds the capabilities of manual methods.

To meet this growing need, IDC expects powerful performance management tools, based on sophisticated statistical analysis and modeling techniques, to emerge from niche status and become a recognized mainstream technology during the coming year. These analytics will be particularly important in driving increased demand for application performance management (APM) and end-user experience monitoring tools that provide a real-time, end-to-end view of the health and business impact of the total environment.

Typically, IT infrastructure devices, applications, and IT-based business processes are monitored to track how they are performing. Monitored metrics are tested against thresholds (often adaptive ones) to determine whether they exceed defined limits or service objectives.
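As a rough sketch of how an adaptive threshold differs from a fixed limit, the Python snippet below derives the limit from each metric's own recent history. The function name, window size, and sensitivity factor are invented for illustration and are not drawn from any particular monitoring product:

```python
import statistics
from collections import deque

def adaptive_threshold_alerts(samples, window=60, k=3.0):
    """Yield (timestamp, value, threshold) whenever a sample exceeds
    a rolling baseline of mean + k standard deviations."""
    baseline = deque(maxlen=window)  # the last `window` observations
    for ts, value in samples:
        if len(baseline) == window:
            threshold = statistics.fmean(baseline) + k * statistics.pstdev(baseline)
            if value > threshold:
                yield ts, value, threshold
        baseline.append(value)
```

The point of the adaptive form is that an alert fires only when a value is anomalous relative to the metric's own recent behavior, so the same logic can watch thousands of metrics without per-metric tuning.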

With the proliferation of scale-out architectures, virtual machines, and public and private clouds for application deployment, the number of monitored elements grows rapidly, often producing a large stream of data with many variables that must be quickly scanned and analyzed to discover problems and find root causes. Multivariate statistical analysis and modeling are long-established mathematical techniques for analyzing large volumes of data, discovering meaningful relationships between variables, and building formulas that can be used to predict how related variables will behave in the future.
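To make the idea concrete, here is a minimal multivariate-regression sketch in Python with NumPy. The metrics and figures are invented for illustration and stand in for whatever variables a monitoring system actually collects:

```python
import numpy as np

# Hypothetical observations: columns are CPU %, queue depth, request rate;
# y is the measured response time (ms) we want a predictive formula for.
X = np.array([[35.0,  2.0, 120.0],
              [52.0,  4.0, 210.0],
              [68.0,  7.0, 340.0],
              [81.0, 12.0, 450.0]])
y = np.array([110.0, 180.0, 310.0, 520.0])

# Fit y = b0 + b1*cpu + b2*queue + b3*rate by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# The fitted formula can then estimate behavior under a projected load.
projected = np.array([1.0, 75.0, 9.0, 400.0])  # leading 1.0 is the intercept term
print("predicted response time (ms):", projected @ coef)
```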

What is emerging is the wider application of this methodology, often called predictive analytics, to discovering, predicting, analyzing, and even preventing IT performance and availability problems. Key use cases include application performance management, virtualization management, and cloud management.
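A simple version of the preventive use case is forecasting resource exhaustion before it happens. The sketch below fits a linear trend to a metric's history and estimates the time remaining until a limit is breached; the data and limit are hypothetical, and production tools would use far richer models:

```python
import numpy as np

def hours_until_breach(history, limit):
    """Fit a linear trend to hourly readings of a metric and estimate
    how many hours remain until it crosses `limit`.
    Returns None if the trend is flat or falling."""
    t = np.arange(len(history), dtype=float)
    slope, intercept = np.polyfit(t, np.asarray(history, dtype=float), deg=1)
    if slope <= 0:
        return None
    current = intercept + slope * t[-1]
    return (limit - current) / slope

# Hypothetical disk-usage history (% full, one reading per hour).
usage = [71.0, 71.8, 72.9, 73.5, 74.6, 75.2]
print("hours until 90% full:", hours_until_breach(usage, limit=90.0))
```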

Given the challenges of managing today's large, complex, dynamic environments, IDC expects wider distribution and use of this technology from a growing number of vendors during the coming year.

This article originally appeared in "Worldwide System Infrastructure Software 2012 Top 10 Predictions," IDC Document #231593, December 2011, on www.idc.com.

About Tim Grieser

Tim Grieser is Program Vice President, Enterprise System Management Software, at IDC. He has an extensive background in system management software technology, including the use of predictive models for performance management and capacity planning.

Tim Grieser's email

Twitter: @TimGrieser
