Forecasting Issues in Your Data Center
January 12, 2012

Scott Edwards

If you could forecast potential issues in your data center, what would it mean to you? Would advance warning technology be useful? If you could be notified early of an impending problem, would it benefit your business?

Predictive analytics is very real. One area where it is being deployed is weather forecasting: meteorologists and oceanographers are using analytics to monitor and predict major tropical storms, cyclones, and hurricanes.

In this blog post, I want to look at how meteorologists use advanced analytics to forecast potential storms, which enables them to act quickly to prevent losses. In the same way, IT can use analytics to forecast problems and prevent outages.

Hurricane and Storm Tracking Analogy

Years ago, weather forecasters relied on static models and sparse observations when trying to predict tropical storms and hurricanes. They would take all their temperature and barometric pressure readings, look out the window to observe the sky and water conditions, and then consult a set of charts and almanacs.

A day may have begun innocently enough with bright blue skies; then, suddenly, it became overcast and windy. Forecasters had no way to know if the looming storm would be severe like a hurricane, or just a small tempest that would shortly pass. These limited forecasts left little time for preparation before a hurricane struck. Without advance notification, commerce and the way of life for many were heavily impacted.

Trying to identify potential storms of performance degradation or outages in a data center used to be the same way. IT managers would look at a snapshot of their environment, consult a static model, and take their best guess about what things would be like in the future. Sometimes the predictions were accurate; other times they were blindsided by a problem that crippled their service.

Times have changed.

Today weather forecasters are able to provide advance notification of tropical storms and even predict their path and intensity, which in turn helps keep communities and businesses from being totally unprepared for the impact of a storm.

Meteorologists now use a vast network of ground- and ocean-based sensors, satellites, and radar technology to collect data such as air pressure and humidity, ocean temperature and height, ocean currents, and wind speed in real time. These networked data collection tools are perpetually taking the pulse of the planet and feeding forecasters critical data.

This data is captured in computer forecast models, which analyze the data and calculate likely future weather behavior. These models also look at seasonality, tide, and trend data to predict the potential of hurricanes. When an anomaly, or change in the weather pattern that doesn’t align with “normal”, appears in the ocean, the models correlate information and begin sending alerts to the meteorologists.

Advanced data collection tools are also available within IT today. Organizations often collect millions of data points per hour, but all of these metrics accumulate into an enormous sea of data. IT managers are frequently overwhelmed by the ensuing wave of metrics, log files, event consoles, manually managed thresholds, and red gauges.

What has been lacking is an analytics tool set with the automated intelligence to correlate these disparate metrics from both an application and a topology perspective, helping IT predict, or forecast, potential problems on the horizon: a warning that could help them take the necessary action to remediate an issue before it cripples a mission-critical service.

Forecasting IT Issues

This same predictive analytics capability is essential in managing IT.

Making sure you have complete visibility into the health of your business services, so that you can adapt, and even survive, in today's cloud and virtualized IT environments, isn't just a "nice-to-have." It is mandatory.

Managing dynamic infrastructures and applications will take more than just reacting to business service problems when they occur. You need to be able to anticipate issues and take action to prevent service degradation or outages. You need better visibility into how your applications and business services are correlated with your dynamic infrastructure, so you can trace irregular behavior to topology changes. And you need an easier way of distinguishing acceptable thresholds from real anomalies.
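To make the idea of learned thresholds concrete, here is a minimal sketch of the simplest form of baseline-driven anomaly detection: learn "normal" from historical samples, then flag values that stray too far from it. The metric values and the three-sigma cutoff are illustrative assumptions, not part of any specific product.

```python
import statistics

def learn_baseline(samples):
    """Learn 'normal' behavior (mean and spread) from historical samples."""
    return statistics.fmean(samples), statistics.stdev(samples)

def is_anomaly(value, mean, stdev, k=3.0):
    """Flag values more than k standard deviations from the baseline."""
    return abs(value - mean) > k * stdev

# Hypothetical response-time samples (ms) from a healthy service
history = [102, 98, 105, 110, 97, 101, 99, 104, 103, 100]
mean, stdev = learn_baseline(history)

print(is_anomaly(101, mean, stdev))  # within the learned normal range
print(is_anomaly(160, mean, stdev))  # well outside the learned range
```

The point of the sketch is the contrast with a static threshold: the cutoff here follows the data, so a service with naturally noisy metrics gets a wider band than a steady one, reducing false alarms.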

Much like the advanced analytics used by today’s hurricane forecasters, Predictive Analytics offers a smarter way to manage IT so you can anticipate IT problems before they occur.

Analytics Built Upon a Dynamic Service Model

However, not just any predictive analytics tool set will do the job. When managing a cloud or dynamic environment, you need a predictive analytics tool set built on top of a real-time, dynamic service model (a service map that provides a view of how the applications work with the underlying infrastructure). Why? So you can correlate metric abnormalities with topology. This is critical when monitoring your services to forecast potential problems, for four essential reasons:

1. You need to identify the lead suspects for an issue to determine root cause and restore service, the focus being on reducing mean time to repair.

2. You need to determine if the current anomaly you are seeing is a result of topology changes.

3. You need a way to compare today’s anomaly to those seen in the past to leverage the knowledge used to resolve the issue and prevent similar issues in the future.

4. You need to understand the business impact of each issue and prioritize resolution.

With a dynamic service model as a foundation, you can now use analytics to:

- collect data

- learn normal behavior

- identify abnormalities

- correlate topology information

- compare current anomalies to those from the past to reduce noise and speed mean time to repair

- be alerted to potential issues with events that provide additional context to how to solve the potential problem

- automate the remediation process by fusing analytics with an automated closed loop incident process (CLIP)
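The steps above hinge on one capability: joining a metric anomaly to the service model so you can see which services are impacted and whether a recent topology change is a likely suspect. Here is a minimal sketch of that correlation, using an entirely hypothetical service map and change log; real products maintain these maps automatically through discovery.

```python
# Toy service model: each business service maps to the
# infrastructure nodes it runs on (hypothetical names).
SERVICE_MODEL = {
    "checkout-app": ["web-vm-1", "db-vm-2"],
    "search-app": ["web-vm-3"],
}

# Hypothetical recent topology changes, newest first.
TOPOLOGY_CHANGES = [
    {"node": "db-vm-2", "change": "migrated to new host"},
]

def correlate(anomalous_node):
    """Given a node showing anomalous metrics, return the business
    services it supports and any recent topology changes on it --
    the 'lead suspects' for root-cause analysis."""
    services = [svc for svc, nodes in SERVICE_MODEL.items()
                if anomalous_node in nodes]
    changes = [c for c in TOPOLOGY_CHANGES if c["node"] == anomalous_node]
    return {"impacted_services": services, "recent_changes": changes}

result = correlate("db-vm-2")
print(result["impacted_services"])  # ['checkout-app']
print(result["recent_changes"])     # the migration is a lead suspect
```

Because the model is dynamic, the same lookup stays accurate as workloads move: when a VM migrates, the map updates, and the next anomaly is correlated against the current topology rather than a stale snapshot.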


Both meteorologists and IT can now use dynamic models with sensors to get updates in real time. The data collected goes into a dynamic model and you get an intelligent forecast based on up-to-date information and the latest environmental factors. The forecasts change as the environment changes. This is the elegance of a dynamic model.

Scott Edwards is Product Marketing Manager for Service Intelligence, HP Software.
