Turn Your Big Dumb Data Into Small Smart Data
June 14, 2012

Charley Rich
Nastel Technologies


Much has been said about both the challenges and opportunities of Big Data. As in most things, it is really nothing new. The idea of an overwhelming amount of data and no way to make rapid sense of it has been around since the early days of IT ... actually, maybe longer. If we go back to the Library of Alexandria in ancient Egypt, the problem of too many scrolls and not enough time to read them, let alone duplicate them for backup, may have been the actual beginning. As you probably know, there was a fire, and much of the data in those scrolls was lost forever. But enough on the need to have a good backup.

Today, lurking in the background, we have a variation on Moore’s Law at play: the faster our technology gets, microprocessors in particular, the faster we can create data, and thus the more of it we have. Whether this data is useful or not is another question. Who knows? There is already too much to consume. An interesting side note is that we also get better each year at storing the data; just take a look at the advances in holographic storage technology.

An extreme amount of data is created on a daily basis. And we in the IT industry actually exacerbate the problem by creating our own Big Data. Our data is data about data, describing what we call events. This seems to imply we are very clever pack rats, acquiring our events and storing them away for winter.

The number of alerts annually generated by event and performance systems has increased, on average, by 300 percent among Global 2000 enterprises, according to industry analyst firm, Gartner. In some cases, monitoring systems generate millions of alerts per day.

But the real purpose isn’t the acquisition, storage and retrieval. It’s the analysis of the data and the ability to act on it. We wish to rely on this information for competitive advantage, but in reality we use only a small portion of what we generate. There may even be a Darwin-like effect here: the organisms that most quickly make sense of the Big Data they acquire, analyze it and take action are the ones with the greatest competitive advantage in their niche, and thus survive, while the others die out.

Having that actionable information has proven to be incredibly powerful. There is no arguing the axiom “knowledge is power” when it comes to Big Data, but at times it seems we struggle to turn raw data into usable information. It is predominantly a matter of knowing what you need and being able to separate it from what you do not need. The noise-to-signal ratio for this data is off the charts. But we can only separate the signal if we know what we are looking for. This is where proper low-latency analytics, such as complex event processing, come in.
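To make that concrete, here is a minimal sketch of the complex event processing idea: scan a stream of raw monitoring events and emit a single, composite alert only when a meaningful pattern shows up inside a short time window. The event fields, thresholds and sample data below are illustrative assumptions, not any particular product's API.

```python
# Minimal sketch of complex event processing: correlate raw monitoring
# events in a sliding time window and raise one composite alert when a
# pattern of related errors appears. Field names and thresholds are
# assumptions made up for illustration.
from collections import deque

WINDOW_SECONDS = 60      # how far back we correlate events
ERROR_THRESHOLD = 3      # how many related errors count as a real signal

def detect_incidents(events):
    """Yield composite alerts from a time-ordered stream of raw events.

    Each event is a dict like {"ts": 12, "component": "queue-1", "severity": "error"}.
    """
    recent = deque()  # sliding window of recent error events
    for event in events:
        # Drop events that have aged out of the correlation window.
        while recent and event["ts"] - recent[0]["ts"] > WINDOW_SECONDS:
            recent.popleft()
        if event["severity"] != "error":
            continue  # noise: informational events are ignored here
        recent.append(event)
        # Signal: repeated errors from the same component inside the window.
        same = [e for e in recent if e["component"] == event["component"]]
        if len(same) >= ERROR_THRESHOLD:
            yield {
                "component": event["component"],
                "count": len(same),
                "first_ts": same[0]["ts"],
                "last_ts": event["ts"],
            }
            recent.clear()  # avoid re-alerting on the same burst

if __name__ == "__main__":
    sample = [
        {"ts": 1, "component": "queue-1", "severity": "info"},
        {"ts": 5, "component": "queue-1", "severity": "error"},
        {"ts": 20, "component": "queue-1", "severity": "error"},
        {"ts": 42, "component": "queue-1", "severity": "error"},
    ]
    for alert in detect_incidents(sample):
        print("composite alert:", alert)
```

The point is the shape of the processing, not the specific rule: thousands of raw events go in, and only the few that form a pattern you actually care about come out the other side.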

What could be hidden in this data? For one thing, the first symptoms of a problem that can adversely affect your business. It’s great to have root-cause analysis after the library has burned down - “shouldn’t have used papyrus for those damn scrolls…” - but it is much more effective to catch the problem before the fire is overwhelming and you need to evacuate the building.

The next value hidden in this data is the behavior of your customers. Understanding it can help you resolve problems faster, improve service levels and retain customers. By studying the patterns in this data, you can learn how your users actually make use of your applications and, from that, design applications that better meet their needs. If you don’t master the exploitation of Big Data, your competitors will.

Having said that, we have accelerated the rate at which we can collect data, but not the rate at which we can cope with it. The more we automate, the more we will create; the more we create, the less we are able to make sense of it. This process is breeding complexity and volatility in our business environments that we have never faced before, not least in the application performance monitoring environment.

With many enterprises running multiple monitoring systems for their middleware, making sense of the data in a fast and unified way is nearly impossible. In order to turn Big Dumb Data into small, smart data, it is important to have a single point of actionable analysis in the stream.
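As a rough illustration of that single point of analysis, the sketch below normalizes records from several hypothetical monitoring feeds into one common event shape and merges them into a single time-ordered stream, so one set of rules (like the incident detector above) can run over all of it. The feed names and field mappings are assumptions made up for this example.

```python
# Hedged sketch of a single point of actionable analysis: normalize events
# from several monitoring feeds into one shape, then merge them into one
# time-ordered stream for a shared set of rules to consume.
import heapq

def normalize(source, raw):
    """Map a source-specific record onto one common event shape."""
    return {
        "ts": raw.get("timestamp", raw.get("ts")),
        "component": raw.get("queue", raw.get("host", "unknown")),
        "severity": raw.get("level", "info").lower(),
        "source": source,
    }

def merged_stream(feeds):
    """Merge already time-ordered feeds into one time-ordered stream."""
    streams = [
        (normalize(name, rec) for rec in records)
        for name, records in feeds.items()
    ]
    return heapq.merge(*streams, key=lambda e: e["ts"])

if __name__ == "__main__":
    feeds = {
        "middleware": [{"timestamp": 3, "queue": "orders", "level": "ERROR"}],
        "app-server": [{"ts": 7, "host": "web-1", "level": "WARN"}],
    }
    for event in merged_stream(feeds):
        print(event)
```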

As a parting word, I invite you to think about this in your own environment. Are you funneling all of your application performance information through the same contextual filter to guarantee continuity and data integrity? Fail to do this and you will fail to effectively turn Big Data into usable information.

Charley Rich is VP Product Management and Marketing at Nastel Technologies.

Related Links:

www.nastel.com

Charley Rich has over 28 years of technical, hands-on experience working with large-scale customers to meet their application and systems management requirements. Prior to joining Nastel, he was Product Manager for IBM's Tivoli Application Dependency Discovery Manager software, where he co-authored an IBM Redbook, charted the product roadmap, managed an agile requirements process and was recognized for his accomplishments by winning the Tivoli General Manager's Award. Recently, Charley was granted a patent for an Application Discovery and Monitoring process.