BSM Analytics: What Are They? And Why Should You Care? Part One
September 26, 2011

Dennis Drogseth


"Analytics" is -- like many words applied to BSM, service management and the management of applications and networked infrastructures -- a somewhat conflicted term.

However, unlike some debates over the meaning of terms -- my current least favorite being the rather enervating debates about the meaning of "service" versus "application" -- the discussions around analytics are more encouraging. This is because most vendors, at least, and most actual IT deployments seem to be looking for more creative ways to expand and combine analytic technologies without getting caught up in tidy definition-making that more often than not tends to stifle innovation rather than encourage it.

The general consensus as I see it is that analytics really are important, all the more so because of the dynamic nature of cloud computing and service assurance in the age of application ecosystems, along with the expanding role of IT in not only supporting business models, but actually transforming them. Over the last few years, there has been a growing awareness of the links between analytics and automation. And there is some recognition that analytics, service modeling, and the CMDB/CMS also belong together as highly synergistic technologies -- although the awareness as to why seems to remain a lot more muddied.

Five years ago, EMA developed the EMA Analytics Roadmap (November, 2006) -- admittedly the rough equivalent of a few centuries in the past in high-tech-think. Since then, last year and this year, EMA has examined analytics in context with BSM Service Impact, Application Discovery and Dependency Mapping, and CMDB/CMS vendor offerings and deployments. New vendor offerings such as those from AccelOps, the eMite Service Intelligence Platform, and Prelert have also kept analytics in the forefront of current EMA activity.

So what are we talking about when we use the word "analytics?" A quick summary of some of the various approaches, taken from EMA's Analytics Roadmap, is listed below:

Anomaly detection – This category touches a broad and increasingly valuable range of solutions that capture normal patterns and map anomalous behaviors against them. Anomaly detection is most often applied to service performance and security issues, but can be used to interpret a wide variety of other behaviors, as well. It is a root technology in security, as well as in many forms of service assurance.
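As a minimal sketch of the idea -- learn a baseline of normal readings, then flag values that deviate too far from it (the z-score threshold and sample values here are illustrative, not drawn from any particular product):

```python
from statistics import mean, stdev

def detect_anomalies(history, current, threshold=3.0):
    """Flag a reading as anomalous when it deviates from the learned
    baseline by more than `threshold` standard deviations (z-score)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Baseline of "normal" response times in ms (hypothetical values).
baseline = [100, 102, 98, 101, 99, 103, 97, 100, 102, 98]
```

Real products use far more sophisticated baselining (seasonality, multi-metric correlation), but the capture-normal-then-compare pattern is the same.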

Case-based reasoning – This is a form of analysis in which problems are solved by comparing them to a historical database of old problems. The term is actually derived from law, or legal precedent, which is built around "cases." Case-based reasoning has most often been used in Help Desk applications, but with only modest success.
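A toy retrieval step makes the mechanic concrete: find the historical case whose symptoms best match the new problem. The case base and similarity measure below are hypothetical illustrations, not any vendor's method:

```python
def retrieve_case(case_base, new_symptoms):
    """Return the historical case whose symptom set best overlaps
    the new problem's symptoms (Jaccard similarity)."""
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0
    return max(case_base, key=lambda case: jaccard(case["symptoms"], new_symptoms))

# Hypothetical help-desk case base.
cases = [
    {"symptoms": {"slow_login", "high_cpu"}, "resolution": "restart auth service"},
    {"symptoms": {"timeout", "packet_loss"}, "resolution": "check WAN link"},
    {"symptoms": {"disk_full", "failed_backup"}, "resolution": "purge old logs"},
]
```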

Chaos theory – Chaos theory analyzes problems in terms of "sensitive influencers" – those initial inputs into conditions that are most likely to cause change or influence outcomes versus historically documented, linear trends. Chaos theory often reflects real patterns in nature, which can be deterministic in terms of such things as rhythms and scaling, but which are not otherwise completely predictable. Fractals are one expression of chaos theory.
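The "sensitive influencer" point can be demonstrated with the logistic map, a textbook chaotic system: two trajectories that start almost identically soon diverge, even though each step is fully deterministic. This is a pedagogical sketch, not a management algorithm:

```python
def logistic_trajectory(x0, r=3.9, steps=30):
    """Iterate the logistic map x -> r*x*(1-x), a classic deterministic
    system that is chaotic for r near 4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.500000)
b = logistic_trajectory(0.500001)  # a tiny change in the initial input
```

Despite differing only in the sixth decimal place of the starting value, the two trajectories part ways -- the hallmark of sensitive dependence on initial conditions.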

Comparators – These are signature-based analyses that can assess multiple conditions in real-time as they indicate specific problem outcomes.
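A signature-based comparator can be sketched as a set of conditions that must all hold for a known problem pattern to fire. The signature and state below are invented for illustration:

```python
def matches_signature(conditions, signature):
    """Signature-based comparator: a known problem signature fires when
    every one of its conditions holds in the current state."""
    return all(conditions.get(k) == v for k, v in signature.items())

# Hypothetical signature for database connection-pool exhaustion.
pool_exhaustion = {"db_conn_errors": True, "cpu_high": False, "queue_growing": True}
state = {"db_conn_errors": True, "cpu_high": False, "queue_growing": True, "disk_ok": True}
```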

Correlators – As the name implies, these correlate across multiple events, sometimes from multiple management sources.
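One of the simplest correlation schemes is temporal: events from different sources that arrive within a short window are grouped into a single incident. The window size and event data here are illustrative assumptions:

```python
def correlate(events, window=5):
    """Group (timestamp, source, message) events whose timestamps fall
    within `window` seconds of the group's first event; each group is
    then treated as one incident."""
    groups = []
    for ts, source, msg in sorted(events):
        if groups and ts - groups[-1][0][0] <= window:
            groups[-1].append((ts, source, msg))
        else:
            groups.append([(ts, source, msg)])
    return groups

events = [
    (100, "router1", "link down"),
    (102, "server3", "unreachable"),
    (103, "app7", "timeout"),
    (250, "disk2", "capacity warning"),
]
```

Production correlators also weigh topology and causality, but time-window grouping is the usual starting point.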

Data mining and OLAP – These are generally not applied to support real-time requirements, but are used to uncover unique and unobvious problems. Data mining is especially powerful in uncovering those problems not "looked for" and hence missed by more limited forms of analysis.
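An OLAP-style rollup -- aggregating a measure across chosen dimensions -- is the kind of after-the-fact slicing that surfaces problems nobody was explicitly watching for. The records and dimensions below are hypothetical:

```python
from collections import defaultdict

def rollup(records, dims, measure):
    """OLAP-style aggregation: sum `measure` grouped by the given dimensions."""
    cube = defaultdict(float)
    for rec in records:
        key = tuple(rec[d] for d in dims)
        cube[key] += rec[measure]
    return dict(cube)

tickets = [
    {"region": "EU", "app": "crm", "errors": 3},
    {"region": "EU", "app": "erp", "errors": 1},
    {"region": "US", "app": "crm", "errors": 7},
    {"region": "EU", "app": "crm", "errors": 2},
]
```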

Fuzzy logic – Rather than seeking hard-and-fast binary conditions to assess analytical problems, fuzzy logic looks for varying degrees of truth or falsehood. This can be useful in capturing patterns that are not inherently binary in nature.
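The non-binary idea can be shown with a membership function: instead of a hard "CPU is overloaded: yes/no" cutoff, a reading belongs to the "high load" set to a degree between 0 and 1. The ramp boundaries here are arbitrary illustrative values:

```python
def membership_high_load(cpu_pct):
    """Degree (0.0-1.0) to which a CPU reading counts as 'high load',
    ramping linearly between 60% and 90% rather than a hard cutoff."""
    if cpu_pct <= 60:
        return 0.0
    if cpu_pct >= 90:
        return 1.0
    return (cpu_pct - 60) / 30.0

def fuzzy_and(a, b):
    """Classic min-based fuzzy conjunction of two membership degrees."""
    return min(a, b)
```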

Neural networks – Neural networks are patterned after how neurons work in the brain and are typically organized into clusters of input layers, output layers and hidden layers. The connections are weighted by prior historical knowledge and training.
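The layered, weighted structure can be sketched as a single forward pass. The weights below are invented for illustration (a real network would learn them from training data):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs passed through a sigmoid."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, hidden_layer, output_layer):
    """Forward pass: inputs -> one hidden layer -> one output neuron."""
    hidden = [neuron(x, w, b) for w, b in hidden_layer]
    (w_out, b_out), = output_layer
    return neuron(hidden, w_out, b_out)

# Illustrative (untrained) weights: each entry is (weights, bias).
hidden_layer = [([0.5, -0.6], 0.1), ([-0.3, 0.8], 0.0)]
output_layer = [([1.2, -0.7], 0.05)]
score = forward([0.9, 0.4], hidden_layer, output_layer)
```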

Object-based modeling – Modeling is most often used in data stores, as a means for preserving and sharing information, and so can also be viewed as an advanced context for presenting information over which various types of analytic techniques can be applied.

Optimization algorithms – These usually operate according to communication protocol parameters, such as tracking the frequencies of TCP acknowledgements, or the frequency of redefinitions in large server buffer pools, or, conversely, monitoring content compression such as video/audio codecs. More advanced optimization can take place on a macro level focused on achieving optimal system behavior rather than optimizing individual components – for example, if a server is slow, flooding it with an optimally performing protocol may only accelerate the demise of the service.

Predictive Algorithms – Typically, these capture or "learn" patterns of behavior reflecting normal performance criteria, and then leverage these patterns as they begin to deviate from normal to anticipate recurring conditions for outages and performance degradations.
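A minimal version of this learn-then-anticipate pattern is an exponentially weighted moving average: smooth the history into a baseline, then warn when the current reading drifts from it. The smoothing factor, tolerance, and sample history are illustrative assumptions:

```python
def ewma_forecast(series, alpha=0.3):
    """Exponentially weighted moving average: learn a smoothed baseline
    and use the last smoothed value as the next-step forecast."""
    smoothed = series[0]
    for x in series[1:]:
        smoothed = alpha * x + (1 - alpha) * smoothed
    return smoothed

def deviation_warning(series, current, alpha=0.3, tolerance=0.2):
    """Warn when the current reading drifts more than `tolerance`
    (as a fraction) from the learned baseline."""
    forecast = ewma_forecast(series, alpha)
    return abs(current - forecast) / forecast > tolerance

history = [100, 101, 99, 100, 102, 100]
```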

Some of these approaches, such as "neural networking" and "chaos theory," have all but disappeared from vendor marketing, even if elements of them survive in some solutions running in IT shops today. But other categories -- such as "data mining" and "OLAP," "predictive algorithms," and "anomaly detection," or more broadly "pattern matching" based on self-learning approaches to normative behaviors -- have, if anything, risen in prominence.

But why even go through the list of heuristics? You may well feel that getting into the mathematical guts of the solutions you purchase is probably no more relevant than worrying about the nuances of jet propulsion once you board an airplane. And in some respects, you would be right.

But what does matter is that these approaches, as they become combined in real-world solutions, typically support different types of problem solving and can combine either in useful ways, or in redundant and sometimes even destructive ways. And the market as a whole has done a terrible job in helping you navigate this murky terrain -- as a growing number of application performance management (APM) vendors claim that they can "do it all," while another set of vendors would simply take everything from the above list, dump it into a data warehouse, and declare (premature) victory. Needless to say, brand identity and analytics "secret sauce" often go so closely hand in hand that sorting through the analytics landscape can take on a religious quality -- much like trying to distinguish the virtues of one hermetic sect from another.

The truth is, you should approach your analytic investments much like you should approach your discovery investments (though few do) -- as a set of complementary capabilities that can help to support a more effective and cohesive way of working, help drive automation, and help sustain currency in an evolving CMS.

In my next blog, I’ll provide at least a few preliminary insights as to how.

Related Links:

Click here to read Part Two of Dennis Drogseth's blog: BSM Analytics: What Are They? And Why Should You Care?

Dennis Drogseth is VP at Enterprise Management Associates (EMA)