Root Cause Analysis: Causal Versus Derived Events
April 15, 2014

Tom Molfetto
ServiceNow


Today’s business landscape is saturated with data. Big Data has become one of the most hyped trends in the tech space, and all indicators suggest that the volume of data will only grow: IDC estimates that structured and unstructured data will grow 60% annually. Global 2000 organizations are investing billions of dollars to harness the power of Big Data and make it meaningful and actionable. In other words, organizations are spending a ton of money in an effort to translate data into information.

Data – in and of itself – is fairly useless. Only when data is interpreted, processed and analyzed – when its true meaning is unearthed – does it become useful; at that point it is called information. Hence the race among players like Splunk, QlikView and others to be the first, or the best, to harness the power of Big Data by translating it into actionable information.

Helping data center personnel and enterprise IT professionals translate their data into information by isolating causal versus derived events is especially relevant to businesses today. In most of my explorations, I have found that organizations take a best-of-breed approach to monitoring, which has resulted in a sort of Balkanization of the data center. In a common use case, the network team uses Cisco for monitoring, the database team uses Oracle, and the web server team uses Nagios. But nothing ties all of that information together in a unified view. There is no monitor of monitors, or manager of managers, so to speak – let alone a unified view that goes beyond the IT components and maps them to their associated business services.

So what happens when a LAN port fails, and the app server and database server that both communicate through that LAN port also fail as a result? In that scenario, the LAN port failure is the causal event, and the app and database server failures are derived events. When IT can quickly distinguish between the two types of events and isolate the root cause of the failure, the dependent business services can be restored while minimizing the negative impact on overall operations.
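To make the distinction concrete, here is a minimal sketch, in Python, of how a dependency graph supports that classification: a failure is causal when none of the failed component's own dependencies have also failed, and derived otherwise. The topology and component names are hypothetical, invented for illustration rather than drawn from any particular monitoring product.

# Minimal sketch: classify failures as causal vs. derived using a
# dependency graph. All component names here are hypothetical.

# Each component maps to the set of components it depends on.
DEPENDS_ON = {
    "app_server":      {"lan_port_7"},
    "database_server": {"lan_port_7"},
    "lan_port_7":      set(),
}

def classify_events(failed_components, depends_on):
    """Split failed components into causal and derived events.

    A failure is treated as causal if none of the component's own
    dependencies have also failed; otherwise it is derived, i.e. a
    downstream symptom of some other failure.
    """
    causal, derived = set(), set()
    for component in failed_components:
        upstream_failures = depends_on.get(component, set()) & failed_components
        (derived if upstream_failures else causal).add(component)
    return causal, derived

failed = {"lan_port_7", "app_server", "database_server"}
causal, derived = classify_events(failed, DEPENDS_ON)
print("causal: ", sorted(causal))   # causal:  ['lan_port_7']
print("derived:", sorted(derived))  # derived: ['app_server', 'database_server']

In a real environment the dependency graph would be discovered and kept current automatically; the point of the sketch is only that, once the topology is known, separating causal from derived events becomes a simple graph question.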

Standard monitoring solutions will raise a raft of red flags showing failures, but to make that data “come alive” it needs to be architected and displayed in a topological format – a map of how components depend on one another. This is what allows easier assessment of root cause versus derived events, and what contributes to a dramatically reduced Mean-Time-To-Know (MTTK) when diagnosing the underlying issues impacting business services.

Best-of-breed monitoring tools should continue to be leveraged in their respective domains. But the most forward-thinking organizations are unifying these tools from a service-centric perspective: a monitor of monitors that maps IT components to their associated business services, and connects with the best-of-breed solutions to maintain a complete, up-to-date topology that empowers IT to do its job more effectively.
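As an illustration of that service-centric layer, here is a hypothetical sketch that extends the example above: a component-to-service map translates a causal component failure into the list of business services at risk. The service names and the mapping are invented for the example.

# Hypothetical sketch of the service-centric layer of a "monitor of
# monitors": component failures are mapped onto the business services
# that depend on them. All names here are invented.

# Component -> business services that depend on it.
SERVICE_MAP = {
    "lan_port_7":      {"online_ordering", "customer_portal"},
    "app_server":      {"online_ordering"},
    "database_server": {"online_ordering", "customer_portal"},
}

def impacted_services(causal_components, service_map):
    """Return the business services put at risk by the causal failures."""
    services = set()
    for component in causal_components:
        services |= service_map.get(component, set())
    return services

# Continuing the LAN port example: one causal event, two services at risk.
print(sorted(impacted_services({"lan_port_7"}, SERVICE_MAP)))
# ['customer_portal', 'online_ordering']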

Providing IT with the tools required to interpret data meaningfully and isolate the root cause of problems creates an informed perspective from which decisions can be made and action taken.

Tom Molfetto is Marketing Director for Neebula.

