Shifting to Analytics Driven Management for IT Operations
September 22, 2011
Sasha Gilenson

Today’s market environment demands that businesses change and adapt rapidly to market dynamics while still remaining in control. For business, these dynamics can mean sifting through what can amount to petabytes of data in order to act tactically and strategically.

Business Intelligence (BI) analytics tools help companies catch opportunities that would otherwise be missed, using robust infrastructure to sift through mountains of data and applying intelligent analytics. In this way, businesses can identify hidden trends, customer relationships, buying behavior, operational and financial patterns, business opportunities and other vital information, allowing them to engage the market proactively.

Through Business Service Management (BSM) initiatives, IT is charged with supporting the changing demands of the business, maintaining availability and ensuring that performance remains high. As on the business side, the IT landscape has grown in complexity, supporting a wider and growing range of technologies and platforms (virtualization, cloud, open source, etc.) and accelerated application release schedules. IT now faces near-overwhelming quantities of information.

So while business progresses via BI, adopting analytics for management decisions, the organization supporting this infrastructure, IT Operations, has ironically adhered to an older, static-process-driven paradigm. By not applying an analytics-based approach to its own operations, as business has, IT jeopardizes system stability, ultimately exposing the business to the risk of devastating consequences.

Mountains of Data

Mountains of dynamic information confront IT. One of the most prominent areas is the cloud. Self-service provisioning has multiplied the number of activities occurring outside of static processes. These new provisioning options sit beyond IT management, leaving IT with limited visibility into what happens there. For example, an organization sets up a private cloud with a dynamic management system, allowing self-service provisioning of servers for the testing team. Traditionally, testing professionals would have come to IT to request an environment, and IT would oversee and manage the entire process. Now the process is independent: when the testing team needs an environment, they simply create it.

Today’s Approach: Static Processes Drive IT

IT Operations has been running on static processes and strict workflows. For instance, ITIL defines a Change Management process that works according to certain steps. There is also a set of metrics for measuring performance, such as the number of changes that went through successfully or failed.

IT Ops can plan as much as possible, but planning alone won’t ensure that everything occurs as planned.

For example, IT implements an application upgrade and makes changes to the environment. IT administration can go through the entire established process, and still the application doesn’t function as planned. IT managers check the processes the upgrade went through, yet performance still lags. Then they need to go into the fine, granular details and examine every step: identifying the make-up of even minor changes, seeing how each was deployed across the servers, checking consistency between servers, and determining whether there has been additional interference with them. They need to take this enormous amount of data, configuration and granular changes, and pinpoint the root cause.
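To make this concrete, here is a minimal sketch, assuming configuration snapshots have already been collected from each server after an upgrade (the directory layout, file names, and collection mechanism are all hypothetical, not part of any specific toolset). It compares the snapshots and flags settings that differ between servers, exactly the kind of inconsistency that would be investigated as a root-cause candidate.

```python
# Sketch: compare per-server configuration snapshots and report inconsistencies.
# Snapshot format and directory are assumptions for illustration only.
import json
from pathlib import Path


def load_snapshots(snapshot_dir: str) -> dict[str, dict]:
    """Load one JSON configuration snapshot per server from a directory."""
    snapshots = {}
    for path in Path(snapshot_dir).glob("*.json"):
        snapshots[path.stem] = json.loads(path.read_text())
    return snapshots


def find_inconsistencies(snapshots: dict[str, dict]) -> dict[str, dict]:
    """Return configuration keys whose values differ between servers."""
    all_keys = set().union(*(cfg.keys() for cfg in snapshots.values()))
    diffs = {}
    for key in sorted(all_keys):
        values = {server: cfg.get(key, "<missing>") for server, cfg in snapshots.items()}
        if len(set(map(str, values.values()))) > 1:  # more than one distinct value
            diffs[key] = values
    return diffs


if __name__ == "__main__":
    snapshots = load_snapshots("./config_snapshots")  # hypothetical path
    for key, values in find_inconsistencies(snapshots).items():
        print(f"{key}: {values}")
```

A real environment would of course involve far more data sources than flat configuration files, but the principle is the same: collect the granular state, compare it automatically, and surface the deviations instead of hunting for them by hand.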

Workflow-driven Management Processes

Static processes operate through workflows. A workflow supports only part of the process, yet a great deal happens around and outside of it. Business demands can force shortcuts. Steps in the workflow can be skipped in order to get immediate approval, even omitting the test stage.

Workflows Create False Security

Even when processes are enforced, for example by requiring registrations as part of workflow management, this creates the belief that everything has been solved. No organization can claim to operate completely within the bounds of established processes and approvals.

This situation creates a false sense of security that IT is on top of all the changes. IT Ops can believe everything is working perfectly while the organization religiously adheres to its processes, relying on CMDB systems and workflows, and that complacency ultimately undermines operations.

A Shift in Paradigm to Analytics Driven Management

Neurologists will explain that the brain has two distinct hemispheres. The right side of the brain collects information, while the left side is cognitive and analyzes this information, translating all of the sensory input into usable data.

This is really the same model for today’s IT organization, where operations needs to know what is happening now. IT Ops can find itself stuck, trying to adjust static processes while tracking and handling dynamic events, then getting caught off guard when issues arise. The solution is to approach the situation with dynamic analytics that deal with all the changing data and reveal what is really happening. This goes beyond the few designated indicators that were usually watched: IT Ops needs Analytics Driven Management, similar to how business has adopted BI, extracting actionable information from mountains of data to help decision makers respond efficiently.
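As a rough illustration of this idea, here is a minimal sketch, assuming change activity is available as a list of per-server, per-period event counts (the event format, field names, and threshold are illustrative assumptions, not any vendor's method). Rather than watching a few fixed indicators, it flags servers whose latest change activity deviates sharply from their own baseline.

```python
# Sketch: flag servers whose recent change activity is an outlier relative to
# their own history. Event structure and threshold are assumptions.
from collections import defaultdict
from statistics import mean, pstdev


def flag_anomalous_servers(events: list[dict], z_threshold: float = 3.0) -> list[str]:
    """Return servers whose change count in the latest period stands out
    against their history of per-period change counts."""
    # events: [{"server": "web01", "period": "2011-09-21", "changes": 12}, ...]
    history = defaultdict(list)
    for event in sorted(events, key=lambda e: e["period"]):
        history[event["server"]].append(event["changes"])

    anomalous = []
    for server, counts in history.items():
        if len(counts) < 3:
            continue  # not enough history to establish a baseline
        baseline, latest = counts[:-1], counts[-1]
        mu, sigma = mean(baseline), pstdev(baseline)
        if sigma == 0:
            if latest != mu:
                anomalous.append(server)
        elif abs(latest - mu) / sigma > z_threshold:
            anomalous.append(server)
    return anomalous
```

The specific statistic matters less than the shift in mindset: the data IT Ops already generates, changes, configurations and events, becomes the input to analysis rather than an archive consulted only after something breaks.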

About Sasha Gilenson

Sasha Gilenson is Founder and CEO of Evolven. Prior to founding Evolven in 2007, he spent 13 years with Mercury Interactive (acquired by HP), managing the QA organization and helping establish Mercury Interactive's Software as a Service (SaaS) offering. Sasha played a key role in developing Mercury Interactive's worldwide Business Technology Optimization (BTO) strategy and drove field operations of the Wireless Business Unit, all while serving as Mercury Interactive's top "guru" in the quality processes and IT practices domain.

Related Links:

www.evolven.com

