
Shifting to Analytics Driven Management for IT Operations

Today’s market environment demands that businesses change and adapt rapidly to market dynamics while still remaining in control. For businesses, these dynamics can mean sifting through petabytes of data in order to act both tactically and strategically.

Business Intelligence (BI) analytics tools help companies catch what would otherwise be missed opportunities, applying intelligent analytics on robust infrastructure to sift through mountains of data. In this way, businesses can identify hidden trends, customer relationships, buying behavior, operational and financial patterns, business opportunities and other vital information, allowing them to engage the market proactively.

Through Business Service Management (BSM) initiatives, IT is charged with supporting the changing demands of the business, maintaining availability and ensuring that performance remains high. As on the business side, the IT landscape has grown in complexity, supporting a wider and growing range of technologies and platforms (virtualization, cloud, open source, etc.) and accelerated application release schedules. As a result, IT now faces near-overwhelming quantities of information.

So while the business progresses via BI, adopting analytics for management decisions, the organization supporting this infrastructure, IT Operations, has ironically adhered to an older, static, process-driven paradigm. By not applying an analytics-based approach to its own operations, IT jeopardizes system stability, ultimately exposing the business to the risk of devastating consequences.

Mountains of Data

Mountains of dynamic information confront IT, and the cloud is one of the most prominent sources. Self-service provisioning has multiplied the amount of activity occurring outside of static processes. These new provisioning paths sit beyond IT management, leaving IT with limited visibility into what happens there. For example, suppose an organization sets up a private cloud with a dynamic management system, allowing self-service provisioning of servers for the testing team. Traditionally, testing professionals would have come to IT to request an environment, and IT would oversee and manage the entire process. Now the process is independent: when testing needs an environment, they simply create it.
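The visibility gap described above can be made concrete with a reconciliation check: diffing the live cloud inventory against CMDB records surfaces servers that were provisioned outside the managed process. This is only an illustrative sketch; the function name and the data shapes are assumptions, not any particular product's API.

```python
# Illustrative sketch: find servers that exist in the live cloud
# inventory but were never registered in the CMDB, i.e. machines
# provisioned outside the managed process. All data is hypothetical.

def find_unmanaged(cloud_inventory, cmdb_records):
    """Return hostnames present in the cloud but absent from the CMDB."""
    registered = {r["hostname"] for r in cmdb_records}
    return sorted(h for h in cloud_inventory if h not in registered)

# Two app servers are tracked; two self-provisioned test servers are not.
cloud_inventory = ["app-01", "app-02", "test-07", "test-08"]
cmdb_records = [{"hostname": "app-01"}, {"hostname": "app-02"}]

print(find_unmanaged(cloud_inventory, cmdb_records))  # ['test-07', 'test-08']
```

In practice the inventory would come from the cloud provider's API and the CMDB export from the change-management system, but the reconciliation logic stays this simple.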

Today’s Approach: Static Processes Drive IT

IT Operations has been running on static processes and strict workflows. For instance, ITIL defines a Change Management process that follows certain steps, along with a set of metrics for measuring performance, such as the number of changes that succeeded or failed.
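A minimal sketch of the kind of static metric such a process tracks is the change success rate. The record format below is a hypothetical example, not an ITIL-mandated schema.

```python
# Sketch of a static Change Management metric: the share of recorded
# changes that completed successfully. Records are hypothetical.

def change_success_rate(changes):
    """Fraction of changes with status 'success' (0.0 if no changes)."""
    if not changes:
        return 0.0
    ok = sum(1 for c in changes if c["status"] == "success")
    return ok / len(changes)

changes = [
    {"id": 101, "status": "success"},
    {"id": 102, "status": "failed"},
    {"id": 103, "status": "success"},
    {"id": 104, "status": "success"},
]
print(change_success_rate(changes))  # 0.75
```

The limitation the article goes on to describe is exactly this: the metric counts only changes that went through the workflow, and says nothing about activity that bypassed it.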

IT Ops can plan as much as possible, but planning alone won’t ensure that everything occurs as planned.

For example, IT implements an application upgrade, making changes to the environment. IT administration can follow the entire established process, and still the application doesn’t function as planned. IT managers verify the processes the upgrade went through, yet performance still lags. Then they need to dig into the fine, granular details and examine every step: identifying the make-up of even minor changes, seeing how each was deployed to all the servers, checking consistency between servers, and determining whether there has been additional interference with the servers. They need to take this enormous amount of data – configurations and granular changes – and pinpoint the root cause.
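The server-to-server consistency check described above can be sketched as a configuration-drift comparison: given per-server configuration snapshots, report any key whose value differs across servers. The snapshot data and function name are illustrative assumptions.

```python
# Sketch of the granular comparison described above: given per-server
# configuration snapshots, report keys whose values are inconsistent
# across servers. Snapshot data is hypothetical.

def find_drift(snapshots):
    """Map each config key to its distinct values when servers disagree."""
    all_keys = set().union(*(s.keys() for s in snapshots.values()))
    drift = {}
    for key in all_keys:
        values = {str(s.get(key)) for s in snapshots.values()}
        if len(values) > 1:
            drift[key] = sorted(values)
    return drift

snapshots = {
    "web-01": {"jvm_heap": "4g", "app_version": "2.3.1"},
    "web-02": {"jvm_heap": "4g", "app_version": "2.3.0"},  # missed upgrade
}
print(find_drift(snapshots))  # {'app_version': ['2.3.0', '2.3.1']}
```

Run across the full environment after an upgrade, a report like this narrows the root-cause hunt from "every step on every server" to the handful of parameters that actually diverge.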

Workflow-driven Management Processes

Static processes operate through workflows, but a workflow supports only part of the process; much happens around and outside it. Business demands can force shortcuts: steps in the workflow can be skipped in order to get immediate approval, even omitting the test stage.

Workflows Create False Security

Even when processes are enforced, such as requiring registration as part of workflow management, this creates the belief that everything has been taken care of. Yet no organization can claim it operates completely within the bounds of established processes and approvals.

This situation creates a false sense of security that IT is on top of all the changes. IT Ops can believe everything works perfectly as long as the organization religiously adheres to its processes, relying on CMDB systems and workflows – an assumption that ultimately undermines operations.

A Shift in Paradigm to Analytics Driven Management

A popular, if simplified, model holds that the brain’s two hemispheres divide labor: one side collects sensory information, while the other analyzes it, translating all of that input into usable form.

Today’s IT organization follows much the same model: operations needs to know what is happening now. IT Ops can find itself stuck, trying to adjust static processes while tracking and handling dynamic events, and then getting caught off-guard when issues arise. The solution is to approach this situation with dynamic analytics – dealing with all the changing data to see what is really happening. This goes beyond the few designated indicators that have traditionally been watched: IT Ops needs Analytics Driven Management, similar to how the business has adopted BI, extracting actionable information out of mountains of data to help decision makers respond efficiently.
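One simple flavor of the analytics-driven approach is statistical baselining: instead of watching a fixed indicator against a fixed threshold, flag a value as anomalous when it deviates sharply from its own history. The sketch below uses a basic z-score test on daily change-event counts; the data and threshold are illustrative assumptions, not a recommendation for any specific product.

```python
# Sketch of analytics-driven monitoring: flag a value as anomalous when
# it lies more than `threshold` standard deviations from the historical
# mean. All data here is hypothetical.
from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """True if `today` deviates from the mean by > threshold std devs."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu  # flat history: any deviation is anomalous
    return abs(today - mu) / sigma > threshold

history = [120, 115, 130, 125, 118, 122, 127]  # daily change-event counts
print(is_anomalous(history, 126))  # False: within the normal range
print(is_anomalous(history, 400))  # True: a spike worth investigating
```

Real analytics-driven management applies far richer models across many signals at once, but the principle is the same: let the data define "normal" rather than a static process definition.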

About Sasha Gilenson

Sasha Gilenson is Founder and CEO of Evolven. Prior to founding Evolven in 2007, he spent 13 years with Mercury Interactive (acquired by HP), managing the QA organization and participating in establishing Mercury Interactive's Software as a Service (SaaS) offering. Sasha played a key role in developing Mercury Interactive's worldwide Business Technology Optimization (BTO) strategy and drove field operations of the Wireless Business Unit, all while serving as Mercury Interactive's top "guru" in quality processes and IT practices.

Related Links:

www.evolven.com
