Flying Blind — The 2013 IT Operations Quotient Report

Sasha Gilenson

IT Operations is overwhelmed by the volume, velocity, and variety of change and configuration data, yet lacks the insight and actionable information needed to act on it, making change and configuration problems a chronic pain.

At the recent Gartner Data Center Summit and ServiceNow Knowledge13 conferences, Evolven surveyed over 300 IT Operations professionals on questions critical to IT operations management. 84% of the IT professionals said that they want to significantly improve their IT operations management.

The 2013 IT OQ (Operations Quotient) Report gives IT executives a good indication of whether their IT operations investments have yielded the desired results. It uses the IT Operations Quotient (OQ), a metric for evaluating an organization's operational ability to support existing business services and incoming business requirements.

When an Incident Occurs, Can You Quickly Know What Changed?

Only 7% of the professionals surveyed indicated that, using their current IT management tools, they could quickly identify what changed in order to respond to problems and incidents.

The first question IT operations teams ask themselves when an incident occurs is "what changed?" Given the complexity and dynamics of the modern data center, with overwhelming volumes of configuration data and frequent changes, answering that question has become formidable.

Across applications, environments, and individual instances, mistakes and unauthorized changes happen, forcing IT ops to spend significant amounts of time managing configuration values.

Traditional IT management tools were not designed for the complexity and dynamics of the modern data center. They do not automatically collect data down to the most granular detail, analyze every change, and consolidate the results to extract meaning from the sea of raw change and configuration data.

Without systems to manage and organize this growth, IT will drown in its own data.
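
To make "what changed?" concrete, here is a minimal sketch of diffing two configuration snapshots to surface added, removed, and modified parameters. It is an illustration only, not Evolven's method; the flat key/value snapshot format and the sample values are assumptions.

```python
# Minimal sketch: answering "what changed?" by diffing two configuration
# snapshots. The flat key/value snapshot format is an assumption for
# illustration; real tools collect far richer, more granular data.

def diff_configs(before: dict, after: dict) -> dict:
    """Return parameters that were added, removed, or modified."""
    added = {k: after[k] for k in after.keys() - before.keys()}
    removed = {k: before[k] for k in before.keys() - after.keys()}
    modified = {
        k: (before[k], after[k])
        for k in before.keys() & after.keys()
        if before[k] != after[k]
    }
    return {"added": added, "removed": removed, "modified": modified}

# Hypothetical snapshots taken before and after an incident window
before = {"db.pool.max": "50", "cache.ttl": "300", "feature.x": "off"}
after = {"db.pool.max": "200", "cache.ttl": "300", "log.level": "DEBUG"}

for kind, changes in diff_configs(before, after).items():
    for key, value in changes.items():
        print(f"{kind:>8}: {key} -> {value}")
```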

Can You Automatically Validate that Your Release Deployed Accurately?

Only 8% of the participants surveyed agreed that they could currently automatically validate the accuracy of their deployments. Available release management tools are unprepared for one-off changes or changes that do not follow policy.

IT organizations regularly transition changes to production environments, checking changes throughout a set of pre-production environments.

Now IT is under even more pressure. To meet business requirements, application deployments have accelerated, and compressed deployment schedules have driven up the pace of change activity. The increasingly agile nature of application and infrastructure change requests leaves IT operations at a loss, inundated by change requests that run the gamut from critical and high priority to minor and unimportant.

With a typical environment containing thousands of system configuration parameters, even a small change can impact performance. So it's not surprising to see many companies suffer painful stabilization periods, and even production outages, after a release.

Even when using automated tools for deployment, the lack of detailed visibility into the release means IT ops can’t ensure accurate, error-free deployments.
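
As one illustration of what automated release validation can look like (a sketch under assumptions, not any specific vendor's implementation), a deployment can be checked against a release manifest of expected file checksums; any missing or mismatched file flags an inaccurate deployment. The manifest format, paths, and hashes below are hypothetical.

```python
# Sketch: validating that a release deployed accurately by comparing
# deployed files against a manifest of expected SHA-256 checksums.
# The manifest contents and deployment path are hypothetical.

import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def validate_release(manifest: dict, deploy_root: Path) -> list:
    """Return a list of discrepancies; an empty list means the release matches."""
    problems = []
    for rel_path, expected in manifest.items():
        target = deploy_root / rel_path
        if not target.exists():
            problems.append(f"MISSING  {rel_path}")
        elif sha256_of(target) != expected:
            problems.append(f"MODIFIED {rel_path}")
    return problems

# Hypothetical manifest produced by the build pipeline
manifest = {
    "app/service.jar": "9f2c...e1",      # placeholder hash for illustration
    "conf/app.properties": "41ab...7d",  # placeholder hash for illustration
}
for issue in validate_release(manifest, Path("/opt/myapp")):
    print(issue)
```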

Can You Quickly Identify the Incident’s Root Cause?

The vast majority of IT professionals surveyed concurred that they lack the capability to quickly identify an incident's root cause. IT organizations find themselves challenged when assessing a system failure and tracking down its root cause, such as a patch that wasn't deployed or a server that failed.

Even a minute misconfiguration or the omission of a single configuration parameter can quickly lead to a high-impact incident. With a vast number of configuration parameters in play when an environment incident hits, finding the root cause consumes precious time and manpower, keeping MTTR woefully high in most organizations.

The root cause of downtime and incidents often lies at the most granular level of configuration change, where today's configuration management and change management tools don't provide visibility. Different groups in the organization, such as Development, Support, and Operations, tend to point the finger of blame at one another and fail to diagnose or address the root cause of the problem.

After a major incident, root cause analysis should focus on the underlying cause of the failure, not only to resolve the incident but to head off a recurrence. Even when IT teams manage to suppress a failure and operations returns to "normal", the true root cause may remain unresolved, leaving the organization exposed to further chaos.
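
One simple way to shorten that search, sketched below as an assumption rather than any vendor's algorithm, is to rank recorded configuration changes by how close they landed to the incident's start time, so the likeliest suspects are examined first. The change-log records and timestamps are hypothetical.

```python
# Sketch: ranking recorded configuration changes by proximity to an
# incident's start time, so the most recent changes are examined first.

from datetime import datetime

changes = [
    {"key": "db.pool.max", "when": datetime(2013, 6, 3, 14, 2), "who": "deploy-bot"},
    {"key": "log.level",   "when": datetime(2013, 6, 3, 16, 40), "who": "jsmith"},
    {"key": "cache.ttl",   "when": datetime(2013, 6, 1, 9, 15), "who": "deploy-bot"},
]
incident_start = datetime(2013, 6, 3, 16, 45)

# Only changes made before the incident can have caused it; rank newest first.
suspects = sorted(
    (c for c in changes if c["when"] <= incident_start),
    key=lambda c: incident_start - c["when"],
)
for c in suspects:
    age = incident_start - c["when"]
    print(f"{c['key']:<12} changed {age} before incident by {c['who']}")
```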

Can You Automatically Verify the Consistency of Your Environments?

In our survey, only 5% of respondents felt that they can currently verify the consistency of their environments automatically. Doing so means drilling into fine, granular detail to identify the make-up of even minor changes, and processing enormous amounts of configuration data to verify consistency between servers and environments.

As IT organizations transition changes to production, teams need to check those changes across a set of pre-production environments that can include system test, performance test, UAT, and staging (changes are also mirrored in a Disaster Recovery environment). IT has sought to diversify workloads, spreading deployments over multiple environments to mitigate risk, but this also multiplies complexity.

The high volume of changes means that not all changes consistently make their way to all environments (pre-prod, prod, DR). Configuration parameters must therefore be validated for consistency in near real time.
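
A minimal sketch of such a consistency check follows (illustrative only; the environment names and parameter values are assumptions): collect each environment's parameter map and report any key whose value differs across environments.

```python
# Sketch: verifying configuration consistency across environments by
# flagging any parameter whose value differs between them.

def find_inconsistencies(envs: dict) -> dict:
    """Map each inconsistent parameter to its value in every environment."""
    all_keys = set().union(*(cfg.keys() for cfg in envs.values()))
    report = {}
    for key in sorted(all_keys):
        values = {name: cfg.get(key, "<absent>") for name, cfg in envs.items()}
        if len(set(values.values())) > 1:  # value differs somewhere
            report[key] = values
    return report

# Hypothetical environments and parameters
envs = {
    "prod":    {"db.pool.max": "200", "cache.ttl": "300"},
    "staging": {"db.pool.max": "200", "cache.ttl": "60"},
    "dr":      {"db.pool.max": "50",  "cache.ttl": "300"},
}
for key, values in find_inconsistencies(envs).items():
    print(key, values)
```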

IT Operations Analytics Helps

With performance at risk from any disruptions to stability, IT teams need to know exactly what has changed in an environment.

Managing IT environments with intelligent, automated analytics drives more sophisticated, proactive processes, such as comparing environment states, validating releases, and verifying the consistency of changes, helping to prevent or identify critical issues. Rather than continuing to feed bloated system tools, IT Operations should strive to simplify, implement configuration management based on IT Operations Analytics, and turn the situation around from what can't be managed to what can be done about performance and availability.

Sasha Gilenson is the Founder and CEO of Evolven Software.
