
How Legacy Friction Strangles AI-Driven DevOps

David Torgerson
Lucid Software

As pilots move into production, organizations are discovering that AI performance reflects the health of their core systems. Whether organizations realize it or not, they are already somewhere on the AI maturity curve — progressing from fragmented AI use to aggregated consumption, contextual processing, logic execution, and ultimately strategic transformation. Most stall in the early stages, not because of model limitations, but because their operational foundation isn't ready to support the next level. Lucid's AI Readiness Report found that only 26% of organizations that have implemented AI agents say those efforts have been "completely successful" — a clear sign that something beneath the surface is holding teams back.

In many cases, the constraint is what I call the "Legacy Layer": the accumulation of old systems and the undocumented, fragmented workflows that quietly power day-to-day operations. Over time, this layer becomes both the operational backbone and the primary source of friction.

This infrastructure of undocumented workarounds and isolated data silos drains momentum long before a project reaches production. When you pull back the covers on AI success stories, success almost always comes down to the maturity of the organization's documentation and processes. If your AI efforts have hit a wall, the problem is likely the hidden blockers in your workflows. When organizations layer AI on top of this legacy foundation, they often assume automation will compensate for these underlying issues; in reality, it only exposes them.

Spotting Associated Pain Points and Frictions in Legacy Systems

AI thrives on clean data and clearly defined processes, yet legacy systems offer the opposite — siloed tools, point-to-point integrations, and human workarounds. This structural disconnect creates associated pain — subtle frictions that rarely trigger alarms but steadily drain momentum. When 61% of workers say their AI strategy is misaligned with operational capabilities, they are feeling the weight of this friction.

Because modern AI depends on an open architecture where data moves freely, these isolated silos make it nearly impossible for an agent to create a single source of truth or act across a broader ecosystem. Without that shared context, AI is able to analyze data in one corner of the organization but is unable to execute meaningful action across the entire workflow.

This lack of connectivity is compounded by the tacit knowledge gap. Many DevOps environments function because only a handful of people know how things really work. They understand the edge cases and the undocumented steps that keep systems running. AI can't learn from tacit knowledge. It needs that expertise extracted and structured, which is why 49% of organizations say undocumented or ad-hoc processes impact efficiency. In practice, much of this knowledge already surfaces in diagrams, scratch pads, and collaborative workspaces created as part of day-to-day activities.

Recognizing AI Readiness Gaps

Until hidden expertise is codified, AI remains blocked by a map it cannot read. If a workflow is inherently inefficient or relies on human intuition to bridge technical gaps, deploying AI will only serve to make those inefficiencies move at machine speed.

Time compounds the risk. As experienced employees leave, organizations lose the institutional memory of how their legacy systems actually behave. Once that tacit knowledge is gone, it becomes nearly impossible to train an AI to replicate those nuances accurately. This explains why 46% of organizations have integrated AI into only "some" or "almost no" workflows. They lack the basic visibility needed to support day-to-day operations, let alone a sophisticated automation layer.

Before scaling AI, you must assess your level of associated pain. If a system requires constant manual intervention or custom workarounds, it is a high-drag environment. A high level of associated pain acts as a firewall that prevents AI from delivering measurable ROI.
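One way to make this assessment concrete is a simple scoring heuristic. The sketch below is purely illustrative — the field names, weights, and threshold are assumptions I'm inventing for the example, not part of any standard metric — but it shows the idea: count the manual interventions, custom workarounds, and undocumented steps per workflow over a review period, weight the ones that signal hidden knowledge more heavily, and flag the workflows that exceed a team-chosen drag threshold.

```python
from dataclasses import dataclass

@dataclass
class WorkflowStats:
    """Illustrative per-workflow counts gathered over a review period."""
    name: str
    manual_interventions: int   # times a human had to step in
    custom_workarounds: int     # steps taken outside the documented path
    undocumented_steps: int     # steps that exist only in someone's head

def pain_score(w: WorkflowStats) -> int:
    """Weight workarounds and undocumented steps more heavily than routine
    interventions; these weights are placeholders, not a standard."""
    return (w.manual_interventions
            + 3 * w.custom_workarounds
            + 5 * w.undocumented_steps)

def high_drag(workflows: list[WorkflowStats], threshold: int = 10) -> list[str]:
    """Flag workflows whose score exceeds a team-chosen threshold."""
    return [w.name for w in workflows if pain_score(w) > threshold]
```

Even a rough scorecard like this forces the conversation: a workflow like a billing sync with several undocumented steps will surface as high-drag long before an AI pilot built on top of it fails.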

Practical Interventions to Reduce Pain Points/Friction

The good news is that stalled AI initiatives don't require a full IT overhaul to get moving again. Small, targeted interventions can unlock immediate progress. For DevOps teams looking to reduce friction, I recommend these four steps:

  • Make the current state visible. Intelligent diagramming tools can help teams map workflows as they actually exist, not only as they were designed on paper. This captures documentation without making it an extra step, because it ties into the place where people already work day-to-day.
  • Streamline and standardize where possible. You don't need perfection, but consistency matters. Standard inputs and outputs give AI something reliable to work with.
  • Focus on quick wins. Automating a single high-friction handoff or reducing manual reporting can show immediate productivity gains and build internal confidence in AI-driven improvements.
  • Align systems with business objectives. AI should support real operational goals, not abstract innovation metrics. When workflows are clearer and less fragmented, AI becomes more actionable by default.
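The second step — standard inputs and outputs — can be as lightweight as validating every workflow handoff against one shared shape. The sketch below assumes a hypothetical handoff payload (the field names are my own, not from any real system); the point is that once every team emits the same structure, downstream automation, AI-driven or otherwise, has something reliable to consume.

```python
# One shared contract for workflow handoffs. Downstream automation can
# rely on these fields being present with the expected types.
REQUIRED_FIELDS = {
    "workflow_id": str,
    "owner": str,
    "status": str,       # e.g. "pending", "done", "blocked"
    "artifacts": list,   # links or IDs produced by this step
}

def validate_handoff(payload: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload conforms."""
    problems = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected):
            problems.append(f"{field} should be {expected.__name__}")
    return problems
```

In practice a team might reach for JSON Schema or a typed data model instead of a hand-rolled check, but the discipline is the same: reject malformed handoffs at the boundary rather than letting humans quietly patch them up downstream.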

Moving Past Stalled AI Projects

AI can't deliver results in disconnected systems or broken workflows. The organizations seeing real productivity gains today aren't necessarily the ones with the most advanced models, the largest budgets, or the most tools deployed. They're the ones identifying hidden pain points, clarifying their Legacy Layer, and aligning stakeholders around how work actually gets done.

For DevOps leaders, the takeaway is simple: before deploying more AI, look for the hidden blockers underneath. If humans don't understand the workflow, AI never will. 

David Torgerson is VP of Infrastructure and IT at Lucid Software
