Many organizations describe AI as strategic, but they do not manage it strategically. When AI plans are disconnected from strategy, detached from organizational learning, and shielded from serious assumption testing, the problem is no longer technical immaturity; it is a failure of management discipline.
In conversations with clients and conference attendees, I still hear of executives seeking to drive AI adoption by edict, often without a strategic framework. Executives too often tell organizations to "use AI" before defining what AI is supposed to change. The problem deepens in organizations where strategy isn't well articulated in the first place.
Realizing AI's transformative potential requires more than executive urgency. Organizations need design thinking to determine where AI belongs, what work it changes, what risks it creates, and what capabilities must be built before it scales. Without that work, organizations risk fragile deployments, failed pilots, and a gap between what employees expect and the experience of being told to use AI with no guidance about where or how it should change daily work.
Most organizations and their workers remain underprepared for AI. They think about it too simply, apply it too naively, and then wonder why the results do not live up to the hype.
AI also requires continuous organizational learning. Although it is often framed as a threat to knowledge work, understanding AI has become knowledge work in its own right.
With AI, last week's knowledge may prove inadequate when a new feature ships, a limitation disappears, or a capability is discovered through use.
Paying attention to the world around us remains a strategic imperative. AI may help analyze competitive and geopolitical situations, but it cannot, by itself, determine how it should be applied inside an organization, adopted by people, or measured against specific business results. People who know the business need to guide AI use in line with the organization's strategy.
One way to tackle that problem is to apply rigorous knowledge management and organizational learning to AI practice. That means involving AI teams and line-of-business subject matter experts in the same learning system, not treating deployment, governance, and adoption as separate workstreams.
Most organizations do not make guardrails transparent, share prompts for collaborative reuse and improvement, or manage context as a governed asset. That context includes the data used for retrieval-augmented generation, the prompts that guide and orchestrate agents, and the nodes and relationships inside a knowledge graph.
Each area benefits from knowledge management. Each also makes AI more visible, more manageable, and more likely to deliver on its design intent.
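As a sketch of what "context as a governed asset" could look like in practice, consider treating a prompt the way source code is treated: named, versioned, owned, and reviewable. The Python below is illustrative only; the names (PromptAsset, publish, the registry) are hypothetical assumptions, not a reference implementation.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch only: a prompt treated as a governed, versioned asset
# rather than text pasted into a chat window. All names are hypothetical.

@dataclass
class PromptAsset:
    name: str                # stable identifier, e.g. "claims-triage-summary"
    version: str             # explicit version so changes are traceable
    owner: str               # accountable team or line-of-business SME
    text: str                # the prompt itself, reviewable like source code
    retrieval_sources: list[str] = field(default_factory=list)  # governed RAG corpora
    guardrails: list[str] = field(default_factory=list)         # named policy checks
    last_reviewed: date = field(default_factory=date.today)

# A shared registry makes prompts discoverable for reuse and improvement.
registry: dict[str, PromptAsset] = {}

def publish(asset: PromptAsset) -> None:
    """Register a prompt so colleagues can find, reuse and improve it."""
    registry[f"{asset.name}@{asset.version}"] = asset
```

The specific mechanism matters less than the discipline: once prompts, retrieval sources and guardrails are named and versioned, they can be audited, improved and retired like any other organizational asset.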
From an IT operations perspective, this is also an observability problem. Organizations cannot manage what they cannot see. If AI systems are changing work through hidden prompts, undocumented retrieval sources, opaque guardrails and loosely governed agents, then AI adoption becomes a form of unmonitored operational change. The issue is not only whether a model produces a plausible or useful answer. The issue is whether the organization understands the conditions under which that answer was produced, how it was used, what changed as a result, and how to adapt if those conditions change.
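To illustrate, the conditions behind an answer can be captured as a structured event alongside the answer itself. This is a minimal sketch assuming a generic logging setup and hypothetical field names, not a prescribed schema:

```python
import json
import logging
from datetime import datetime, timezone

log = logging.getLogger("ai.observability")

def record_interaction(model: str, prompt_ref: str, retrieval_ids: list[str],
                       guardrail_results: dict[str, bool], answer_id: str) -> None:
    """Emit one structured event per AI answer. Field names are hypothetical;
    the point is that the conditions of production become queryable, not hidden."""
    log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,                      # which model produced the answer
        "prompt": prompt_ref,                # e.g. a versioned reference like "triage@1.4.0"
        "retrieval_sources": retrieval_ids,  # documents actually retrieved for this answer
        "guardrails": guardrail_results,     # which policy checks ran and their outcomes
        "answer_id": answer_id,              # links the record to the produced output
    }))
```

With records like these, AI adoption stops being unmonitored operational change and becomes something operations teams can actually observe, question and correct.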
Organizations must also challenge assumptions about AI. At the strategic level, that means using scenario planning to test how industries might evolve as AI capabilities advance, stall, redirect, or disappoint. They need to explore the social, technological, economic, environmental and political (STEEP) dimensions of AI's future, not only the technology itself.
Growing skepticism among some Gen Z workers raises questions about AI's appeal to future employees, while labor movements within creative industries seek to curtail its use.
Although AI is likely to remain embedded in business and consumer interactions, organizations should not assume universal enthusiasm, frictionless adoption, or the pace of scale forecast by its most aggressive advocates.
The "T" in STEEP is also not preordained. Autonomous agents, for instance, have not yet produced a defining enterprise failure. That absence should not be read as evidence of safety. At some point, system complexity will expose gaps in governance, security, authoritative control and accountability.
Further, new AI-based systems may optimize processes and codify practice, but those implementations reflect the current context. As regulations, customer expectations, product innovations and operating models change, those systems must change with them. If organizations reduce the number of people accountable for change management too aggressively, they may find that AI systems have optimized for yesterday's context and lack the stewardship needed to adapt safely to tomorrow's.
AI is not an infrastructure choice in the way moving to the cloud often was. It is a pervasive invention that pushes at the boundaries of what it means to work, to know and to be an organization. It also occupies the mundane: rephrasing a sentence, summarizing a meeting, drafting a ticket, or drawing an image for a birthday card.
The subtlety and pervasiveness of AI mean that organizations must design strategic intent, work experience, governance, knowledge stewardship and learning at both the individual and organizational levels. AI is often marketed as magic, but it is better understood as engineered change. Organizations that fail to apply design to their AI-augmented futures risk not just dysfunction, but irrelevance.