APMdigest's Predictions Series concludes with 2026 AI Predictions — industry experts offer predictions on how AI and related technologies will evolve and impact business in 2026. Part 2 covers solutions and strategies that support AI.
INFRASTRUCTURE MODERNIZATION
After several years of AI-first investments, 2026 will mark a return to fundamentals. Organizations are recognizing that lasting innovation depends on the strength of the infrastructure supporting it. With 25% of mission-critical systems near end-of-service, leaders will shift their focus to reinforcing their IT core — sometimes at the expense of experimental AI pilots. Infrastructure modernization will move to the forefront, claiming a larger share of IT budgets as companies solidify the systems that enable AI. This isn't a retreat from innovation; it's a recalibration. The next wave of competitive advantage will come from combining advanced AI capabilities with secure, high-performing, and compliant foundations.
Dennis Perpetua
Global CTO Digital Workplace Services & Experience Officer, VP & Distinguished Engineer, Kyndryl
AI compatibility will force modernization of legacy estates: As AI becomes embedded in routine decisioning and operational processes, enterprise businesses and financial institutions will accelerate efforts to modernize or bridge their legacy systems. Firms will invest heavily in integration layers, data pipelines, and architectural clean-up to ensure older platforms can participate in AI-enhanced workflows. By 2026, "AI readiness" will be a primary driver of technical debt remediation projects.
Robert Cooke
CEO, 3forge
AGENTIC ENTERPRISE BLUEPRINT
The new architecture blueprint for Agentic Enterprises: In 2026, today's IT architecture will officially become a legacy model, unable to support the autonomous power of AI agents. To scale, enterprises will urgently pivot to a new Agentic Enterprise blueprint with 4 new architectural layers: a shared Semantic Layer to unify data meaning, an integrated AI/ML Layer for centralized intelligence, an Agentic Layer to manage the full lifecycle of a scalable agent workforce, and an Enterprise Orchestration Layer to securely manage complex, cross-silo agent workflows. This architectural shift will be the defining competitive wedge, separating companies that achieve end-to-end automation from those whose agents remain trapped in application silos.
Emin Gerba
Chief Architect, Salesforce
MASSIVE COMPUTE POWER
AI's Next Leap - Expect Some Surprises in 2026: In 2026, we'll see the true second wave of AI innovation, driven not by new algorithms, but by the arrival of massive compute capacity finally coming online. Over the past year, leading organizations like OpenAI have hinted at projects they can't yet launch due to compute constraints. That's about to change. The world's largest super-scale clusters are only in their infancy, with several just beginning to come online. As they do, we'll see breakthroughs that have been quietly waiting in the background, including new multimodal models, real-time video generation tools that go beyond Sora, and entirely new categories of AI services. Many of the business concerns around cost, scalability, and accessibility will start to ease as this infrastructure matures. Expect surprises — some truly magical — because, at this point, we still don't know what we don't know.
Tom Traugott
SVP of Emerging Technologies, EdgeCore Digital Infrastructure
REAL-TIME DATA
Real-time data will become critical to AI ROI: Following years of historic investment in AI without clear returns, 2026 will see business leaders come under increasing pressure to deliver value. Companies that integrate real-time data pipelines into AI architectures will unlock ROI faster by making models context-aware and continuously adaptive.
Tun Shwe
Head of AI, Lenses.io
DATA QUALITY
Data Quality and Organization Will Make or Break AI Success: Generative AI growth will continue to strain networks, storage and compute, forcing upgrades in data availability, curation and throughput. But AI success will depend less on algorithms and more on data quality, organization and curation, echoing long-standing analytics challenges. Without that, downstream outcomes will falter.
Jeb Horton
SVP of Global Services, Hitachi Vantara
Data Integrity Will Determine the True Value of AI: The advancement of AI is wholly dependent on the data that fuels it, and in 2026 that dependency is likely to become increasingly stark. Many companies have rushed into AI without first checking that their data is correct, complete, or bias-free. Executives will conduct data integrity audits with the same seriousness and intensity as financial audits to confirm that their data is trustworthy and traceable. Without clean data, even the most capable AI will be unable to perform reliably.
Kim Crawford Goodman
CEO, Smarsh
SEMANTIC LAYER
The AI-Ready Semantic Layer Becomes Non-Negotiable: To make AI accurate, you first have to make data understandable. In 2026, the semantic layer will shift from a "nice-to-have" to the backbone of trusted AI. Auto-generated semantics, metadata, and business rules will provide the needed context to translate raw schemas into governed, explainable data models, bridging the gap between data engineering and AI reasoning. Without it, enterprises risk models that sound confident but act blind.
Yael Lev
Senior Director AI Strategy & Engineering, Sisense
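The pattern above can be illustrated with a minimal, hypothetical sketch: business metrics are defined once, with governed SQL expressions and human-readable context, so an AI layer resolves a business question to an explainable query instead of guessing at raw schemas. All names and fields here are assumptions, not any vendor's actual semantic-layer API.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str            # business-friendly metric name
    sql: str             # governed SQL expression
    grain: str           # dimension the metric is reported at
    description: str     # context an LLM can use to ground its answer

# A single governed definition, reused by every downstream consumer.
SEMANTIC_LAYER = {
    "net_revenue": Metric(
        name="net_revenue",
        sql="SUM(order_total - refunds)",
        grain="order_date",
        description="Revenue after refunds, in USD.",
    ),
}

def compile_query(metric_key: str, table: str) -> str:
    """Resolve a business metric to a governed, explainable SQL query."""
    m = SEMANTIC_LAYER[metric_key]
    return f"SELECT {m.grain}, {m.sql} AS {m.name} FROM {table} GROUP BY {m.grain}"

query = compile_query("net_revenue", "orders")
```

Because the SQL expression lives in one governed place, a model answering "what was net revenue last week?" emits the vetted definition rather than improvising arithmetic over raw columns.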
In 2026, semantics will be the most important AI governance guardrail. Training AI is akin to managing well-intentioned interns. AI models may be smart and capable, but like any agent — human or otherwise — they still require clear direction, oversight, and consistent evaluation. Adding a semantic layer transforms complex data into a business-friendly format that's more digestible, helping AI interpret and translate data into reliable output. As AI conversations shift from implementation to purposeful action in 2026, leaders will prioritize the people and resources needed to build the semantic layer, in order to ensure that the input data directly aligns with the desired, measurable outputs.
Dave Shuman
CDO, Precisely
CONTEXT ENGINES
By 2026, as AI agents become deeply embedded in software and business systems, their biggest bottleneck won't be reasoning—it will be serving them the right context at the right time. Developers are realizing that stitching together vector databases, long-term memory storage, session stores, SQL databases, and API caches creates a fragile patchwork of solutions. The next evolution will be unified "context engines"— platforms that can store, index, and serve all forms of data through a single abstraction layer. These systems will merge structured and unstructured retrieval, manage both persistent and ephemeral memory, and dynamically route information across diverse sources. This unification will replace fragmented architectures, reduce latency, simplify development, and enable AI agents to operate with fluid, on-demand intelligence across all data modalities.
Manvinder Singh
VP of AI Product, Redis
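The "single abstraction layer" idea can be sketched in a few lines: one engine fronts several heterogeneous context sources (vector search, session memory, API cache) so an agent makes one call instead of stitching stores together. The backends below are in-memory stand-ins, purely illustrative.

```python
class ContextEngine:
    """Toy unified context engine: one interface over many context sources."""

    def __init__(self):
        self._sources = {}

    def register(self, name, fetch_fn):
        """fetch_fn: query -> list of context snippets."""
        self._sources[name] = fetch_fn

    def retrieve(self, query, sources=None):
        """Fan one query out across all (or selected) sources and merge."""
        names = sources or list(self._sources)
        merged = []
        for name in names:
            for snippet in self._sources[name](query):
                merged.append((name, snippet))
        return merged

engine = ContextEngine()
engine.register("vector", lambda q: [f"doc matching '{q}'"])    # stand-in for vector search
engine.register("session", lambda q: ["user prefers metric units"])  # stand-in for session memory
context = engine.retrieve("shipping policy")
```

A production context engine would add indexing, latency-aware routing, and eviction of ephemeral memory, but the agent-facing surface stays this narrow.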
TRADITIONAL SEARCH
Traditional Search - The Quiet Superpower: In 2026, traditional search won't fade away; it will quietly become the backbone of trustworthy AI. As organizations demand accuracy over hallucination, traditional search's precision will ground both generative and agentic systems. Far from obsolete, traditional search is emerging as the core capability that keeps AI anchored to reality and keeps data retrieval reliable.
Bianca Lewis
Executive Director, OpenSearch Software Foundation
MODEL CONTEXT PROTOCOL (MCP)
To orchestrate real-time data and AI at scale, the Model Context Protocol (MCP) will emerge as a backbone for operationalizing AI. Acting as the connective tissue between real-time data streams and AI systems, MCP-based integrations ensure data integrity and accelerate time-to-value for AI investments.
Tun Shwe
Head of AI, Lenses.io
NATIVE INTEGRATION
Companies Will Shift Standalone Models to Deeply Integrated, Context-Enriched Systems: While today's AI adoption often starts with generic LLMs and isolated prototypes, enterprises are realizing that real value doesn't come from the model alone — it comes from how well that model is connected to their internal systems. In 2026, the focus will move away from "building your own" models and toward deploying AI that natively integrates with internal assets: data sources, tools, APIs, operational workflows, and governance layers. Models and agents will increasingly use MCP-like connectors to enrich prompts with internal organizational context, retrieve real-time business data, and perform actions across existing enterprise systems. This shift turns AI from a static text generator into an operational participant — one that queries, validates, updates, and orchestrates tasks based on live internal information. As a result, companies will reduce drift, improve reliability, and unlock far faster time-to-value. Instead of experimenting in isolation, enterprises will rely on integrated, governed, production-ready AI systems that understand their business, operate within their environment, and continuously stay aligned with their internal truth.
Yuval Fernbach
VP and CTO of MLOps, JFrog
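The "MCP-like connector" pattern can be sketched as follows: before a model call, the prompt is enriched with live internal context fetched through a registered tool. To be clear, this is not the Model Context Protocol itself, just a hypothetical illustration of the enrich-then-prompt flow, with a stubbed inventory system.

```python
def inventory_lookup(sku: str) -> str:
    """Stand-in for a real internal system call (hypothetical data)."""
    stock = {"A-100": 42}
    return f"SKU {sku}: {stock.get(sku, 0)} units in stock"

# Registered connectors an agent is allowed to call.
CONNECTORS = {"inventory": inventory_lookup}

def enrich_prompt(user_question: str, tool: str, arg: str) -> str:
    """Inject governed, real-time internal context ahead of the question."""
    context = CONNECTORS[tool](arg)
    return f"[internal context] {context}\n[question] {user_question}"

prompt = enrich_prompt("Can we ship 10 units today?", "inventory", "A-100")
```

The model now answers from live internal truth rather than stale training data, which is what turns it into the "operational participant" described above.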
BIGGER VS. SMARTER MODELS
The AI Arms Race Hits a Wall, Shifting Focus From Bigger Models to Smarter Ones: The era of simply throwing more compute and data at pre-training to build ever-larger foundation models is ending. We began hitting a wall in 2025 with established scaling laws like the Chinchilla formula, for two key reasons: we're running out of high-quality pre-training data, and the token horizons needed for training are becoming unmanageably long. The frantic race to build the biggest model will consequently slow down. The real innovation has shifted to post-training techniques, where companies are already dedicating as much as 50% of their compute resources. This means the focus in 2026 won't be on the sheer size of AI models, but on refining and specializing models with techniques like reinforcement learning to make them dramatically more capable for specific tasks.
Vivek Raghunathan
SVP of Engineering, Snowflake
SMALL LANGUAGE MODEL (SLM)
Smaller, Smarter AI Models Will Power Enterprise Innovation: In 2026, enterprises will shift away from large, general-purpose models toward smaller, specialized systems trained on their own data. These Small Language Models (SLMs) will deliver more accuracy, control, and efficiency, proving the point that bigger isn't always better. Companies will start building networks of narrow models, each designed to excel in a specific area, from HR to supply chain to customer support. The real advantage will come from how well these systems work together. Training and maintaining proprietary SLMs will also become a key competitive edge for companies. Those who invest early in grounding AI on their unique data and workflows will create models that are not just tools, but strategic assets that reflect how their business truly runs.
Ed Macosky
Chief Product & Technology Officer, Boomi
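The "network of narrow models" can be sketched as a router that inspects a request and dispatches it to the domain specialist best suited to answer. Everything here is illustrative: the specialist models are stubs, and keyword matching stands in for a learned routing classifier.

```python
# Hypothetical domain specialists (stand-ins for fine-tuned SLMs).
SPECIALISTS = {
    "hr": lambda q: f"[hr-slm] {q}",
    "supply_chain": lambda q: f"[supply-slm] {q}",
    "support": lambda q: f"[support-slm] {q}",
}

# Naive keyword routing; a real deployment would use a trained classifier.
KEYWORDS = {
    "vacation": "hr", "payroll": "hr",
    "shipment": "supply_chain", "warehouse": "supply_chain",
    "refund": "support", "ticket": "support",
}

def route(question: str) -> str:
    """Send the question to the first matching domain specialist."""
    for word, domain in KEYWORDS.items():
        if word in question.lower():
            return SPECIALISTS[domain](question)
    return SPECIALISTS["support"](question)  # default lane

answer = route("Where is my shipment?")
```

The advantage the prediction describes comes from the routing fabric as much as the models: each specialist stays small and grounded in its own domain data.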
Cost pressure will drive the rise of small, task-optimized AI models: Unsustainable inference costs will drive a shift from general-purpose models to smaller, specialized models optimized for specific coding tasks. The new evaluation standard will be whether models can solve problems correctly, securely, efficiently, and cost-effectively — with organizations reducing AI spend while improving code quality through specialization rather than scale.
Harry Wang
Chief Growth Officer, Sonar
PURPOSE-BUILT MODELS
Purpose-built, enterprise-specific AI agents will outperform generic large language models in operations contexts by embedding real-world IT knowledge and business context.
Ritu Dubey
Market Head, Digitate
Small, purpose-built LLMs begin to take hold: Enterprises will continue exploring small, lightweight models to support agentic AI initiatives, but widespread deployment is just beginning. The assumption that every AI solution will revolve around ChatGPT, Anthropic, or Perplexity is inaccurate. Instead, smaller, purpose-built models optimized for inference will be the norm, enabling faster, more efficient AI applications.
Kevin Cochrane
CMO, Vultr
AI Becomes Weird and Very Focused: In 2026 and beyond, AI will fragment into highly specialized "micro-LLMs" that serve niche communities, both inside and outside of technology. These micro-LLMs will act like tribal elders of old, but in a digital way. They will focus on holding and curating domain-specific knowledge, growing smarter through community interactions. They will blur the lines between human knowledge and machine learning. From hobbyists to professionals, people will increasingly create and participate in these micro-LLM communities tailored to their specific interests. This trend will make AI feel both "weird" and deeply focused, reflecting the quirks and passions of the communities that shape it. In fact, it's passion, not profession, that will drive the expertise of micro-LLMs.
Bill Peterson
Sr. Director of Product Marketing, Sumo Logic
INPUT LAYER ACCURACY
The competitive battleground will shift from model size (LLMs) to input layer accuracy and structuring. As foundation models become commoditized, the ability to reliably convert high-friction, unstructured data (especially voice, jargon, and complex technical logs) into clean, schema-mapped inputs will become the most valuable IP. Enterprises will recognize that AI cannot deliver ROI without "AI-ready" data, making specialized input technologies that guarantee data fidelity at the source the biggest driver of enterprise spending.
Amir Haramaty
Co-Founder and President, aiOla
MINI AI PODS
Department-Scale, Compliant AI Footprints Outpace Agency-Wide Platforms: The idea of a single, monolithic, agency-wide AI platform will be largely abandoned in favor of "mini AI pods" and sovereign guardrails that prioritize time-to-value. The fatal flaw of large government IT projects — slow deployment and ballooning costs — will cripple monolithic AI platforms. Instead, agencies will find success with modular, FedRAMP/StateRAMP-aligned reference stacks that can be deployed in weeks. These "pods" will be right-sized for specific department needs (like a DA's office or a state DOT) and come with data residency, cost controls, and turnkey MLOps baked in. This modular approach is validated by market-wide architecture trends. Monolithic systems are known to accumulate technical debt and create bottlenecks, whereas modular, API-driven architectures are "ideal for dynamic AI workloads" (Shaped.ai, 2025). The private sector is already building a market around this, with major vendors now offering "FedRAMP Moderate Authorized" platforms designed for rapid reuse, collapsing the procurement-to-production timeline from years to weeks.
Carm Taglienti
CTO, Insight Public Sector, Insight Enterprises
MULTI-AGENTIC ENTERPRISE
In 2026, single AI agents will become digital dead-end islands, offering isolated value but failing to scale, trapping enterprises in a productivity paradox. True enterprise success will demand a fully orchestrated digital workforce where agents collaborate seamlessly with other agents, across departments, and outside the organization. This transition from single agents to multi-agent intelligence is blocked by a failure to establish three necessary technology foundations: multi-agent protocol for open interoperability and communication, integrated multi-agent context for a unified data foundation, and robust multi-agent governance for security and observability of all agents.
MK
President & CTO of Engineering, Salesforce
HYBRID AGENTIC SYSTEMS
Hybrid agentic systems will carve out a big space in enterprise application development: The next phase of AI will lean heavily on smarter orchestration and efficiency. A big bet for 2026 is that companies seeking higher margins while witnessing diminishing improvements in frontier models will increasingly favor hybrid agentic systems that blend large language models (LLMs) with small language models (SLMs). Most organizations are unlikely to invest heavily in training or fine-tuning new models, as the integration of SLMs into these ecosystems will become their strongest asset. Sufficiently powerful, inherently more suitable, and necessarily more economical for many agentic systems, SLMs are the evident future of effective agentic AI. In the years ahead, the hybrid orchestration of LLMs and SLMs is likely to define the practical architecture of intelligent enterprises.
Gonçalo Borrêga
VP of Product, AI & AppDev, OutSystems
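One common shape of this hybrid orchestration is cost-aware escalation: try the cheap specialized SLM first, and fall back to the frontier LLM only when the SLM reports low confidence. The sketch below assumes stub models and a made-up confidence threshold; it illustrates the control flow, not any particular product.

```python
def slm(query):
    """Small model: fast and cheap, confident only on narrow tasks."""
    if "invoice" in query:
        return ("processed invoice", 0.95)
    return ("unsure", 0.30)

def llm(query):
    """Large model: expensive fallback with broad coverage."""
    return (f"llm answer for: {query}", 0.90)

def hybrid_answer(query, threshold=0.8):
    """Route to the SLM; escalate to the LLM below the confidence bar."""
    answer, confidence = slm(query)
    if confidence >= threshold:
        return answer, "slm"
    answer, _ = llm(query)
    return answer, "llm"

result, tier = hybrid_answer("summarize this invoice")
```

The economics follow directly: the more traffic the SLM absorbs above the threshold, the less the system pays frontier-model prices.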
AGENT INTEROPERABILITY
Agent Interoperability Will Unlock the Next Wave of AI Productivity: Today, most AI agents operate in walled gardens, unable to communicate or collaborate with agents from other platforms. This is about to change. By 2026, the next major frontier in enterprise AI will be interoperability — the development of open standards and protocols that allow disparate AI agents to speak to one another. Just as the API economy connected different software services, an "agent economy" will quickly emerge, where agents from different platforms can autonomously discover, negotiate, and exchange services with one another. Solving this challenge will unlock compound efficiencies and automate complex, multi-platform workflows that are impossible today, ushering in the next massive wave of AI-driven productivity.
Baris Gultekin
VP of AI, Snowflake
AGENT-TO-AGENT ECOSYSTEMS
AI agents will stop operating as isolated tools and start forming true ecosystems, where agents routinely communicate and collaborate with one another to get work done. One agent will be able to discover, invoke, and coordinate with other specialized agents to solve more complex problems. This will be enabled by a kind of "agent register" or app store, where different agents and their capabilities are listed and can be programmatically accessed. Over time, this register will evolve into a marketplace where agents and their creators can be monetized, with clear pricing and usage models.
Stefan Ostwald
Co-Founder and CAIO, Parloa
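The "agent register" idea can be sketched as a small registry where agents publish capabilities and others discover and invoke them programmatically; a price field hints at the marketplace evolution described above. All names and the pricing model are hypothetical.

```python
class AgentRegister:
    """Toy agent register: publish, discover, and invoke agent capabilities."""

    def __init__(self):
        self._agents = {}

    def publish(self, name, capabilities, handler, price_per_call=0.0):
        """List an agent with its advertised capabilities and pricing."""
        self._agents[name] = {
            "capabilities": set(capabilities),
            "handler": handler,
            "price": price_per_call,
        }

    def discover(self, capability):
        """Return the names of agents advertising a capability."""
        return [n for n, a in self._agents.items() if capability in a["capabilities"]]

    def invoke(self, name, payload):
        """Call a registered agent (metering and billing omitted here)."""
        return self._agents[name]["handler"](payload)

register = AgentRegister()
register.publish("translator", ["translate"], lambda p: f"translated: {p}", price_per_call=0.01)
found = register.discover("translate")
result = register.invoke(found[0], "hello")
```

Turning this register into a marketplace is mostly a matter of layering metering, billing, and trust signals onto the same discover/invoke surface.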
COMPOSITE AI PLATFORMS
Composite AI platforms (combining machine learning, generative AI, and AI agents into unified systems) will become table stakes, replacing single-purpose solutions.
Ritu Dubey
Market Head, Digitate
HIVE ARCHITECTURES
Rise of Hive Architectures and AI Orchestration Platforms: We can expect a major shift from simple agent swarms to more structured "hive" architectures — systems where a central decision-making agent orchestrates a team of specialized agents. This approach will enable more scalable, task-oriented automation across enterprises. We can also expect a clearer separation between the tools used to build agents and the platforms used to run and manage them, similar to how application development and cloud deployment diverged in past technology eras. These orchestration platforms will become foundational infrastructure for AI-driven businesses by 2030.
James Urquhart
Field CTO and Technology Evangelist, Kamiwaza AI
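The hive pattern can be reduced to a few lines: a central orchestrator decomposes a task, delegates each step to a specialized worker agent, and assembles the results. The workers and the fixed plan below are stubs for illustration; a real hive would plan dynamically.

```python
# Hypothetical specialized worker agents in the hive.
WORKERS = {
    "research": lambda task: f"notes on {task}",
    "draft": lambda notes: f"draft built from '{notes}'",
    "review": lambda draft: f"approved: {draft}",
}

def orchestrate(task):
    """Central decision-making agent: sequence the specialists end to end."""
    notes = WORKERS["research"](task)
    draft = WORKERS["draft"](notes)
    return WORKERS["review"](draft)

outcome = orchestrate("Q3 report")
```

The separation the prediction anticipates is visible even here: the worker definitions are build-time artifacts, while `orchestrate` belongs to the run-and-manage platform.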
AI MEMORY
From Forgetful Oracles to Persistent Partners: AI Will Finally Get a Memory: A major limitation of today's AI assistants is their transactional nature; they largely forget you after each interaction. This is set to change as memory becomes a core, practical capability for AI agents. Instead of starting from scratch with every query, AI will learn to persist and retrieve relevant information from past conversations and context, allowing for truly personalized and continuous collaboration. This is more than just recalling facts — it's about an AI tool understanding user preferences, project histories, and evolving goals. This shift will transform AI from a simple tool you use for a task into a persistent partner that grows with you, making interactions more efficient and deeply contextual over time.
Vivek Raghunathan
SVP of Engineering, Snowflake
AI SELF-EVALUATION
AI self-evaluations are going to be a big thing in 2026: AI agents will be designed to self-evaluate whether what they generated was appropriate for the situation at hand. LLMs are going to keep improving, and the vendors of those tools have every incentive to try to replace as much of the work of human developers as possible. It remains to be seen if and when cost becomes a factor because, as of right now, most of the frontier model companies are losing a lot more money than they are making. Presumably, at some point, they'll have to increase prices to compensate for those losses. With so much attention focused on these AI tools, it's less clear what the role of new languages and frameworks will be in 2026, or what would have to happen for a new one to take hold.
Jon Friskics
Senior Technical Author, Pluralsight
The Future of AI Agents Is In Self-Verification, Not Human Intervention: In 2026, the biggest obstacle to scaling AI agents — the build-up of errors in multi-step workflows — will be solved by self-verification. Instead of relying on human oversight for every step, AI will be equipped with internal feedback loops, allowing agents to autonomously verify the accuracy of their own work and correct mistakes. This shift to self-aware, "auto-judging" agents will enable the development of complex, multi-hop workflows that are both reliable and scalable, moving them from a promising concept to a viable enterprise solution.
Dwarak Rajagopal
VP of AI Engineering and Research, Snowflake
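The internal feedback loop described above can be sketched as generate, verify, and retry with feedback until the check passes or attempts run out. The generator and verifier here are stubs (the stub "fixes" itself on the first retry); the loop structure is the point.

```python
def generate(task, feedback=None):
    """Stand-in for a model call; incorporates verifier feedback on retry."""
    return f"{task} (corrected)" if feedback else task

def verify(output):
    """Internal feedback loop: return (ok, feedback_for_retry)."""
    if "(corrected)" in output:
        return True, None
    return False, "output failed the consistency check; revise"

def run_with_self_verification(task, max_attempts=3):
    """Generate, self-check, and retry until the verifier passes."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        output = generate(task, feedback)
        ok, feedback = verify(output)
        if ok:
            return output, attempt
    raise RuntimeError("agent could not self-correct within the attempt budget")

output, attempts = run_with_self_verification("reconcile ledger")
```

The attempt budget matters: without it, a verifier the agent can never satisfy would loop forever, which is the multi-hop failure mode the prediction is trying to eliminate.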
FINOPS FOR AI
If 2025 was the year of exponential AI investment, 2026 will be the year of marrying cost optimization into development practices. AI workloads produce some of the most significant blind spots, particularly around efficient GPU utilization, network spend, and misconfigurations that could drive untold waste. In many cases, companies have been so focused on the AI horse race that model cost considerations have been deprioritized. As organizations face increasing pressure to demonstrate AI value and align investments with business outcomes, FinOps for AI will gain traction as a critical framework to understand and optimize AI and infrastructure spend.
Kai Wombacher
Product Manager, IBM Kubecost
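The GPU-utilization blind spot mentioned above comes down to simple arithmetic: the effective cost per useful GPU-hour rises as utilization falls. The rates and utilization figures below are hypothetical, chosen only to make the math concrete.

```python
def effective_gpu_cost(hourly_rate, utilization):
    """Cost per *useful* GPU-hour at a given utilization in (0, 1]."""
    if not 0 < utilization <= 1:
        raise ValueError("utilization must be in (0, 1]")
    return hourly_rate / utilization

# A $4/hr GPU running at 40% utilization really costs $10 per useful hour.
cost = effective_gpu_cost(4.0, 0.4)
```

FinOps for AI starts by making this number visible per workload, so teams see that raising utilization is often cheaper than buying more capacity.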
BUSINESS PROCESS REENGINEERING
In 2026, we will see a resurgence of Business Process Reengineering. First popularized by Hammer and Champy in Reengineering the Corporation, this approach prioritizes radical business process change through focus on business process redesign. Companies that are most successful in deploying AI in 2026 will reimagine value flows completely, transforming entire functions and processes rather than incremental change. This will empower leaders and teams to make dramatic and high-value changes to how business outcomes are achieved.
Jeremiah Stone
CTO, SnapLogic
INSTRUCTION ADHERENCE
Instruction adherence becomes a key reliability metric: In 2026, the question of whether an agent is "working" will shift from simple output to measurable Instruction Adherence, becoming the industry's new key reliability metric for AI governance. Enterprises will demand probabilistic adherence scores — categorized as high, low, or uncertain — to enable developers to refine their instructions accordingly and gain the confidence of CIOs in their agents' reliability and trustworthiness. This focus on a quantitative measure of compliance will be essential for scaling enterprise agents and avoiding costly errors, setting a new benchmark for "safe and reliable" AI.
Mohith Shrivastava
Principal Developer Advocate, Salesforce
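The high/low/uncertain scoring described above can be sketched as a simple bucketing of a probabilistic adherence score. The thresholds here are assumptions for illustration, not an established standard.

```python
def categorize_adherence(score, high=0.85, low=0.5):
    """Map a probabilistic instruction-adherence score to a governance category."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if score >= high:
        return "high"
    if score <= low:
        return "low"
    return "uncertain"

# Example: scores from three agent runs, bucketed for a CIO-facing report.
labels = [categorize_adherence(s) for s in (0.95, 0.30, 0.70)]
```

The "uncertain" band is the actionable one: those are the runs whose instructions developers refine first, which is how adherence scoring feeds back into agent reliability.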
AGENT-AS-A-SERVICE
Software will fade away as agent-as-a-service delivers outcomes: The agent-as-a-service market is projected to expand from $5.1 billion in 2024 to $47.1 billion by 2030. In 2026, we'll see more agent subscriptions and services that operate across multiple repositories and databases without discriminating between backend systems and fewer SaaS instances. This means that employees will command groups of AI agents that orchestrate workflows across systems instead of opening multiple tabs to use different software or SaaS platforms. Real, tangible outcomes that drive business forward will be the stars of the show, not the software that gets them there.
Tiago Azevedo
CIO, OutSystems
Go to: 2026 AI Predictions - Part 3, covering barriers and challenges for AI