APMdigest's Predictions Series concludes with 2026 AI Predictions — industry experts offer predictions on how AI and related technologies will evolve and impact business in 2026. Part 4 covers negative impacts of AI.
AI AGENTS OUTNUMBER PEOPLE
2026 will be the year that AI agents outnumber people. By the end of the year, expect to see at least one agent per connected person; within three years, it will be up to 10 AI agents per connected person. This is a huge security issue that security teams should be planning for now. Most AI agent developers are focused on efficiency, not security. If you don't have an AI policy already, create one now. Step one to ensuring users comply with the policy is creating visibility: figure out how to monitor AI to see what it's accessing, which users are using it, and what they're using it for.
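As an illustration only, not any vendor's product, a minimal sketch of that visibility step might log which users invoke which AI tools and what resources those tools touch. All names here (`AIUsageLog`, the tool and resource labels) are hypothetical:

```python
from collections import defaultdict
from datetime import datetime, timezone

class AIUsageLog:
    """Minimal in-memory log of which users invoke which AI tools,
    and which resources those tools access. A real deployment would
    persist events and feed a policy/alerting pipeline."""

    def __init__(self):
        self.events = []
        self.by_tool = defaultdict(int)

    def record(self, user, tool, resource):
        # Capture who used what, against which resource, and when.
        self.events.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "tool": tool,
            "resource": resource,
        })
        self.by_tool[tool] += 1

    def report(self):
        # Per-tool usage counts for policy review.
        return dict(self.by_tool)

log = AIUsageLog()
log.record("alice", "code-assistant", "repo:payments")
log.record("bob", "chatbot", "doc:hr-handbook")
print(log.report())
```

Even a log this simple answers the three questions above (what is accessed, by whom, and for what), which is the prerequisite for enforcing any AI policy.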
Prakash Mana
CEO, Cloudbrink
ON-DEMAND WEBINAR: Beyond the VPN: Why ZTNA Alone Isn't Enough — and What's Next
TECH OUTAGES
The next major tech outage will be caused by drift from autonomous or semi-autonomous agents engineers didn't even know were running — something we call "AI-induced drift." Incorrect agent actions, stale context windows, silent tool misuse, or over-generalized policies gradually push production systems into invalid states. Unlike traditional software regressions, these failures accumulate slowly, evade canary detection, and only surface after state divergence has already propagated across services. Postmortems will start citing "agent-induced state divergence" the same way they once cited misconfigurations or bad deploys. This will catalyze an entirely new market around agent observability, behavioral diffing, policy rollback, and temporal replay of agent decisions.
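To make "behavioral diffing" concrete, here is a deliberately crude sketch (our illustration, not an implementation from any of these vendors): compare the frequency of each action type in a recent window against a baseline window and flag large shifts. The function name, threshold, and action labels are all hypothetical:

```python
from collections import Counter

def behavior_diff(baseline_actions, recent_actions, threshold=0.5):
    """Flag action types whose relative frequency shifted by more than
    `threshold` between a baseline window and a recent window — a crude
    proxy for detecting agent-induced drift before it propagates."""
    base = Counter(baseline_actions)
    recent = Counter(recent_actions)
    n_base = max(len(baseline_actions), 1)
    n_recent = max(len(recent_actions), 1)
    drifted = {}
    for action in set(base) | set(recent):
        f_base = base[action] / n_base
        f_recent = recent[action] / n_recent
        if abs(f_recent - f_base) > threshold:
            drifted[action] = (round(f_base, 2), round(f_recent, 2))
    return drifted

# An agent that normally reads config suddenly starts writing it.
baseline = ["read_config"] * 9 + ["write_config"]
recent = ["read_config"] * 3 + ["write_config"] * 7
print(behavior_diff(baseline, recent))
```

A production system would diff richer signals (tool arguments, targets, timing), but the principle is the same: the drift is visible in the agent's behavior distribution long before it shows up as an outage.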
Sameer Agarwal
CTO, Deductive AI
AI DATA POISONING
AI Poisoning Is Coming: AI data poisoning is a growing threat that has dulled the shine of AI. Any model can be tainted; however, the companies that prioritize secure, high-quality data while scaling automation will define the future of AI. In 2026, the advantage won't come from flashy innovation or hype. Organizations that deliver accessible, integrated, and trustworthy systems will be the winners.
Steven Pappadakes
Founder & CEO, NE2NE
BLIND TRUST
We're already seeing the rise of AI browsers and autonomous tools that can book trips, buy products, or manage finances on a user's behalf. However, many of these systems are built on unchecked assumptions, turning every brand into a potential risk surface. The biggest issue we'll face in 2026 is blind trust. Most AI failures don't explode; they whisper misinformation, bias, or unsafe results. In 2026, those silent failures will move from chatbots to autonomous decision-making systems. The solution is continuous, real-world testing with human oversight. Companies that test their AI like they test their security will stay trusted; those that don't will learn too late that AI's confidence isn't accuracy.
Dean Hickman-Smith
Chief Revenue Officer, Testlio
BURNOUT
The Hidden Cost of Speed – Burnout: In 2026, organizations across virtually every industry may start to feel the downside of pushing AI for short-term wins without matching investments in human oversight and system understanding. As day-to-day work leans more heavily on automated reasoning, error rates can rise, not because people care less, but because constant delegation slowly dulls human judgment and pattern recognition. Cognitive overload is likely to become a real operational risk: machine-paced outputs will keep arriving faster and in greater abundance than human-paced decisions can be made, creating backlogs of unresolved work. Automation complacency may spread as teams grow comfortable trusting systems they no longer fully understand, widening the gap between perceived and actual risk. Burnout can increase as AI accelerates the tempo of work beyond what individuals and organizations can sustainably adapt to. And accountability may blur when outcomes are produced by human–machine collaboration, leaving no clear owner when things go sideways. In that environment, speed looks like progress until its hidden cost shows up in reduced stability, weaker resilience, and eroding human capability.
Tom Gorup
VP SOC Operations, Sophos
TOOL SPRAWL
The rapid flood of AI-powered development and DevOps tools has grown the temptation to buy everything new and shiny without adhering to a proper strategy. In 2026, organizations will realize that AI doesn't eliminate tool sprawl; it accelerates it. Every tool — especially the AI-driven ones — still requires continuous tuning, governance, and integration. Redundant or poorly managed tools quickly bloat developer workflows, degrade efficiency, and expand the attack surface, ultimately detracting from true business priorities. To stay competitive, enterprises must adopt tools based on the organization's architecture, development methodology, compliance needs, and operational maturity. Gone are the days of treating tech stacks as a collection of point solutions. Now more than ever, it's time for leaders to manage tooling like a curated ecosystem where each tool has a clear owner, defined value, and measurable impact on speed, quality, and security.
Nabil Hannan
Field CISO, NetSPI
AI TECHNICAL DEBT
We will spend another year building AI technical debt, and it won't be something we can fix quickly or easily: Decisions made now will have a significant impact later, especially concerning technical debt. Developers today are making experimental AI-assisted commits at great frequency and speed, and it's more likely than not that at least some of this work will need to be corrected in the coming years, when we all realize that AI was not the catch-all answer to our development issues. Whether the debt is fixed by a skilled human, or by a skilled human wrangling better AI, its impact remains the same.
Pieter Danhieux
Co-Founder and CEO, Secure Code Warrior
AI sprawl becomes the new tech debt: Just like technical debt, many organizations will confront "AI debt": scattered, redundant, and ungoverned models created in silos. 2026 will be the year CIOs focus on rationalizing and centralizing their AI ecosystems. The AI gold rush is over, and now comes the cleanup. 2026 will separate organizations experimenting with AI from those actually operationalizing it with discipline, governance, and measurable impact.
Ha Hoang
CIO, Commvault
SHADOW AI
The Agent Governance Imperative: Over the next year, organizations will realize they are facing a "shadow AI" problem as teams spin up autonomous agents from development tools, cloud platforms, and countless other sources without centralized oversight. This creates both a discovery challenge and a cost management crisis. As agents increase system usage and computing costs, organizations will demand clear ROI tracking for their AI investments. But visibility alone isn't enough. Agent-to-agent interactions expose the limitations of traditional access control systems. These systems were designed for individual human actors, not autonomous agents that direct other autonomous agents. The most successful organizations will implement governance platforms that track what agents are running, the resources they consume, the business value they deliver, and how they interact with each other and critical systems.
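The governance platform described above can be pictured, in miniature, as a registry that records each agent's owner, cost, stated value, and interactions, and that surfaces agents observed running but never registered. This is our hypothetical sketch; every name in it (`AgentRecord`, `ungoverned`, the example agents) is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    name: str
    owner: str             # the responsible human or team
    compute_cost: float    # e.g. dollars consumed this month
    business_value: str    # stated purpose / ROI note
    talks_to: list = field(default_factory=list)  # other agents and systems

registry: dict[str, AgentRecord] = {}

def register(agent: AgentRecord):
    registry[agent.name] = agent

def ungoverned(known_running):
    """Agents observed running but never registered — the 'shadow' agents
    that create the discovery and cost-management problem."""
    return sorted(set(known_running) - set(registry))

register(AgentRecord("deploy-bot", "platform-team", 120.0,
                     "automates release pipeline"))
print(ungoverned(["deploy-bot", "mystery-agent"]))  # ['mystery-agent']
```

The interesting part of a real platform is populating `known_running` automatically (from cloud bills, API gateways, network logs); once discovery exists, the ROI and interaction tracking hang off the same records.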
Michelle Gill
Sr. Director of Engineering, GitLab
Shadow AI has quickly grown into one of the greatest technology risks, fueled by organizational gaps that allow ungoverned usage to thrive. Much like shadow IT before it, shadow AI emerges from employee-driven workarounds, rooted primarily in a lack of clarity, support, and safe alternatives. As we look to 2026, shadow AI will continue to consume the mindshare of executives. To drive change, leaders must close this gap, starting with a trifecta of cultural alignment, proactive enablement, and improved visibility. Without all three, autonomous AI systems will continue to operate unpredictably, creating security, compliance, and cost risks that erode trust and ROI — a disastrous scenario for businesses of all sizes.
Brian Shannon
CTO, Flexera
Shadow AI Will Become a Bigger Risk Than Shadow IT: With open-source models, SaaS AI tools, and API-based agents now one click away, organizations will see a surge in "shadow AI" — teams adopting unvetted models outside formal processes. In 2026, this will eclipse shadow IT as the top operational risk CIOs face. Security teams won't just worry about rogue infrastructure; they'll worry about unapproved models with hidden vulnerabilities, poisoned datasets, and undocumented behaviors. To counter this, enterprises will adopt centralized AI catalogs and enforce model allow-lists as standard practice, similar to how software artifact governance became mandatory during the DevOps era.
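The allow-list practice mentioned above is mechanically simple, which is part of its appeal. As a hedged sketch (the model names and the `check_model` gate are hypothetical, not any real catalog's API), deployment tooling could refuse any model/version pair the governance team hasn't vetted:

```python
# (name, version) pairs vetted by the AI governance team — invented examples.
ALLOWED_MODELS = {
    ("llama-guard", "2.0"),
    ("internal-summarizer", "1.3"),
}

def check_model(name: str, version: str) -> bool:
    """Gate a deployment on the model allow-list, mirroring how
    artifact allow-lists work in software supply-chain governance."""
    if (name, version) in ALLOWED_MODELS:
        return True
    raise PermissionError(
        f"Model {name}:{version} is not approved for deployment"
    )

check_model("llama-guard", "2.0")        # passes silently
try:
    check_model("random-hf-model", "0.1")
except PermissionError as exc:
    print(exc)
```

In practice the catalog entry would also pin a content hash of the model weights, since a poisoned dataset or swapped checkpoint can hide behind an approved name and version.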
Yuval Fernbach
VP and CTO of MLOps, JFrog
Shadow AI continues to take root: In 2026, organizations will need to address the growing issue of shadow AI — the unauthorized use of AI tools by employees — which poses significant governance and security risks. As investments in AI surge, with a projected increase of 40% in 2026, organizations must take proactive measures to mitigate these security risks and ensure their workforce is aligned with organizational AI policies. To counter the increase in employee usage of unauthorized AI tools, companies must implement clear AI governance policies, educate employees on the risks and benefits of AI, and monitor AI tool usage to ensure alignment with organizational goals and security protocols.
Monica Landen
CISO, Diligent
AI ACCOUNTABILITY
The Next AI Crisis Will Be Accountability: As agentic AI becomes a part of everyday workflows, regulators, investors and customers will expect clear proof that AI decisions reflect human intent. When incidents involving autonomous systems reveal new risks, organizations will need to treat trust, risk and security as a continuous framework for business performance. While the discovery of AI agents will remain important, the greater challenge will be establishing trust. Each AI agent will need to be linked to a responsible human with clear delegated authority, creating a clear line of accountability for every autonomous action. This will become a defining business requirement and a new measure of enterprise trustworthiness. Human accountability will form the foundation for AI Trust, Risk, and Security Management (TRiSM) frameworks that unify governance, runtime security, and risk oversight. Organizations that operationalize TRiSM will set a new standard of digital trust and transform it from a compliance checkbox into a board-level KPI and a public signal of integrity in the era of agentic AI.
Abe Ankumah
Chief Product Officer, 1Password
In 2026, the focus will shift from what AI can do to what it can do wrong. As organizations scale AI, accountability and visibility become mission-critical. Every deployment needs clear ownership, structured risk assessments, and continuous human oversight. The more complex the system, the greater the need for deliberate governance. Companies that scale AI without human-in-the-loop controls won't just face risks; they'll face existential trust failures.
Kristel Kruustük
Founder, Testlio
Go to: 2026 AI Predictions - Part 5, the final installment, covering AI's impact on IT Teams