Solving GenAI's Trust Problem: Why Enterprises Need Predictable AI

Don Schuerman
Pega

Every week, a new AI tool claims to reinvent the enterprise. But beneath the hype, many enterprises are grappling with a sobering reality: the GenAI solutions they've deployed are falling far short of expectations.

According to McKinsey, nearly eight in 10 enterprises deploying GenAI are still seeing no meaningful bottom-line impact. More recently, MIT found that 95% of AI pilots fail in the enterprise. The culprit? The unpredictable nature of GenAI itself.

The Trust Problem with GenAI

While GenAI can be a powerful tool for creativity and ideation, it is inherently unpredictable: randomness is baked into the algorithm, and unpredictability is the one thing enterprises can't afford, especially at runtime. Organizations need reliability, transparency, and control to survive, particularly in highly regulated industries like healthcare, finance, and insurance. If operations rely on free-form GenAI, the results can be chaotic.

One incorrect AI output can erode customer confidence, trigger compliance violations, or create significant financial risk. Take banking, for example: if a customer applies for a loan, the institution must ensure the decision is both accurate and repeatable. That means if the customer applies again tomorrow with the same information, they should receive the same outcome. Without that consistency, customers lose trust in the organization.
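Repeatability of this kind is straightforward when the decision is a pure function of the applicant's data. A minimal sketch, with hypothetical thresholds chosen only for illustration:

```python
# A repeatable loan decision: the outcome is a pure function of the
# inputs, with no randomness, so the same application always yields
# the same result. Thresholds here are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class LoanApplication:
    income: float
    debt: float
    credit_score: int


def decide(app: LoanApplication) -> str:
    """Deterministic rule-based decision: auditable and consistent."""
    dti = app.debt / app.income if app.income else float("inf")
    if app.credit_score >= 680 and dti < 0.4:
        return "approved"
    return "declined"


app = LoanApplication(income=80_000, debt=20_000, credit_score=700)
# Applying twice with the same information yields the same outcome.
assert decide(app) == decide(app) == "approved"
```

A free-form GenAI prompt offers no such guarantee; a deterministic decision step embedded in a workflow does.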

The Path to Predictable AI Agents

The answer to predictable agents lies in the less glitzy, tried-and-true approach used by nearly all organizations in some form: the common workflow.

Enterprises typically leverage hundreds of approved processes and workflows to get tasks done. This ensures consistency in outcomes no matter who or what is completing the work. Examples include processing a loan, running a credit report inquiry, or resolving a customer service complaint.

When AI agents are grounded in these workflows, the results can be truly transformative. Grounding ensures the agents follow approved procedures step by step, so they don't go off track or take unexpected actions. It turns unpredictable AI agents into reliable teammates, which allows organizations to effectively scale agentic deployments across the enterprise.
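One way to picture this grounding: the agent is only permitted to execute the next approved step of a workflow, and anything else is rejected. A minimal sketch, with hypothetical workflow and step names:

```python
# Hypothetical sketch of a workflow-grounded agent: it may only execute
# the next approved step, in order. Workflow and step names are
# illustrative, not from any real product.
APPROVED_WORKFLOWS = {
    "resolve_complaint": ["verify_identity", "log_complaint", "issue_resolution"],
}


class WorkflowAgent:
    def __init__(self, workflow_name: str):
        self.steps = APPROVED_WORKFLOWS[workflow_name]
        self.position = 0

    def perform(self, step: str) -> str:
        # Reject any action that is not the next approved step.
        if self.position >= len(self.steps) or step != self.steps[self.position]:
            raise PermissionError(f"step '{step}' is not allowed here")
        self.position += 1
        return f"completed {step}"


agent = WorkflowAgent("resolve_complaint")
agent.perform("verify_identity")  # allowed: it is the next approved step
```

Skipping ahead to `issue_resolution` at this point would raise `PermissionError`: the agent cannot improvise outside the approved procedure.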

Using the Right AI at the Right Time

AI definitely has a role in helping enterprises transform their workflows.

Deploying the right AI at the right time can inject innovation and accelerate transformation. For example:

  • Design time is when enterprises create the optimal workflows and logic that run their organizations. This is when GenAI's powerful creative capabilities can really shine as the team brainstorms and explores different possibilities. At this ideation stage, you want all the reasoning power of GenAI to stimulate thinking, foster collaboration, and bring in the wisdom of the internet together with your own experiences.
  • Run time is when enterprises actively engage with customers or employees in live conversations. This is not the time for AI creativity or improv; enterprises need predictability and reliability at every turn. In run time, enterprises should leverage semantic AI, a specialized AI that understands context and follows the right workflow to ensure consistency, transparency, and control.

A Win-Win for Agentic AI

By separating creative reasoning AI at design time from contextual semantic AI at run time, organizations get the best of both AI worlds while eliminating the risk.

When agents get a request from a user or customer, they use the language power of AI (these are Large Language Models after all) to search through the library of workflows and find the right one. These agents don't guess their way forward — they simply follow the best available workflow for the job. This delivers real outcomes for customers, which engenders trust in the organization's agents.
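The selection step described above can be sketched in a few lines. A real system would use an LLM for semantic matching; here a simple keyword-overlap score stands in for it, and the workflow library is hypothetical:

```python
# Sketch of run-time workflow selection. In practice an LLM would do
# the semantic matching; a keyword-overlap score stands in here, and
# the workflow names and descriptions are hypothetical.
WORKFLOW_LIBRARY = {
    "loan_processing": "process a loan application for a customer",
    "credit_inquiry": "run a credit report inquiry",
    "complaint_resolution": "resolve a customer service complaint",
}


def select_workflow(request: str) -> str:
    """Pick the approved workflow whose description best matches the request."""
    words = set(request.lower().split())
    return max(
        WORKFLOW_LIBRARY,
        key=lambda name: len(words & set(WORKFLOW_LIBRARY[name].split())),
    )


# The agent doesn't improvise; it routes the request to an approved workflow.
print(select_workflow("I want to apply for a loan"))  # loan_processing
```

Whatever the matching mechanism, the key property is the same: the output is always the name of an approved workflow, never a freshly invented procedure.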

Reasoning AI is better left to design time, when creativity is a benefit, not a detriment. Ultimately, a human will decide which workflow idea best serves the company's needs. The winning workflows are then approved and discoverable by the AI agents to follow when the appropriate run-time situation arises.

Another way to think of this: consider the difference between practice time and game time. At practice time (or design time), the coach can devise a number of creative plays (or workflows) to try with their players (or agents) until they land on the right ones that will go into their playbook. Then in a live game time situation (run time), the coach assesses the situation, calls the right play for the moment, and the players execute it to perfection — or at least that's how we're used to it happening here in Boston. (Sorry, not sorry.)

This approach gives regulators and an organization's customers confidence that outcomes are fair and consistent. Workflows have always defined how businesses run, and now they can make AI agents work smarter. If two customers come to an agent with the same problem, they should get the same treatment every time. Only agents grounded in workflows can guarantee this outcome.

Moving Beyond Pilots and Hype

Real GenAI in the enterprise isn't about flash; it's about making organizations more dependable.

The winners will be those with predictable agentic solutions that customers and employees can trust. That's how you move beyond hype to real business transformation.

A new era of reliable, workflow-driven agentic AI is within our reach. Resist falling into the trough of disillusionment and focus on building systems that deliver consistent results at scale. Start by evaluating your current AI strategy, identifying where reasoning AI can bring creative ideas to light and where semantic AI can bring greater transparency and control. By doing so, your enterprise can move from hype to lasting impact and become a true leader in the predictable agentic AI revolution.

Don Schuerman is CTO and VP of Product Strategy and Marketing at Pega