AI in Databases Is Here - But Where's the Governance?

Bharath Vasudevan
Quest Software

Enterprises are racing to leverage AI in their database environments — but most are skipping the guardrails. According to Quest research, 67% of organizations say AI is already critical to their database operations. Yet fewer than half report having a formal governance framework in place to manage it. That mismatch puts businesses at risk — operationally, financially, and reputationally.

Let's be honest: when machines start making decisions that used to require human judgment, we need to know exactly how those decisions are made. Or at least be able to trace them when something breaks.

And AI is making more decisions than ever. In the same research, 77% of organizations say they've added a moderate to extensive number of new databases with AI capabilities. The top use cases? Natural language querying, fraud detection, predictive analytics, and enabling large language models to generate queries or summaries based on enterprise data. This isn't the future. It's already here.

Many modern database platforms are embedding AI not just to enhance analytics, but to automate and optimize core database functions that were previously manual and time-intensive. These include AI-driven indexing, query rewriting, storage management, and performance tuning. AI is also supporting predictive maintenance, automated anomaly detection, and intelligent data classification to improve discovery, compliance, and security.

The Real Risk Isn't the AI - It's the Blind Spot

Here's a fair question: are AI-generated queries really more dangerous than ones written by overworked analysts at 2 a.m.?

Maybe not. But with human-authored queries, we know who wrote what, when, and why. We can assign responsibility. With AI, those lines blur, especially when suggestions are blended invisibly into workflows.

The risk isn't that AI is wildly inaccurate. It's that it's plausible. A wrong answer that looks right is much harder to catch when you don't know it came from a model. And GenAI doesn't raise its hand when it hallucinates. It just runs.

Without labeling, traceability, and human review, there's no way to know if a model just rewrote a query that violates business logic — or returned biased results without context.
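As a concrete illustration of what labeling and traceability can mean in practice, here is a minimal sketch of a query wrapper that records provenance (who or what authored a query, and which model suggested it) before execution. All names here are hypothetical, not from any particular product:

```python
import datetime
import hashlib

# Minimal provenance log for queries. In practice this would live in a
# database or observability pipeline; a list keeps the sketch self-contained.
AUDIT_LOG = []

def run_query(execute, sql, author, source="human", model=None):
    """Record provenance for a query, then execute it.

    `execute` is whatever callable actually runs the SQL;
    `source` distinguishes human-authored from AI-generated queries.
    """
    entry = {
        "query_hash": hashlib.sha256(sql.encode()).hexdigest()[:12],
        "author": author,        # who triggered the query
        "source": source,        # "human" or "ai"
        "model": model,          # which model suggested it, if any
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    AUDIT_LOG.append(entry)
    return execute(sql)

# Usage: an AI-suggested query keeps its provenance attached,
# so a reviewer can later answer "who wrote this, and why?"
result = run_query(lambda s: f"rows for: {s}",
                   "SELECT * FROM orders",
                   author="jdoe", source="ai", model="genai-assistant-v1")
```

The point is not the mechanism but the habit: if every AI-generated query carries a label, the "those lines blur" problem above becomes tractable.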

Multi-Platform Chaos Is Making It Worse

Most DBAs already manage hybrid environments. According to the same Quest study, 84% support three or more database platforms, spanning private cloud, public cloud, and on-prem systems.

AI doesn't simplify this. It adds a layer of abstraction — making it harder to track what's happening, where, and why. And many teams are already stretched thin: 40% of people managing databases today aren't formally trained DBAs, and only half of "unofficial" DBAs feel confident in their expertise.

In that context, AI-generated automation can be helpful — but it can also amplify problems. If a GenAI tool tunes a query on Platform A, will it break downstream flows on Platform B? If a model interprets schema metadata incorrectly, will anyone notice before it goes live?

The complexity isn't just technical. It's organizational. And that's exactly why governance has to evolve.

DBAs Are Evolving - But They Can't Do It Alone

Let's challenge an assumption: that DBAs should lead AI governance.

We don't think that's realistic. DBAs are critical enablers — but they can't carry the full weight of compliance, oversight, and cross-system validation.

Still, their role is changing: 77% of DBAs now work across security, AI, and compliance teams, according to Quest's data. They're being asked to validate outputs, explain AI behavior, and spot issues before they ripple into production.

It's a shift from "managing databases" to "managing how AI interacts with data." That requires context, curiosity, and collaboration.

And, yes, it raises anxiety. Even among highly skilled DBAs, 61% worry that AI might replace parts of their job. But the reality is simpler: AI isn't replacing the DBA; it's redefining the role.

DBAs now have the chance to shift toward higher-value work: validating AI outputs, applying governance policies, and guiding safe automation. But to do that effectively, they need structure: clear frameworks and tools that support oversight, traceability, and explainability.

Human oversight still matters. In fact, it matters more than ever.

So, What Does Good Governance Actually Look Like?

Before you can govern, you have to see. That's why 90% of organizations now rely on data observability and monitoring tools. These systems don't just flag issues — they help:

  • Speed up root cause analysis
  • Detect anomalies in query behavior
  • Improve collaboration between dev, ops, and data teams
  • Enable less experienced staff to safely handle growing workloads
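To make "detect anomalies in query behavior" concrete, here is one simple approach a monitoring layer might take: compare a query's latest latency against its historical baseline and flag runs that deviate by more than a few standard deviations. This is a generic statistical sketch, not a description of any specific observability product:

```python
import statistics

def is_anomalous(baseline_ms, latest_ms, threshold=3.0):
    """Flag a run whose latency deviates more than `threshold`
    standard deviations from the historical baseline."""
    mean = statistics.mean(baseline_ms)
    stdev = statistics.stdev(baseline_ms)
    if stdev == 0:
        # Perfectly flat baseline: any deviation is notable.
        return latest_ms != mean
    return abs(latest_ms - mean) / stdev > threshold

# Historical latencies (ms) for one query, then two new runs.
baseline = [120, 115, 130, 125, 118, 122, 127]
print(is_anomalous(baseline, 124))   # normal run -> False
print(is_anomalous(baseline, 900))   # sudden spike -> True
```

Real tools use far richer signals (plan changes, row counts, error rates), but even a baseline-deviation check like this surfaces the "wrong answer that looks right" class of problem early.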

Observability gives teams insight into what the AI is doing, where it's acting, and whether those actions are aligned with policy. It answers questions like:

What did the AI do?

Was it supposed to?

And what happens next?

But observability is just one piece of a larger governance strategy. Based on our research and field work, here are five areas where organizations can begin strengthening governance for AI in database environments:

1. Metadata and Lineage Management: Even basic metadata tracking helps teams trace how AI modifies or accesses data. Mapping lineage can flag risks introduced by automation.

2. Model and Algorithm Transparency: Start small: keep a registry of GenAI tools or embedded logic in use, even if only for internal reference. Over time, build toward documented purpose, inputs, and outputs.

3. AI Auditing and Monitoring: Dashboards and alerts can grow in complexity — but even simple logs of AI activity help surface early warning signs.

4. Human-in-the-Loop Oversight: Not every task needs human review, but critical actions like access control and data classification often do.

5. Policy-Based Controls and Guardrails: Role-based access or explainability thresholds can start as guidelines and evolve into enforceable policies.

Not every organization can implement all five at once, but even starting with one or two can materially reduce risk and build toward a sustainable governance model.

Modern tooling is starting to support these practices. While we won't name names here, recent GenAI features in database management software now emphasize explainability, version control, and dual-mode execution (AI with human confirmation). That's a move in the right direction.
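The "dual-mode execution" pattern mentioned above can be sketched in a few lines: the AI proposes an action, but nothing runs until a reviewer (a person, or a policy standing in for one) confirms it. This is a hypothetical illustration of the pattern, not any vendor's implementation:

```python
def dual_mode_execute(proposal, reviewer_approves, execute):
    """Run an AI-proposed action only if the reviewer approves it.

    `proposal` is the AI-suggested statement; `reviewer_approves` is a
    callback standing in for human confirmation or a policy check.
    """
    if reviewer_approves(proposal):
        return ("executed", execute(proposal))
    return ("rejected", None)

# Example policy: critical actions like access-control changes
# never pass without explicit human sign-off, so auto-reject them here.
def reviewer(proposal):
    return "GRANT" not in proposal.upper()

status, _ = dual_mode_execute("GRANT ALL ON users TO app_role",
                              reviewer, lambda p: p)
print(status)  # rejected

status, _ = dual_mode_execute("SELECT count(*) FROM users",
                              reviewer, lambda p: p)
print(status)  # executed
```

The design choice worth noting: the confirmation gate sits outside the model. The AI never decides whether its own suggestion is safe to run.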

Don't Wait for a Breakdown

Here's the uncomfortable truth: AI won't slow down. The real question is whether we'll step up to govern it — or let it govern us.

If we wait until an AI-generated query triggers a compliance breach or a bad recommendation reaches the CEO's desk, it'll be too late. The time to act is now — while adoption is still fresh and workflows are still flexible.

That doesn't mean locking things down or adding red tape. It means asking better questions:

  • Can we trace what AI is doing?
  • Do we have the right people reviewing its outputs?
  • Are we sure the AI is helping us — not quietly making decisions we don't understand?

You can move faster with AI. But you need brakes, too.

Bharath Vasudevan is VP of Product Management at Quest Software

