AI in Databases Is Here - But Where's the Governance?

Bharath Vasudevan
Quest Software

Enterprises are racing to leverage AI in their database environments — but most are skipping the guardrails. According to Quest research, 67% of organizations say AI is already critical to their database operations. Yet fewer than half report having a formal governance framework in place to manage it. That mismatch puts businesses at risk — operationally, financially, and reputationally.

Let's be honest: when machines start making decisions that used to require human judgment, we need to know exactly how those decisions are made. Or at least be able to trace them when something breaks.

And AI is making more decisions than ever. In the same research, 77% of organizations say they've added a moderate to extensive number of new databases with AI capabilities. The top use cases? Natural language querying, fraud detection, predictive analytics, and enabling large language models to generate queries or summaries based on enterprise data. This isn't the future. It's already here.

Many modern database platforms are embedding AI not just to enhance analytics, but to automate and optimize core database functions that were previously manual and time-intensive. These include AI-driven indexing, query rewriting, storage management, and performance tuning. AI is also supporting predictive maintenance, automated anomaly detection, and intelligent data classification to improve discovery, compliance, and security.

The Real Risk Isn't the AI - It's the Blind Spot

Here's a fair question: are AI-generated queries really more dangerous than ones written by overworked analysts at 2 a.m.?

Maybe not. But with human-authored queries, we know who wrote what, when, and why. We can assign responsibility. With AI, those lines blur, especially when suggestions are blended invisibly into workflows.

The risk isn't that AI is wildly inaccurate. It's that it's plausible. A wrong answer that looks right is much harder to catch when you don't know it came from a model. And GenAI doesn't raise its hand when it hallucinates. It just runs.

Without labeling, traceability, and human review, there's no way to know if a model just rewrote a query that violates business logic — or returned biased results without context.

Multi-Platform Chaos Is Making It Worse

Most DBAs already manage hybrid environments. According to the same Quest study, 84% support three or more database platforms, spanning private cloud, public cloud, and on-prem systems.

AI doesn't simplify this. It adds a layer of abstraction — making it harder to track what's happening, where, and why. And many teams are already stretched thin: 40% of people managing databases today aren't formally trained DBAs, and only half of "unofficial" DBAs feel confident in their expertise.

In that context, AI-generated automation can be helpful — but it can also amplify problems. If a GenAI tool tunes a query on Platform A, will it break downstream flows on Platform B? If a model interprets schema metadata incorrectly, will anyone notice before it goes live?

The complexity isn't just technical. It's organizational. And that's exactly why governance has to evolve.

DBAs Are Evolving - But They Can't Do It Alone

Let's challenge an assumption: that DBAs should lead AI governance.

We don't think that's realistic. DBAs are critical enablers — but they can't carry the full weight of compliance, oversight, and cross-system validation.

Still, their role is changing: 77% of DBAs now work across security, AI, and compliance teams, according to Quest's data. They're being asked to validate outputs, explain AI behavior, and spot issues before they ripple into production.

It's a shift from "managing databases" to "managing how AI interacts with data." That requires context, curiosity, and collaboration.

And, yes, it raises anxiety. Even among highly skilled DBAs, 61% worry that AI might replace parts of their job. But the reality is simpler: AI isn't replacing the DBA; it's redefining the role.

DBAs now have the chance to shift toward higher-value work: validating AI outputs, applying governance policies, and guiding safe automation. But to do that effectively, they need structure — clear frameworks and tools that support oversight, traceability, and explainability.

Human oversight still matters. In fact, it matters more than ever.

So, What Does Good Governance Actually Look Like?

Before you can govern, you have to see. That's why 90% of organizations now rely on data observability and monitoring tools. These systems don't just flag issues — they help:

  • Speed up root cause analysis
  • Detect anomalies in query behavior
  • Improve collaboration between dev, ops, and data teams
  • Enable less experienced staff to safely handle growing workloads
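Anomaly detection in query behavior, in particular, can start very simply. A sketch that flags a query run whose latest duration deviates sharply from its own recent history (the threshold and minimum-sample values are illustrative assumptions):

```python
import statistics

def is_runtime_anomaly(history_ms: list[float], latest_ms: float,
                       n_sigma: float = 3.0) -> bool:
    """Flag a run whose duration is more than n_sigma standard
    deviations above the mean of the query's recent history."""
    if len(history_ms) < 5:           # too little data to judge
        return False
    mean = statistics.mean(history_ms)
    stdev = statistics.stdev(history_ms)
    if stdev == 0:
        return latest_ms > mean * 2   # flat history: flag a doubling
    return (latest_ms - mean) / stdev > n_sigma
```

Even a crude baseline like this surfaces the case the article worries about: an AI-rewritten query that suddenly behaves very differently from the one it replaced.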

Observability gives teams insight into what the AI is doing, where it's acting, and whether those actions are aligned with policy. It answers questions like:

  • What did the AI do?
  • Was it supposed to?
  • And what happens next?


But observability is just one piece of a larger governance strategy. Based on our research and field work, here are five areas where organizations can begin strengthening governance for AI in database environments:

1. Metadata and Lineage Management: Even basic metadata tracking helps teams trace how AI modifies or accesses data. Mapping lineage can flag risks introduced by automation.
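Even a flat, append-only lineage log goes a long way. A minimal sketch recording which actor (human or AI tool) did what to which object — the event schema here is an assumption, not a standard:

```python
from datetime import datetime, timezone

def record_lineage(log: list[dict], actor: str, action: str,
                   target: str) -> None:
    """Append one lineage event: who did what to which object."""
    log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": actor,        # e.g. "genai-tuner" or "dba_alice"
        "action": action,      # e.g. "REWRITE_QUERY", "ADD_INDEX"
        "target": target,      # e.g. "sales.orders"
    })

def touched_by_ai(log: list[dict], target: str,
                  ai_actors: set[str]) -> bool:
    """True if any known AI actor has modified the given object."""
    return any(e["target"] == target and e["actor"] in ai_actors
               for e in log)
```

A query like `touched_by_ai` is exactly the audit question a compliance reviewer will eventually ask: "did a model touch this table?"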

2. Model and Algorithm Transparency: Start small: keep a registry of GenAI tools or embedded logic in use, even if only for internal reference. Over time, build toward documented purpose, inputs, and outputs.
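A registry really can be this small on day one; a sketch with illustrative fields covering the documented purpose, inputs, and outputs mentioned above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIToolRecord:
    """One registry entry for a GenAI tool in use (fields illustrative)."""
    name: str
    purpose: str
    inputs: str
    outputs: str

registry: dict[str, AIToolRecord] = {}

def register_tool(rec: AIToolRecord) -> None:
    """Add or update a tool's entry in the internal registry."""
    registry[rec.name] = rec
```

Even as an internal reference file, this answers "what AI is running here?" — a question many teams currently cannot answer.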

3. AI Auditing and Monitoring: Dashboards and alerts can grow in complexity — but even simple logs of AI activity help surface early warning signs.

4. Human-in-the-Loop Oversight: Not every task needs human review, but critical actions like access control and data classification often do.
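That split — routine actions run automatically, critical ones queue for confirmation — can be expressed as a thin gate. A sketch under assumed action names:

```python
# Actions that always require a human sign-off before execution.
# The action names here are illustrative.
CRITICAL_ACTIONS = {"GRANT_ACCESS", "DROP_TABLE", "RECLASSIFY_DATA"}

def execute(action: str, target: str, confirmed: bool = False) -> str:
    """Run an AI-proposed action; hold critical ones until confirmed."""
    if action in CRITICAL_ACTIONS and not confirmed:
        return f"PENDING_REVIEW: {action} on {target}"
    return f"EXECUTED: {action} on {target}"
```

This is the same "dual-mode execution" pattern described later in this piece: the AI proposes, and a human confirms before anything sensitive happens.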

5. Policy-Based Controls and Guardrails: Role-based access or explainability thresholds can start as guidelines and evolve into enforceable policies.
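Role-based controls can likewise begin as a code-level check before they become enforceable platform policy. A sketch with illustrative roles and action classes:

```python
# Which roles may approve which classes of AI-proposed change.
# Both the roles and the action classes are illustrative assumptions.
POLICY: dict[str, set[str]] = {
    "dba":      {"tune", "index", "rewrite"},
    "security": {"access", "classify"},
    "analyst":  {"rewrite"},
}

def allowed(role: str, action_class: str) -> bool:
    """Check an AI-proposed action class against the role policy.
    Unknown roles get no permissions by default."""
    return action_class in POLICY.get(role, set())
```

Defaulting unknown roles to an empty permission set is the guardrail posture the article argues for: deny by default, then widen deliberately.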

Not every organization can implement all five at once, but even starting with one or two can materially reduce risk and build toward a sustainable governance model.

Modern tooling is starting to support these practices. While we won't name names here, recent GenAI features in database management software now emphasize explainability, version control, and dual-mode execution (AI with human confirmation). That's a move in the right direction.

Don't Wait for a Breakdown

Here's the uncomfortable truth: AI won't slow down. The real question is whether we'll step up to govern it — or let it govern us.

If we wait until an AI-generated query triggers a compliance breach or a bad recommendation reaches the CEO's desk, it'll be too late. The time to act is now — while adoption is still fresh and workflows are still flexible.

That doesn't mean locking things down or adding red tape. It means asking better questions:

  • Can we trace what AI is doing?
  • Do we have the right people reviewing its outputs?
  • Are we sure the AI is helping us — not quietly making decisions we don't understand?

You can move faster with AI. But you need brakes, too.

Bharath Vasudevan is VP of Product Management at Quest Software

