
Achieving Intelligent Data Governance

Why It's Better to Be Intelligent than High When It Comes to Data Governance
Don Boxley

High availability (HA) has historically focused on ensuring continuous operations and performance. HA was built on a foundation of redundancy and failover technologies and methodologies to maintain business continuity through workload spikes, planned maintenance, and unplanned downtime.

Today, HA methodologies are being superseded by intelligent workload routing automation (i.e., intelligent availability), which consistently directs data and their processing to the proper place at the right time. Intelligent availability partially stems from the distributed realities of the modern data landscape, in which information assets are dispersed on premises, in the cloud, and at the cloud's edge.

Consequently, regulatory compliance has emerged as much a driver for intelligent availability as performance has. With increasing regulations and penalties (such as those of the European Union's General Data Protection Regulation, or GDPR), missteps in where workloads are routed could have dire legal and financial consequences — especially for data in the cloud.

Many countries and industries have stringent regulations about data's location that directly affect cloud deployments. Organizations must know how and where such data is permitted in the cloud before shifting it there for availability or performance reasons.

Crafting policies in accordance with these regulations is crucial to leveraging intelligent availability to ensure compliance; doing so effectively transforms data governance into intelligent data governance.
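
Such residency policies can be encoded directly, so that routing logic consults them before any migration. A minimal sketch in Python — the data classes, region names, and policy table here are hypothetical placeholders, not a reference to any particular product:

```python
# Hypothetical residency policy: which regions may host each class of data.
RESIDENCY_POLICY = {
    "personal_data": {"eu-west", "eu-central"},            # e.g., GDPR-scoped records
    "payment_data":  {"eu-west", "us-east"},               # e.g., PCI DSS-scoped records
    "telemetry":     {"eu-west", "us-east", "ap-south"},   # low-sensitivity data
}

def compliant_targets(data_class: str, candidate_regions: list) -> list:
    """Filter candidate regions down to those the policy permits for this data class."""
    allowed = RESIDENCY_POLICY.get(data_class, set())
    return [region for region in candidate_regions if region in allowed]

# A routing decision consults the policy before moving a workload:
targets = compliant_targets("personal_data", ["us-east", "eu-central"])
print(targets)  # only the policy-permitted EU region survives the filter
```

Keeping the policy as data rather than scattering checks through routing code means a regulatory change becomes a one-line table update.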

Cloud Concerns

Cloud deployments have a number of opaque areas in relation to routing workloads for availability. These pertain to the type of cloud involved (public, private, or hybrid), the method of redundancy used, and the nature of the data.

The GDPR, for example, has a number of regulations for personal data, a broad term for "any information related to an identified or identifiable natural person." As such, organizations must be extremely cautious about transporting this type of data, despite the performance gains of doing so. Cloud bursting, for instance, is advantageous for optimizing performance during sudden peaks in network activity, which are common in online transaction processing (OLTP) for finance or manufacturing. Migrating these workloads from local settings to public ones may balance network activity, but can forsake regulations in the process.

Organizations must take similar precautions when strategizing for disaster recovery (DR), one of the chief benefits of intelligent availability. Automatic failovers into the cloud may minimize downtime, but they can also compromise regulatory compliance.
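
One way to reconcile DR speed with compliance is to rank standby sites in advance and fail over to the first one that is both healthy and policy-permitted. A sketch, with hypothetical site names and an assumed policy set:

```python
from typing import Optional

# Hypothetical DR plan: ordered standby sites for a workload.
FAILOVER_ORDER = ["onprem-backup", "cloud-us-east", "cloud-eu-west"]
# Sites the (assumed) residency policy allows this workload to run in.
PERMITTED_SITES = {"onprem-backup", "cloud-eu-west"}

def pick_failover(healthy_sites: set) -> Optional[str]:
    """Fail over to the first standby that is both healthy and policy-permitted."""
    for site in FAILOVER_ORDER:
        if site in healthy_sites and site in PERMITTED_SITES:
            return site
    return None  # no compliant target is up: alert rather than fail over blindly

# The on-prem backup is also down; only the two cloud sites remain healthy:
print(pick_failover({"cloud-us-east", "cloud-eu-west"}))  # cloud-eu-west
```

Returning `None` instead of falling back to a non-compliant site makes the trade-off explicit: a short outage with an operator alert, rather than an automatic regulatory violation.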

Cloud compliance issues involve not only where data are stored, but also where (and how) they're processed. GDPR, for example, distinguishes data processors from data controllers. Controllers are the organizations that determine how and why data are used; processors, which can include any assortment of SaaS or SOA offerings handling data on a controller's behalf, must likewise adhere to GDPR's personal data regulations. Organizations must assess these obligations when cloud brokering among various providers — particularly when chasing transient pricing specials.

Other regulations, such as the Payment Card Industry Data Security Standard (PCI DSS), have rigid stipulations about encrypting data (especially data in transit) that may apply to workloads spontaneously moved to the cloud. Those in the e-commerce or retail spaces must consider the intricacies of server-side or client-side encryption, especially when replicating data between clouds.
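
The client-side pattern the article returns to later can be sketched simply: encrypt on premises, replicate only ciphertext, and keep the sole key. The XOR one-time pad below is a deliberately minimal stand-in for a real cipher such as AES-GCM, and the sample record is hypothetical:

```python
import secrets

def encrypt_before_upload(plaintext: bytes):
    """Client-side encryption sketch: only ciphertext leaves for the cloud,
    and the organization retains the sole key. The XOR one-time pad here
    stands in for a production cipher such as AES-GCM."""
    key = secrets.token_bytes(len(plaintext))  # generated and kept on premises
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def decrypt_after_download(ciphertext: bytes, key: bytes) -> bytes:
    """Reverse the pad after pulling the replica back from the cloud."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

record = b"cardholder=4111000011110000"    # hypothetical PCI-scoped record
blob, key = encrypt_before_upload(record)  # blob is what gets replicated out
assert decrypt_after_download(blob, key) == record
```

Because the provider only ever holds `blob`, the data remains opaque to the cloud even as intelligent availability moves it between environments.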

The Intelligent Way

For all of the preceding scenarios, intelligent availability provides the best means of maintaining regulatory compliance while dynamically shifting workloads between environments. At the core of this method are the governance policies devised to meet compliance standards.

Although intelligent availability doesn't itself identify sensitive information or dictate where it can be routed, the portability it offers across settings (including operating systems and physical and virtual infrastructure) all but forces organizations to identify these factors. This real-time, on-demand shifting of resources is the catalyst to evaluate workloads through a governance lens, update policies as needed, and leverage them to predetermine optimal routing of data and their processing for availability. Intelligent availability is the means of implementing intelligent data governance; it's a conduit between performance and regulatory compliance that increases competitive advantage.

Implementing Intelligent Governance

Once those policies are in place, the intelligent availability approach maximizes cloud deployments while maintaining regulatory adherence. Its intelligent algorithms continuously monitor server performance to automatically detect surges, either issuing alerts to organizations or initiating the transfer of workloads to alternative hosts. With agreed-upon policies conforming to governance practices already in hand, prudent organizations can confidently move data to the cloud without violating regulations. Thus, cloud bursting can regularly be deployed to minimize network strain during OLTP spikes (or for any other reason) without costly penalties.
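
The monitoring loop described above reduces to a simple rule: burst only when load crosses a threshold and the workload's governance tag permits the target cloud. A sketch with a hypothetical utilization threshold and tag scheme:

```python
BURST_THRESHOLD = 0.85  # hypothetical CPU-utilization trigger for bursting

def burst_decision(utilization: float, workload_tag: str,
                   cloud_allowed_tags: set) -> str:
    """Decide whether a detected surge should burst to the public cloud,
    stay local, or raise an alert for operator review."""
    if utilization < BURST_THRESHOLD:
        return "stay-local"             # no surge: nothing to do
    if workload_tag in cloud_allowed_tags:
        return "burst-to-cloud"         # surge, and policy permits the move
    return "alert-operator"             # surge, but policy forbids this data in cloud

print(burst_decision(0.95, "telemetry", {"telemetry"}))      # burst-to-cloud
print(burst_decision(0.95, "personal_data", {"telemetry"}))  # alert-operator
```

The point of the third branch is that a compliance-aware router degrades to an alert, never to a silent policy violation.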

Companies also have the benefit of automatic failovers to the cloud to maintain business continuity in the event of natural disasters or equipment failure. This option virtually eliminates downtime, enabling IT to perform maintenance on even the most mission-critical infrastructure once data is properly re-routed offsite.

One of the most useful advantages of intelligent availability is the capability to span clouds, both among providers and across the many variations of cloud available. Although well-crafted governance policies are essential to reaping the pricing boons of cloud brokering, intelligent availability's ability to start and stop workloads at the instance level while transporting data between settings is just as valuable.

The data processing issue is a little more complicated, but intelligent availability's flexibility helps here too. Once organizations have researched the various Service Level Agreements of cloud vendors, as well as the policies of other data processors such as software companies, they can utilize these platforms in accordance with regulations, transferring their resources where they're permitted. Most encryption concerns are solved with client-side encryption, whereby organizations encrypt data before replicating them to the cloud and retain the sole keys. Intelligent availability measures then transport these data to the cloud and back as needed.

Adherence

The escalating presence of regulatory mandates isn't likely to subside soon. Compliance standards are just as critical as performance when making workloads available across heterogeneous settings. Intelligent availability's support of versatile storage and processing environments, in conjunction with its low-latency portability, makes it a natural extension of intelligent data governance implementations. These methods ensure data is moved correctly — the first time — to maintain regulatory adherence in an age when doing so is most difficult.

