Achieving Intelligent Data Governance
Why It's Better to Be Intelligent than High When It Comes to Data Governance
August 21, 2018

Don Boxley
DH2i


High availability's (HA) primary objective has historically been focused on ensuring continuous operations and performance. HA was built on a foundation of redundancy and failover technologies and methodologies to ensure business continuity in the event of workload spikes, planned maintenance, and unplanned downtime.

Today, HA methodologies have been superseded by intelligent workload routing automation (i.e., intelligent availability), which consistently directs data and its processing to the proper place at the right time. Intelligent availability stems partly from the distributed realities of the modern data landscape, in which information assets are dispersed on premises, in the cloud, and at the cloud's edge.

Consequently, regulatory compliance has emerged as much a driver for intelligent availability as performance. With increasing regulations and penalties (such as those of the European Union's General Data Protection Regulation, or GDPR), missteps in where workloads are routed could have dire legal and financial consequences — especially for data in the cloud.

Many countries and industries have stringent regulations about data's location that directly affect cloud deployments. Organizations must know how and where such data is permitted in the cloud before shifting it there for availability or performance reasons.

Crafting policies in accordance with these regulations is crucial to leveraging intelligent availability to ensure compliance, and effectively transforms data governance into intelligent data governance.

Cloud Concerns

Cloud deployments have a number of opaque areas in relation to routing workloads for availability. These pertain to the type of cloud involved (public, private, or hybrid), the method of redundancy used, and the nature of the data.

The GDPR, for example, has a number of regulations for personal data, a broad term for "any information related to an identified or identifiable natural person." As such, organizations must be extremely cautious about transporting this type of data, despite the performance gains of doing so. For example, cloud bursting is advantageous for optimizing performance during sudden peaks in network activity, which are common for online transaction processing (OLTP) in finance or manufacturing. Migrating these workloads from local settings to public ones may balance network activity, but can violate regulations in the process.

Organizations must take similar precautions when strategizing for disaster recovery (DR), one of the chief benefits of intelligent availability. Downtime may be minimized by implementing automatic failovers into the cloud, but doing so can also compromise regulatory compliance.

Cloud compliance issues involve not only where data are stored, but also where (and how) they're processed. GDPR, for example, distinguishes data processors from data controllers. The latter are the organizations that determine how and why data is used; the former can be any assortment of SaaS or SOA options that process data on their behalf and must adhere to GDPR's personal data regulations. Organizations must assess these measures when cloud brokering among various providers — particularly for transient pricing specials.

Other regulations, such as the Payment Card Industry Data Security Standard (PCI DSS), have rigid stipulations about encrypting data (especially data in transit) that may apply to workloads spontaneously moved to the cloud. Those in the e-commerce or retail spaces must consider the intricacies of server-side or client-side encryption, especially when replicating data between clouds.

The Intelligent Way

Intelligent availability provides the best means of effecting regulatory compliance while dynamically shifting workloads between environments for all of the preceding scenarios. The core of this method is the governance policies devised to meet compliance standards.

Although intelligent availability doesn't identify sensitive information or dictate where it can be routed, its portability across settings (including operating systems and physical and virtual infrastructure) all but forces organizations to identify these factors. This real-time, on-demand shifting of resources is the catalyst to evaluate workloads through a governance lens, update policies as needed, and leverage them to predetermine the optimal routing of data and their processing for availability. Intelligent availability is the means of implementing intelligent data governance; it's a conduit between performance and regulatory compliance that increases competitive advantage.
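To make that governance lens concrete, such policies can be expressed as data that a routing engine consults before moving a workload. The following Python sketch is illustrative only; the Workload, Target, and POLICIES names are hypothetical and do not describe any particular vendor's implementation.

# A minimal sketch of codifying governance policy for routing decisions.
# All names here (Workload, Target, POLICIES) are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    data_classification: str   # e.g., "gdpr_personal", "pci", "public"

@dataclass
class Target:
    name: str
    location: str              # e.g., "eu-west", "us-east"
    environment: str           # "on_prem", "private_cloud", "public_cloud"

# Governance policies devised to meet compliance standards, expressed as data:
# which data classifications may run in which locations and environments.
POLICIES = {
    "gdpr_personal": {"locations": {"eu-west", "eu-central"},
                      "environments": {"on_prem", "private_cloud"}},
    "pci":           {"locations": {"us-east", "eu-west"},
                      "environments": {"on_prem", "private_cloud", "public_cloud"}},
    "public":        {"locations": None, "environments": None},  # unrestricted
}

def routing_allowed(workload: Workload, target: Target) -> bool:
    """Return True if governance policy permits routing this workload to this target."""
    policy = POLICIES.get(workload.data_classification)
    if policy is None:
        return False  # unknown classification: fail closed
    if policy["locations"] is not None and target.location not in policy["locations"]:
        return False
    if policy["environments"] is not None and target.environment not in policy["environments"]:
        return False
    return True

Keeping the rules in one reviewable structure, rather than scattered across scripts, is what lets routing decisions be audited against the regulations they encode.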

Implementing Intelligent Governance

Once those policies are in place, the intelligent availability approach maximizes cloud deployments while maintaining regulatory adherence. Its intelligent algorithms continuously monitor server performance to automatically detect surges, either issuing alerts to organizations or initiating the transfer of workloads to alternative hosts. With agreed-upon policies conforming to governance practices already in place, prudent organizations can confidently move data to the cloud without violating regulations. Thus, cloud bursting can regularly be deployed to minimize network strain during OLTP spikes (or for any other reason) without costly penalties.
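As a rough illustration of that surge-detection step, the sketch below compares a utilization metric against a threshold and only selects a failover target that passes a governance check. The threshold value, candidate names, and policy_check callable are assumptions for illustration, not the actual algorithm of any product.

# A minimal sketch of surge detection that consults a governance check before
# bursting a workload to another host. The metric source and policy_check
# callable are hypothetical placeholders.
from typing import Callable, Iterable, Optional

CPU_SURGE_THRESHOLD = 0.85  # assumed threshold, for illustration only

def pick_failover_target(cpu_utilization: float,
                         candidates: Iterable[str],
                         policy_check: Callable[[str], bool]) -> Optional[str]:
    """Return a policy-compliant target if a surge is detected, else None."""
    if cpu_utilization < CPU_SURGE_THRESHOLD:
        return None  # no surge: keep the workload where it is
    for target in candidates:
        if policy_check(target):   # only route where governance policy allows
            return target
    # Surge detected but no compliant target: alert rather than move data.
    print("ALERT: surge detected, no policy-compliant target available")
    return None

# Example usage with a trivial policy: only EU-located targets are allowed.
if __name__ == "__main__":
    target = pick_failover_target(
        cpu_utilization=0.92,
        candidates=["us-east-vm1", "eu-west-vm2"],
        policy_check=lambda t: t.startswith("eu-"),
    )
    print("Route workload to:", target)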

Companies also have the benefit of automatic failovers to the cloud to maintain business continuity in the event of natural disasters or other failures. This option virtually eliminates downtime, enabling IT to perform maintenance on even the most mission-critical infrastructure once data is properly re-routed offsite.

One of the most useful advantages of intelligent availability is the capability to span clouds, both among providers and across all the variations of clouds available. Although well-sourced governance policies are essential to receiving the pricing boons of cloud brokering, intelligent availability's ability to start and stop workloads at the instance level while transporting data between settings is just as valuable.

The data processing issue is a little more complicated but is aided by intelligent availability's flexibility. Once organizations have researched the various service level agreements of cloud vendors — as well as the policies of other types of data processors, including software companies — they can utilize these platforms in accordance with regulations, transferring their resources where they're permitted. Most encryption concerns are solved with client-side encryption, whereby organizations encrypt data before replicating them to the cloud and retain sole possession of the keys. Intelligent availability measures transport this data to the cloud and back as needed.
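A client-side encryption workflow of that kind might look like the sketch below, which uses the open-source cryptography package's Fernet recipe; the upload step is a hypothetical placeholder, and the snippet illustrates the general pattern rather than any vendor's implementation.

# A minimal sketch of client-side encryption before replicating data to the cloud.
# Requires the third-party "cryptography" package; upload_to_cloud is a placeholder.
from cryptography.fernet import Fernet

def encrypt_for_replication(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt locally so only ciphertext ever leaves the organization."""
    return Fernet(key).encrypt(plaintext)

def decrypt_after_retrieval(ciphertext: bytes, key: bytes) -> bytes:
    """Decrypt with the key the organization alone retains."""
    return Fernet(key).decrypt(ciphertext)

if __name__ == "__main__":
    key = Fernet.generate_key()          # kept on-premises, never replicated
    record = b"cardholder or personal data"
    ciphertext = encrypt_for_replication(record, key)
    # upload_to_cloud(ciphertext)        # placeholder for the replication step
    assert decrypt_after_retrieval(ciphertext, key) == record

Because the key never accompanies the data, the cloud provider only ever holds ciphertext, which is what keeps the sole-key guarantee intact as workloads move back and forth.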

Adherence

The escalating presence of regulatory mandates isn't likely to subside anytime soon. Compliance standards are just as critical as performance concerns when making workloads available across heterogeneous settings. Intelligent availability's support of versatile storage and processing environments, in conjunction with its low-latency portability, makes it a natural extension of intelligent data governance implementations. These methods ensure data is moved correctly — the first time — to maintain regulatory adherence during an age when it's most difficult to do so.

Don Boxley is CEO and Co-Founder of DH2i