Achieving Intelligent Data Governance
Why It's Better to Be Intelligent than High When It Comes to Data Governance
August 21, 2018

Don Boxley
DH2i


High availability (HA) has historically focused on ensuring continuous operations and consistent performance. HA was built on a foundation of redundancy and failover technologies and methodologies to ensure business continuity in the event of workload spikes, planned maintenance, and unplanned downtime.

Today, HA methodologies are being superseded by intelligent workload routing automation (i.e., intelligent availability), which consistently directs data and its processing to the proper place at the right time. Intelligent availability stems in part from the distributed realities of the modern data landscape, in which information assets are dispersed on premises, in the cloud, and at the cloud's edge.

Consequently, regulatory compliance has emerged as much a driver for intelligent availability as performance. With increasing regulations and penalties (such as those imposed by the European Union's General Data Protection Regulation, or GDPR), missteps in where workloads are routed can have dire legal and financial consequences — especially for data in the cloud.

Many countries and industries have stringent regulations about data's location that directly affect cloud deployments. Organizations must know how and where such data is permitted to reside in the cloud before shifting it there for availability or performance reasons.

Crafting policies in accordance with these regulations is crucial to leveraging intelligent availability to ensure compliance, and effectively transforms data governance into intelligent data governance.

Cloud Concerns

Cloud deployments have a number of opaque areas in relation to routing workloads for availability. These pertain to the type of cloud involved (public, private, or hybrid), the method of redundancy used, and the nature of the data.

The GDPR, for example, has a number of regulations for personal data, a broad term for "any information related to an identified or identifiable natural person." As such, organizations must be extremely cautious about transporting this type of data, despite the performance gains of doing so. For example, cloud bursting is advantageous for optimizing performance during sudden peaks in network activity, which are common in online transaction processing (OLTP) for finance or manufacturing. Migrating these workloads from local settings to public clouds may balance network activity, but can violate regulations in the process.

Organizations must take similar precautions when strategizing for disaster recovery (DR), one of the chief benefits of intelligent availability. Automatic failover into the cloud may minimize downtime, but it can also compromise regulatory compliance.

Cloud compliance issues involve not only where data is stored, but also where (and how) it's processed. GDPR, for example, distinguishes data processors from data controllers. Controllers are the organizations that determine how and why data is used, while processors can include any assortment of SaaS or SOA offerings that handle data on a controller's behalf, and those processors must adhere to GDPR's personal data regulations as well. Organizations must assess these measures when cloud brokering among various providers, particularly when chasing transient pricing specials.

Other regulations, such as the Payment Card Industry Data Security Standard (PCI DSS), have rigid stipulations about encrypting data (especially data in transit) that may apply to workloads spontaneously moved to the cloud. Those in the e-commerce or retail spaces must consider the intricacies of server-side and client-side encryption, especially when replicating data between clouds.

The Intelligent Way

For all of the preceding scenarios, intelligent availability provides the best means of maintaining regulatory compliance while dynamically shifting workloads between environments. At the core of this method are the governance policies devised to meet compliance standards.

Although intelligent availability doesn't itself identify sensitive information or dictate where it can be routed, it offers such freedom of portability across settings (including operating systems and physical and virtual infrastructure) that it all but forces organizations to identify these factors. This real-time, on-demand shifting of resources is the catalyst for evaluating workloads through a governance lens, updating policies as needed, and leveraging them to predetermine the optimal routing of data and its processing for availability. Intelligent availability is the means of implementing intelligent data governance; it's a conduit between performance and regulatory compliance that increases competitive advantage.
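To make that concrete, here is a minimal sketch of what a predetermined routing policy might look like in code. The data classifications, region names, and rules below are illustrative assumptions, not drawn from any particular product or regulation:

```python
# Minimal sketch of a governance-aware routing check.
# Data classes, regions, and rules are hypothetical examples.

ROUTING_POLICY = {
    # data classification -> destinations where it may live
    "gdpr_personal": {"eu-west-1", "eu-central-1", "onprem-frankfurt"},
    "pci_cardholder": {"onprem-datacenter"},  # never bursts to public cloud
    "public": {"us-east-1", "eu-west-1", "onprem-datacenter"},
}

def destination_is_compliant(data_class: str, destination: str) -> bool:
    """Permit a destination only if policy explicitly allows this data class there."""
    allowed = ROUTING_POLICY.get(data_class)
    # Unclassified data is never moved automatically.
    return allowed is not None and destination in allowed

def can_shift_workload(data_classes: list[str], destination: str) -> bool:
    """A workload may shift only if every data class it touches is permitted."""
    return all(destination_is_compliant(dc, destination) for dc in data_classes)

print(can_shift_workload(["gdpr_personal"], "eu-west-1"))   # True
print(can_shift_workload(["pci_cardholder"], "us-east-1"))  # False
```

The point isn't this specific structure; it's that routing decisions consult an explicit, auditable policy before any data moves.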

Implementing Intelligent Governance

Once those policies are in place, the intelligent availability approach maximizes cloud deployments while maintaining regulatory adherence. Its intelligent algorithms continuously monitor server performance to automatically detect surges, either issuing alerts to organizations or initiating the transfer of workloads to alternative hosts. With agreed-upon policies already conforming to governance requirements, prudent organizations can confidently move data to the cloud without violating regulations. Thus, cloud bursting can regularly be deployed to minimize network strain during OLTP spikes (or any other surge) without costly penalties.
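As a rough illustration of how monitoring and policy checks fit together, the sketch below reuses can_shift_workload from the earlier example. The surge threshold, hosts, and orchestration calls are hypothetical stand-ins, not any vendor's actual API:

```python
import random
from dataclasses import dataclass, field

# Hypothetical surge threshold; real deployments tune this per workload.
CPU_SURGE_THRESHOLD = 0.85

@dataclass
class Workload:
    name: str
    data_classes: list[str] = field(default_factory=list)

def current_cpu_load(host: str) -> float:
    """Stand-in for a real metrics query to a monitoring agent."""
    return random.random()

def burst_to_cloud(workload: Workload, destination: str) -> None:
    """Stand-in for the orchestration call that actually moves the workload."""
    print(f"Bursting {workload.name} to {destination}")

def alert_operations(workload: Workload, destination: str) -> None:
    """Escalate to humans rather than make a non-compliant move."""
    print(f"ALERT: {workload.name} is surging, but {destination} is off-policy")

def monitor_once(workload: Workload, primary_host: str, cloud_dest: str) -> None:
    if current_cpu_load(primary_host) > CPU_SURGE_THRESHOLD:
        # Governance gate: burst only where policy permits this data.
        if can_shift_workload(workload.data_classes, cloud_dest):
            burst_to_cloud(workload, cloud_dest)
        else:
            alert_operations(workload, cloud_dest)

monitor_once(Workload("orders-db", ["gdpr_personal"]), "onprem-frankfurt", "eu-west-1")
```

The design choice worth noting is the order of operations: the surge is detected first, but the governance check always runs before any transfer is initiated.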

Companies also have the benefit of automatic failover to the cloud to maintain business continuity in the event of natural disasters or infrastructure failure. This option virtually eliminates downtime, enabling IT to perform maintenance on even the most mission-critical infrastructure once data is properly re-routed offsite.

One of intelligent availability's most useful advantages is the capability to span clouds, both among providers and across all the variations of clouds available. Although well-crafted governance policies are essential to reaping the pricing boons of cloud brokering, intelligent availability's ability to start and stop workloads at the instance level while transporting data between settings is just as valuable.

The data processing issue is a little more complicated, but intelligent availability's flexibility helps here too. Once organizations have researched the various Service Level Agreements of cloud vendors — as well as the policies of other data processors, including software companies — they can utilize these platforms in accordance with regulations, transferring their resources where they're permitted. Most encryption concerns are solved with client-side encryption, whereby organizations encrypt data before replicating it to the cloud and retain sole possession of the keys. Intelligent availability measures transport this data to the cloud and back as needed.
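A minimal sketch of that client-side pattern, using Python's cryptography package, appears below; the upload step is a hypothetical placeholder for whatever replication mechanism is actually in use:

```python
from cryptography.fernet import Fernet

# The key is generated and held on premises; the cloud provider never sees it.
# In practice it would live in an on-prem key management system, not a variable.
key = Fernet.generate_key()
cipher = Fernet(key)

def encrypt_for_replication(plaintext: bytes) -> bytes:
    """Encrypt locally so only ciphertext ever crosses into the cloud."""
    return cipher.encrypt(plaintext)

def decrypt_after_retrieval(ciphertext: bytes) -> bytes:
    """Decrypt only once data is back in an environment that holds the key."""
    return cipher.decrypt(ciphertext)

record = b"personal or cardholder data"
blob = encrypt_for_replication(record)
# upload_to_cloud(blob)  # hypothetical replication call; provider stores ciphertext only
assert decrypt_after_retrieval(blob) == record
```

Because the provider only ever stores ciphertext, shifting these replicas between clouds doesn't expose readable data, though transport-level encryption (e.g., TLS) still applies in transit.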

Adherence

The escalating presence of regulatory mandates isn't likely to subside anytime soon. Compliance standards are just as critical as performance when making workloads available across heterogeneous settings. Intelligent availability's support of versatile storage and processing environments, in conjunction with its low-latency portability, makes it a natural extension of intelligent data governance implementations. These methods ensure data is moved correctly — the first time — to maintain regulatory adherence during an age when it's most difficult to do so.

Don Boxley is CEO and Co-Founder of DH2i