The Recurring Advantages of Intelligent Availability
July 17, 2018

Don Boxley
DH2i


The value generated by data-driven processes has become increasingly tied to analytics. Once considered a desirable complement to intuitive decision-making, analytics has developed into a core component of mission-critical applications across industries, serving any number of use cases.

However, as the reasons for applying analytics to business processes have multiplied, so has the complexity of deployments. Organizations now routinely face situations in which data is spread across many environments, making it arduous, error-prone, and time-consuming to centralize for a single use case. Perhaps even more common is the situation in which it would be beneficial to deploy in multiple settings (such as on Linux platforms, in the cloud, or with containers), but budgetary or technological constraints make it unviable. Application performance often suffers as well.

The truth is that today's ever-shifting data landscape demands enterprise agility for analytics as much as for any other source of competitive advantage. Processing is optimized by performing analytics as close to the data as possible, and that data may need to change locations for disaster recovery (DR), scheduled downtime, or limited-time pricing offers in the cloud.

By embracing an agile approach predicated on what can be called "intelligent availability," organizations can dynamically provision analytics in a variety of environments to satisfy numerous business use cases, seamlessly and rapidly transferring data between on-premises settings (including both Windows and Linux machines), the cloud, and containers.

Consequently, they enjoy decreased infrastructure costs, effective DR, and an overall greater yield from analytics, and from their data in general.

Analytics in the Cloud

One of the most common ways intelligent availability improves analytics is in cloud deployments. There are a number of advantages to moving analytics to the cloud, not least the pay-per-use pricing model, reduced infrastructure, and elastic scalability of cloud resources. There are also several software-as-a-service (SaaS) and platform-as-a-service (PaaS) options, some offering advanced analytics capabilities such as machine learning and neural networks, for users without data science experts on staff.

Nonetheless, the most persuasive reason for running analytics in the cloud is the alternative: attempting to scale on premises. Traditionally, scaling in physical environments followed an exponential cost curve, with numerous fixed costs that frequently limited application performance and enterprise agility. By scaling in the cloud and through other contemporary measures, however, organizations enjoy a far more affordable linear curve.
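
To make that contrast concrete, here is a minimal sketch comparing the two curves. Every figure is hypothetical, invented purely for illustration, and the on-premises side is modeled with a simple stepped fixed-cost structure (capacity bought in whole servers, each carrying hardware and license costs) rather than any vendor's actual pricing.

```python
# Illustrative only: every number below is hypothetical, not vendor pricing.
# On premises, capacity is bought in whole servers, so spend climbs in
# large fixed steps; pay-per-use cloud pricing grows linearly instead.

def on_prem_cost(workload_units: int) -> int:
    """Capacity comes in 100-unit servers; each server adds a fixed
    hardware cost plus a license cost."""
    servers = -(-workload_units // 100)        # ceiling division
    return servers * (20_000 + 8_000)          # hardware + license per server

def cloud_cost(workload_units: int) -> int:
    """Pay-per-use: cost scales linearly with consumption."""
    return workload_units * 95                 # hypothetical per-unit rate

for units in (50, 100, 150, 400):
    print(f"{units:>4} units | on-prem ${on_prem_cost(units):>9,} "
          f"| cloud ${cloud_cost(units):>9,}")
```

Even in this toy model, a workload of 50 units already requires a full server's worth of on-premises spend, while cloud spend tracks actual consumption; that fixed-cost burden is the gap described above.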

This point is best demonstrated by a healthcare example, in which a well-known global healthcare organization was running SQL Server on premises for its OLTP but wanted to deploy a cloud model for business intelligence (BI). The choice was clear: either blow past budget constraints by purchasing additional physical infrastructure (with all the unavoidable costs for licenses and servers), or deploy to the cloud for real-time access to data on its existing systems. The latter option decreased costs and maximized operational efficiency, as most well-implemented cloud analytics solutions do.

The Upside

In this case and many others, optimizing cloud analytics involves continually replicating on-premises data to the cloud. Shrewd organizations minimize these costs by opting for asynchronous replication; the aforementioned healthcare organization did so with approximately one second of latency, giving near-real-time access to its healthcare data. Replication to the cloud is often inexpensive or even free, making the data transfer component highly cost-effective. By making this data available for BI in the cloud, the organization realized several advantages. The most prominent was the reuse of a single dataset for multiple purposes. Business users (in this case physicians, clinicians, nurses, back-office staff, and others) can access the read-only replica for intelligence that informs diagnosis and treatment options, while the on-premises system continues to serve administrative and operational (OLTP) requirements.
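
To illustrate the pattern, here is a minimal sketch of a BI query routed to a readable cloud secondary, assuming a SQL Server Always On availability group with read-only routing configured. The ODBC driver and the ApplicationIntent attribute are real; the listener address, database, table, and credentials are hypothetical placeholders.

```python
# A minimal sketch of routing BI queries to a cloud-hosted readable
# secondary. Assumes an Always On availability group with read-only
# routing; server, database, and credentials are placeholders.
import pyodbc

# ApplicationIntent=ReadOnly asks the availability group listener to
# route this connection to a readable secondary replica, keeping
# reporting traffic off the primary that serves OLTP.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=ag-listener.example.com;"      # AG listener (placeholder)
    "DATABASE=ClinicalData;"               # hypothetical database name
    "ApplicationIntent=ReadOnly;"
    "UID=bi_reader;PWD=changeme;"          # placeholder credentials
)

cursor = conn.cursor()
cursor.execute(
    "SELECT TOP 10 diagnosis_code, COUNT(*) AS cases "
    "FROM dbo.Encounters "                 # hypothetical table
    "GROUP BY diagnosis_code ORDER BY cases DESC"
)
for row in cursor.fetchall():
    print(row.diagnosis_code, row.cases)
```

Because the connection declares itself read-only, the listener directs it to the replica rather than the primary.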

This latter point is extremely important. Under this paradigm, reporting causes no application performance issues for those working with the on-premises data, as could happen if both groups were hitting the same copy of the data for their respective uses. Instead, both groups of users benefit from the model.

The healthcare group is further helped by keeping the primary data on premises, which is important for compliance in this highly regulated industry. It is also worth noting the flexibility of this architecture, which most immediately affects cloud users. Organizations can establish clusters in any of the major public clouds, such as Amazon Web Services (AWS) or Azure, or in any private or hybrid clouds they like. They can also readily move resources between providers as they see fit, whether by use case or to capture discounted pricing. Even better, when they no longer need those analytics, they can quickly and painlessly halt those deployments, or simply migrate them to other environments, such as containers.

Plus Automatic Failovers

The above-mentioned healthcare group gains a third advantage from an intelligent availability approach to running analytics in the cloud: automatic failover. In the event of any downtime for on-premises infrastructure, whether scheduled maintenance or a catastrophic event, its data will automatically fail over to the cloud using intelligent availability techniques. The resulting continuity lets both groups of users keep accessing data with no downtime; the primary workloads simply transfer to cloud servers and continue running. This benefit typifies the agility of the intelligent availability approach: workloads run continuously through downtime events, and they run where users choose to place them for the greatest competitive advantage. Most high availability methods do not give users the flexibility of choosing between Windows and Linux settings. Intelligent availability solutions also simplify management and strengthen resiliency for Availability Groups, provisioning resources where they are needed without downtime.
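
Intelligent availability software performs this detection and rerouting itself; purely to illustrate the idea, the sketch below reduces it to a health check that prefers the on-premises primary and falls back to a cloud replica when the primary stops answering. Hostnames, ports, and the polling interval are all hypothetical.

```python
# A simplified illustration of the failover idea only: real intelligent
# availability software detects failures and moves workloads itself.
# Hosts, ports, and intervals below are hypothetical.
import socket
import time

ON_PREM = ("sql-onprem.example.com", 1433)   # primary, on premises
CLOUD = ("sql-cloud.example.com", 1433)      # asynchronous cloud replica

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def active_endpoint() -> tuple:
    """Prefer the on-premises primary; fall back to the cloud replica
    when the primary stops answering."""
    return ON_PREM if reachable(*ON_PREM) else CLOUD

while True:
    host, port = active_endpoint()
    print(f"routing workloads to {host}:{port}")
    time.sleep(30)   # re-check every 30 seconds (arbitrary interval)
```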

Recurring Advantages

Intelligent availability solutions and methodologies let users maximize analytic output by extracting recurring advantages from what is essentially the same dataset. They allow users to move copies of that data to, and between, cloud providers for low-latency analytics with some of the most advanced techniques in use today. This approach does so while maintaining critical governance and performance requirements for on-premises deployments. Perhaps best of all, it preserves these benefits while automatically failing over to offsite locations, maintaining continuity of workflows in an era in which information technology is anything but predictable.

Don Boxley is CEO and Co-Founder of DH2i.