The Recurring Advantages of Intelligent Availability
July 17, 2018

Don Boxley
DH2i

The value derived from data-driven processes is increasingly tied to analytics. Once considered a desirable complement to intuitive decision-making, analytics has evolved into a cornerstone of mission-critical applications across industries, for any number of use cases.

However, as the reasons for employing analytics in business processes have multiplied, so has the complexity of deployments. Organizations now routinely confront situations in which data is spread across a multitude of environments, making it arduous, error-prone, and time-consuming to centralize for a single use case. Perhaps even more common is the situation in which it would be beneficial to deploy in multiple settings (such as on Linux platforms, in the cloud, or with containers), but budgetary or technological constraints make doing so unviable. Application performance often suffers as well.

The truth is that today's ever-shifting data landscape demands enterprise agility for analytics as much as for any other source of competitive advantage. Processing is optimized by performing analytics as close to the data as possible, and that data may need to switch locations for disaster recovery (DR), scheduled downtime, or limited-time pricing offers in the cloud.

By embracing an agile approach predicated on what can be called "intelligent availability," organizations can dynamically provision analytics in a variety of environments to satisfy numerous business use cases, seamlessly and rapidly transferring data between on-premises settings (both Windows and Linux machines), the cloud, and containers.

Consequently, they enjoy decreased infrastructure costs, effective DR, and a greater overall yield from analytics, and from their data in general.

Analytics in the Cloud

One of the most common ways intelligent availability improves analytics is in cloud deployments. There are a number of advantages to going to the cloud for analytics, not least the pay-per-use pricing model, reduced infrastructure, and elastic scalability of cloud resources. There are also several software-as-a-service (SaaS) and platform-as-a-service (PaaS) options, some of which include advanced analytics capabilities for machine learning and neural networks, for users without data science experts on staff.

Nonetheless, the most persuasive reason for running analytics in the cloud is the alternative: attempting to scale on premises. Traditionally, scaling in physical environments followed an exponential cost curve, with numerous fixed costs that frequently limited application performance and enterprise agility. By scaling in the cloud and through other contemporary measures, however, organizations enjoy a far more affordable, roughly linear cost curve.

This point is best demonstrated by a healthcare example, in which a well-known global healthcare organization was running SQL Server on premises for its OLTP but wanted to deploy a cloud model for Business Intelligence (BI). The choice was clear: either ignore budget constraints and buy additional physical infrastructure (with all the attendant costs for licenses and servers), or deploy to the cloud for real-time access to the data in its existing systems. The latter option decreased costs and maximized operational efficiency, as most well-implemented cloud analytics solutions will.

The Upside

In this case and many others, optimizing cloud analytics involves continually replicating on-premises data to the cloud. Shrewd organizations minimize these costs by opting for asynchronous replication; the aforementioned healthcare organization did so with approximately one second of latency, giving it near real-time access to its healthcare data. Replication to the cloud is often inexpensive or even free, making the data transfer component highly cost-effective. By making this data available for BI in the cloud, the organization realized several advantages. The most prominent was the reuse of a single dataset for multiple purposes. Business users (in this case physicians, clinicians, nurses, back-office staff, and others) can access this read-only data for intelligence that informs diagnosis and treatment options, as well as for administrative and operational requirements, while transactional (OLTP) workloads continue running on premises.
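
For SQL Server environments like the one described above, this pattern is commonly implemented with an availability group that adds a cloud-hosted, asynchronous-commit secondary. The following is a minimal sketch, not the healthcare organization's actual configuration; the availability group name AG_Analytics, the replica name CLOUD-SQL-01, and the endpoint URL are all hypothetical.

    -- Run on the on-premises primary. Adds a cloud-hosted secondary that
    -- replicates asynchronously (so OLTP commits pay no latency penalty)
    -- and accepts read-only connections for BI and reporting.
    ALTER AVAILABILITY GROUP [AG_Analytics]
    ADD REPLICA ON 'CLOUD-SQL-01'
    WITH (
        ENDPOINT_URL = 'TCP://cloud-sql-01.example.com:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)
    );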

This latter point is extremely important. With this paradigm, reporting does not create performance issues that compromise the work of those using the on-premises data, which could occur if both groups were working from the same copy of the data. Instead, each group of users benefits from the model.
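
In SQL Server terms, that separation is typically enforced at the connection level: reporting clients declare read-only intent and are routed to the secondary. Another hedged sketch, reusing the hypothetical names from above (ONPREM-SQL-01 is an assumed name for the on-premises primary):

    -- Advertise a routing target on the cloud secondary.
    ALTER AVAILABILITY GROUP [AG_Analytics]
    MODIFY REPLICA ON 'CLOUD-SQL-01'
    WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = 'TCP://cloud-sql-01.example.com:1433'));

    -- Send read-intent connections to the cloud replica while the
    -- on-premises node holds the primary role.
    ALTER AVAILABILITY GROUP [AG_Analytics]
    MODIFY REPLICA ON 'ONPREM-SQL-01'
    WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = ('CLOUD-SQL-01')));

BI tools then add ApplicationIntent=ReadOnly to their connection strings while transactional applications connect as usual, so neither group contends with the other for the same copy of the data.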

The healthcare group also benefits from the primary data being stored on premises, which matters for compliance in this highly regulated industry. It is also worth noting the flexibility of this architecture, which most immediately affects cloud users. Organizations can establish clusters in any of the major cloud providers, such as Amazon Web Services (AWS) or Azure, or in any private or hybrid clouds they like. They can also readily move resources between these providers as they see fit, perhaps according to use case or to capture discounted pricing. Even better, when they no longer need those analytics they can quickly and painlessly halt those deployments, or simply migrate them to other environments, such as containers.

Plus Automatic Failovers

The above-mentioned healthcare group gains a third advantage from using an intelligent availability approach to run analytics in the cloud: automatic failover. In the event of any downtime for the on-premises infrastructure (whether scheduled maintenance or a catastrophic event), its data will automatically fail over to the cloud using intelligent availability techniques. The ensuing continuity lets both groups of users keep accessing data: the primary workloads simply transfer to cloud servers and continue running. This benefit typifies the agility of an intelligent availability approach. Workloads run continuously through downtime, and they run where users specify in order to create the most meaningful competitive advantage. Most high availability methods do not give users the flexibility of choosing between Windows and Linux settings. Intelligent availability solutions also simplify management and improve resiliency for Availability Groups, provisioning resources where they are needed without downtime.
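
For context, failing over to an asynchronous-commit replica in a plain SQL Server availability group is a manual, forced operation; an intelligent availability layer automates the health detection and issues the equivalent of the command below once it declares the on-premises site down. This is an illustrative sketch using the hypothetical AG_Analytics group, not a description of any vendor's internals.

    -- Run on the cloud secondary after the on-premises primary is lost.
    -- Forced failover is required for an asynchronous-commit replica and
    -- can lose transactions that had not yet been replicated.
    ALTER AVAILABILITY GROUP [AG_Analytics] FORCE_FAILOVER_ALLOW_DATA_LOSS;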

Recurring Advantages

Intelligent availability solutions and methodologies enable users to maximize analytic output by deriving recurring advantages from what is essentially the same dataset. They let users move copies of that data to and between cloud providers for low-latency analytics using some of the most advanced techniques available today. Moreover, this approach maintains critical governance and performance requirements for on-premises deployments. Perhaps best of all, it preserves these benefits while automatically failing over to offsite locations, protecting the continuity of workflows in an era in which information technology is anything but predictable.

Don Boxley is CEO and Co-Founder of DH2i.