The Recurring Advantages of Intelligent Availability
July 17, 2018

Don Boxley
DH2i


The value derived from data-driven processes has become increasingly linked with analytics. Once considered a desirable complement to intuitive decision-making, analytics has become a main focus of mission-critical applications across industries and use cases.

However, as the motives for employing analytics in business processes have multiplied, so has the complexity of deployments. Organizations must now routinely confront situations in which data is spread across a multitude of environments, making it arduous, error-prone and time-consuming to centralize for a single use case. Perhaps even more common is the situation in which it would be beneficial to deploy in multiple settings (such as on Linux platforms, in the cloud, or with containers), but budgetary or technological constraints make it unviable. Application performance often suffers as well.

The truth is that today’s ever-shifting data landscape demands enterprise agility for analytics as much as for any other aspect of competitive advantage. Processing is optimized by performing analytics as close to the data as possible, and that data may need to switch locations for disaster recovery (DR), scheduled downtime, or limited-time pricing offers in the cloud.

By embracing an agile approach predicated on what can be called “intelligent availability,” organizations can dynamically provision analytics in a variety of environments to satisfy numerous business use cases, seamlessly and rapidly transferring data between on-premises settings (including both Windows and Linux machines), the cloud and containers.

Consequently, they enjoy decreased infrastructure costs, effective DR, and a greater overall yield from analytics and from their data in general.

Analytics in the Cloud

One of the more common ways in which intelligent availability improves analytics is with cloud deployments. There are a number of advantages to going to the cloud for analytics, not the least of which are the pay-per-use pricing model, decreased infrastructure, and elastic scalability of cloud resources. There are also several software-as-a-service (SaaS) and platform-as-a-service (PaaS) options, some of which include advanced analytics capabilities for machine learning and neural networks, for users without data science experts on staff.

Nonetheless, the most persuasive reason for running analytics in the cloud is the alternative: attempting to scale on premises. Traditionally, scaling in physical environments followed an exponential cost curve with numerous fixed costs, which frequently limited application performance and enterprise agility. By scaling in the cloud and with other contemporary measures, however, organizations enjoy a far more affordable linear curve.

This point is best demonstrated by a healthcare example in which a well-known, global healthcare organization was using SQL Server on premises for its OLTP workloads but wanted to deploy a cloud model for Business Intelligence (BI). The choice was clear: either strain the budget with additional physical infrastructure (and the unavoidable costs for licenses and servers) or deploy to the cloud for real-time access to the data in its existing systems. The latter option decreased costs and maximized operational efficiency, as most well-implemented cloud analytics solutions do.

The Upside

In this case and many others, optimizing cloud analytics involves continually replicating on-premises data to the cloud. Shrewd organizations minimize these costs by opting for asynchronous replication; the aforementioned healthcare entity did so with roughly one second of latency, giving it near real-time access to its healthcare data. Replication to the cloud is often inexpensive or even free, making the data transfer component highly cost-effective. By making this data available for BI in the cloud, the organization realized several advantages. The most prominent was the ability to reuse a single dataset for multiple purposes. Business users (in this case physicians, clinicians, nurses, back-office staff, and so on) can access this read-only data for intelligence that informs diagnosis or treatment options, as well as for administrative and operational requirements (OLTP).
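To make that read/write split concrete, here is a minimal Python sketch, assuming a SQL Server Availability Group whose asynchronous, read-only secondary runs in the cloud. The listener names, database, table, credentials and the use of the pyodbc driver are hypothetical placeholders, not details from the case above; the ApplicationIntent=ReadOnly setting is what routes reporting connections to a read-only replica so they never touch the OLTP primary.

import pyodbc

# Hypothetical connection strings: listener names, database and credentials
# are placeholders for illustration only.
OLTP_PRIMARY = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=onprem-ag-listener;DATABASE=ClinicalDB;"
    "UID=app_user;PWD=app_password;"
)
BI_REPLICA = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=cloud-ag-listener;DATABASE=ClinicalDB;"
    "UID=bi_user;PWD=bi_password;"
    # Read-intent connections can be routed to a read-only secondary,
    # keeping reporting load off the OLTP primary.
    "ApplicationIntent=ReadOnly;"
)

def record_visit(patient_id, notes):
    # OLTP write: goes to the on-premises primary.
    conn = pyodbc.connect(OLTP_PRIMARY)
    try:
        conn.execute(
            "INSERT INTO Visits (PatientId, Notes) VALUES (?, ?)",
            patient_id, notes,
        )
        conn.commit()
    finally:
        conn.close()

def visits_per_clinic():
    # BI read: goes to the asynchronously replicated, read-only cloud copy.
    conn = pyodbc.connect(BI_REPLICA)
    try:
        return conn.execute(
            "SELECT ClinicId, COUNT(*) AS Visits FROM Visits GROUP BY ClinicId"
        ).fetchall()
    finally:
        conn.close()

In this arrangement, transactional writes land on the on-premises primary while BI queries read the asynchronously replicated copy, which is why reporting load cannot degrade transactional performance.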

This latter point is extremely important. With this paradigm, reporting does not create application performance issues that compromise the work of those using the on-premises data, which could occur if each group were working from the same copy of the data for its respective purposes. Instead, every user benefits from this model.

The healthcare group also benefits from keeping the primary data stored on premises, which is important for compliance in this highly regulated industry. Equally notable is the flexibility of this architecture, which most immediately affects cloud users. Organizations can establish clusters in any of the major cloud providers, such as Amazon Web Services (AWS) or Azure, or in any private or hybrid clouds they like. They can also readily transition resources between these providers as they see fit, whether according to use case or to take advantage of discounted pricing. Even better, when they no longer need those analytics they can speedily and painlessly halt those deployments, or simply migrate them to other environments, such as containers.

Plus Automatic Failovers

The above-mentioned healthcare group gains a third advantage from using an intelligent availability approach for running analytics in the cloud: automatic failover. In the event of any downtime for on-premises infrastructure, whether scheduled maintenance or a catastrophic event, its data automatically fails over to the cloud using intelligent availability techniques. The ensuing continuity enables both groups of users to keep accessing data without interruption; primary workloads simply transfer to cloud servers and keep running. This benefit typifies the agility of an intelligent availability approach: workloads run continuously through downtime events, and they run where users specify in order to create the most meaningful competitive advantage. Most high availability methods don't give users the flexibility of choosing between Windows and Linux settings. Intelligent availability solutions also simplify management and improve resiliency for Availability Groups, provisioning resources where they're needed without downtime.
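The failover behavior can be sketched in the same spirit. The following is a simplified illustration of the general pattern, not DH2i's implementation: a monitor probes the on-premises primary and, after several consecutive failures, repoints workloads at a cloud standby. The hostnames, ports, and thresholds are hypothetical.

import socket
import time

PRIMARY = ("onprem-sql.example.internal", 1433)
CLOUD_STANDBY = ("cloud-sql.example.net", 1433)
FAILURE_THRESHOLD = 3        # consecutive failed probes before failing over
PROBE_INTERVAL_SECONDS = 5

def is_reachable(host, port, timeout=2.0):
    # A plain TCP connect stands in for a real application-level health probe.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def monitor():
    active = PRIMARY
    failures = 0
    while True:
        if is_reachable(*PRIMARY):
            failures = 0
            active = PRIMARY
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD and active is not CLOUD_STANDBY:
                # In a real deployment this is where a listener, DNS record,
                # or virtual host name would be repointed at the standby so
                # workloads keep running in the cloud.
                active = CLOUD_STANDBY
                print("Failing over workloads to", active[0])
        time.sleep(PROBE_INTERVAL_SECONDS)

Production-grade tooling adds quorum, fencing, and automated failback on top of this, but the core idea is the same: detect that the primary is unavailable and keep the workload running elsewhere.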

Recurring Advantages

Intelligent availability solutions and methodologies enable users to maximize analytic output by creating recurring advantages from what is essentially the same dataset. They allow users to move copies of that data to and between cloud providers for low-latency analytics with some of the most advanced techniques in use today. What's more, this approach maintains critical governance and performance requirements for on-premises deployments. Perhaps best of all, it preserves these benefits while automatically failing over to offsite locations, maintaining the continuity of workflows in an era in which information technology is anything but predictable.

Don Boxley is CEO and Co-Founder of DH2i
