Achieving Optimum Application Performance with Intelligent Availability
May 31, 2018

Don Boxley
DH2i


The thought of a dramatic decrease in application performance, let alone a system or application outage, sends shivers down even the hardiest IT professional's spine. That's because they know that at today's pace of business, slow response time (or, G-d forbid, just a few minutes of downtime) can equate to lost business both immediately and into the future, as well as potentially painful legal and regulatory compliance consequences. There is no way around it: availability has cemented itself as one of the most essential elements of any successful data center. Yet many organizations are beginning to realize, sometimes the hard way, that traditional methodologies and technologies for high availability (HA) have limits.

What's needed instead is a new approach that enables the dynamic transfer of workloads across IT environments, optimized for the particular job at hand. Accomplishing this objective requires a methodology that is inherently intelligent, flexible, cost-effective, and free of downtime. That methodology is intelligent availability, which builds on the basic principles of high availability to deliver those advantages and more.

Intelligent availability is the future of availability and arguably the most critical component in the blueprint for creating business value through digital transformation.

Traditional High Availability

Historically, high availability (HA) has been defined quite simply as the continuous operation of applications and system components. Traditionally, this objective was accomplished in a variety of ways, each with its own drawbacks. One of the most common involves failovers, in which data and operations are transferred to a secondary system during scheduled downtime or unplanned failures.

Clustering methodologies are often leveraged with this approach to make resources across systems, including databases, servers, and processors, available to one another. Clustering is applicable to VMs and physical servers and can provide resilience against OS, host, and guest failures. Failovers depend on a degree of redundancy: HA is maintained by keeping backup copies of system components. Redundant networking and storage options may be leveraged with VMs to encompass additional system components or data copies.
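To make the traditional failover model concrete, here is a minimal sketch in Python of the heartbeat-and-promote loop at the heart of most clustering approaches. The node names, port, and thresholds are hypothetical, and the health probe is deliberately crude; real clusters check host, guest, and service state through agents or management APIs.

```python
import socket
import time

# Hypothetical two-node cluster: one active node, one standby holding redundant copies.
NODES = {"active": "sql-node-a", "standby": "sql-node-b"}
HEARTBEAT_INTERVAL = 5   # seconds between health checks
MISSED_BEATS_LIMIT = 3   # consecutive failed checks before failing over


def is_healthy(node: str, port: int = 1433, timeout: float = 2.0) -> bool:
    """Crude health probe: can we open a TCP connection to the service port?"""
    try:
        with socket.create_connection((node, port), timeout=timeout):
            return True
    except OSError:
        return False


def monitor_and_failover() -> None:
    """Watch the active node; promote the standby after repeated failures."""
    missed = 0
    while True:
        if is_healthy(NODES["active"]):
            missed = 0
        else:
            missed += 1
            if missed >= MISSED_BEATS_LIMIT:
                # Promotion is where redundancy pays off, and where the cost lives:
                # storage is remounted, services restarted, clients repointed.
                print(f"failing over from {NODES['active']} to {NODES['standby']}")
                NODES["active"], NODES["standby"] = NODES["standby"], NODES["active"]
                missed = 0
        time.sleep(HEARTBEAT_INTERVAL)
```

The sketch also hints at the drawbacks discussed below: every protected workload needs a standby kept ready, and the monitoring itself must be maintained and tested.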

One of the most serious problems with many of these approaches is cost, especially since there are numerous instances in which HA is unnecessary. Whether it is warranted depends on how servers are actually used and how important they are, as well as on which virtualization techniques are in play. Low-priority servers that don't affect end users, such as those used for testing, don't need HA, nor do servers whose recovery time objectives (RTOs) are significantly greater than their restore times.
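As a rough illustration of that cost argument, the "does this server actually need HA?" question reduces to a simple test. The sketch below is a simplification with hypothetical field names; real assessments weigh more factors, but comparing restore time against RTO is the heart of it.

```python
from dataclasses import dataclass


@dataclass
class Server:
    name: str
    affects_end_users: bool   # e.g., production vs. test/dev
    rto_minutes: float        # recovery time objective agreed with the business
    restore_minutes: float    # measured time to restore from backup


def needs_ha(server: Server) -> bool:
    """A server justifies HA spend only if an outage hurts end users
    and an ordinary restore cannot meet its RTO."""
    if not server.affects_end_users:
        return False
    return server.restore_minutes > server.rto_minutes


# A test box with a generous RTO does not need HA; a customer-facing
# database that restores slower than its RTO does.
print(needs_ha(Server("test-01", affects_end_users=False, rto_minutes=480, restore_minutes=120)))  # False
print(needs_ha(Server("erp-db", affects_end_users=True, rto_minutes=15, restore_minutes=90)))      # True
```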

Certain HA solutions, such as some of the more comprehensive hypervisor-based platforms, are indiscriminate in this regard, so users may end up paying for HA on components that don't need it. Traditional high availability approaches also require constant testing that can drain human and financial resources; worse, neglecting that testing can result in unplanned downtime. Finally, arbitrarily implementing redundancy for system components broadens an organization's data landscape, resulting in more copies and more potential weaknesses for security and data governance.

The Future: Digital Transformation

Many of these virtualization measures for HA are losing relevance because of digital transformation. To truly transform the way your organization conducts business with digitization technologies, you must deploy them strategically. Traditional HA methods simply do not allow for the fine-grained flexibility needed to optimize business value from digitization. Digital transformation means accounting for the varied computing environments of Linux and Windows operating systems alongside containers. It means integrating an assortment of legacy systems with newer ones specifically designed to handle the flood of big data and modern transaction systems.

Perhaps most importantly, it means aligning that infrastructure with business objectives in a way that adapts to changing domain or customer needs. Such flexibility is essential for optimizing IT processes around end-user goals. The reality is that most conventional methods of HA simply add to the infrastructural complexity of digital transformation without addressing the primary need: adapting to evolving business requirements. In the face of digital transformation, organizations need to streamline their various IT systems around domain objectives, rather than bending domain objectives around the limits of their systems, which simply decreases efficiency while increasing cost.

Enter Intelligent Availability

Intelligent availability is ideal for digital transformation because it enables workloads to always run in the best execution venue (BEV). It combines this advantage with the continuous operations of HA, but takes a fundamentally different approach in doing so. Intelligent availability takes the base idea of HA, dedicating resources across systems to prevent downtime, and builds on it, extending it so workloads can be moved wherever they maximize competitive advantage. It allows organizations to move workloads between operating systems, servers, and physical and virtual environments with virtually no downtime.

At the core of this approach are availability technologies intelligent enough to move workloads independently of one another, something that traditional physical and virtualized approaches to workload management fundamentally cannot do. By decoupling an array of system components (containers, application workloads, services, and shared files) without forcing standardization on a single database or OS, these technologies transfer each workload to the environment that fits it best from an IT-goal and budgetary standpoint.

It's vital to remember that this judgment call is based on how best to achieve a defined business objective. Furthermore, these technologies provide this flexibility for individual instances to ensure negligible downtime and a smooth transition from one environment to another. The use cases for this instantaneous portability are abundant. Organizations can use these techniques for uninterrupted availability, integration with new or legacy systems, or the incorporation of additional data sources. Most importantly, they can do so with the assurance that the intelligent routing of the underlying technologies is selecting the optimal setting in which to execute workloads (i.e., the BEV). Once suitably architected, the process takes no longer than a simple stop and start of a container or an application.
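A minimal sketch of what that "stop and start" move might look like follows. The host attributes, scoring rule, and stop/start hooks are hypothetical illustrations, not any vendor's implementation; the point is that venue selection is a policy decision layered on top of an ordinary stop on the source and start on the target.

```python
from dataclasses import dataclass


@dataclass
class Host:
    name: str
    platform: str           # e.g., "windows-vm", "linux-bare-metal", "cloud"
    cpu_headroom: float     # fraction of CPU currently free
    cost_per_hour: float    # relative infrastructure cost


def best_execution_venue(hosts: list[Host], min_headroom: float = 0.25) -> Host:
    """Pick the cheapest host with enough headroom for the workload.
    A real intelligent-availability engine weighs many more signals
    (licensing, data locality, compliance), but the shape is the same."""
    candidates = [h for h in hosts if h.cpu_headroom >= min_headroom]
    if not candidates:
        raise RuntimeError("no host meets the workload's requirements")
    return min(candidates, key=lambda h: h.cost_per_hour)


def move_workload(workload: str, source: Host, target: Host) -> None:
    """The move itself is just a stop on the source and a start on the target;
    these prints stand in for hooks into whatever runs the container or app."""
    print(f"stopping {workload} on {source.name}")
    print(f"starting {workload} on {target.name}")


hosts = [
    Host("win-a", "windows-vm", cpu_headroom=0.10, cost_per_hour=1.0),
    Host("lin-b", "linux-bare-metal", cpu_headroom=0.40, cost_per_hour=0.6),
    Host("cld-c", "cloud", cpu_headroom=0.80, cost_per_hour=0.9),
]
target = best_execution_venue(hosts)
move_workload("orders-db", source=hosts[0], target=target)
```

When requirements change, the same selection runs again and the workload simply stops in one place and starts in another, which is the agility argument made in the next section.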

Intelligent Availability – the Intelligent Choice

Intelligent availability is important for a number of reasons. First, it delivers all the advantages of HA at a lower cost and with a dramatically greater degree of efficiency. Second, it provides the agility required to capitalize on digital transformation, enabling organizations to quickly and easily move systems, applications, and workloads to where they can create the greatest competitive impact, and then, when requirements change, move them back or somewhere else entirely.

As the saying goes, "The only constant is change." And in today's constantly changing business environment, intelligent availability delivers the agility required to not only survive, but to prevail.

Don Boxley is CEO and Co-Founder of DH2i