Should I Stay or Should I Go? A Cloudy Decision
June 15, 2020

Scott Leatherman
Virtana


If you've been operating in the cloud for some time now, chances are your business has changed since you first made that move, particularly in the current climate. Has your cloud usage grown considerably, and with it your OpEx costs? Is that just the cost of doing business in the cloud? It doesn't have to be. Here's how you can rationalize your infrastructure, determine whether there are cloud expenses you can reclaim, and decide whether it makes sense to move some of your cloud deployments into co-location.

The rush to the public cloud has slowed as organizations have realized that it is not a "one size fits all" solution. The main issue is the lack of deep, provider-independent visibility into application performance. Our own research recently revealed that 32% of public cloud resources are currently under-utilized, and without proper direction and guidance this will remain the case. What is needed is real-time data and intelligent recommendations to lower costs and assure performance.

The Need for AIOps

To optimize cloud resources, a third-party AIOps-based platform is needed. It provides an independent, granular view of how applications are using capacity and whether that capacity is right-sized. In addition, it monitors application performance in real time and provides the metrics and analytics needed to eliminate bottlenecks. Allocated capacity can also be checked against real-time performance data to ensure an accurate match to workload requirements.
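The right-sizing check described above can be sketched in a few lines: compare each instance's allocated capacity with its observed peak utilization and flag candidates for downsizing. This is a minimal illustration; the instance data, field names, and thresholds are assumptions, not any vendor's API or defaults.

```python
# Illustrative sketch: flag over-provisioned cloud instances by comparing
# allocated capacity with observed peak utilization. All instance data,
# field names, and thresholds are hypothetical placeholders.

def right_size(instances, cpu_threshold=0.40, mem_threshold=0.40):
    """Return instances whose peak CPU and memory usage both stay
    below the given fractions of allocated capacity."""
    candidates = []
    for inst in instances:
        cpu_ratio = inst["peak_cpu"] / inst["vcpus"]
        mem_ratio = inst["peak_mem_gb"] / inst["mem_gb"]
        if cpu_ratio < cpu_threshold and mem_ratio < mem_threshold:
            candidates.append((inst["name"], round(cpu_ratio, 2), round(mem_ratio, 2)))
    return candidates

fleet = [
    {"name": "web-1", "vcpus": 8, "peak_cpu": 1.6, "mem_gb": 32, "peak_mem_gb": 6.4},
    {"name": "db-1",  "vcpus": 4, "peak_cpu": 3.5, "mem_gb": 16, "peak_mem_gb": 14.0},
]
print(right_size(fleet))  # only web-1 is a downsizing candidate
```

In practice the utilization figures would come from continuous performance telemetry rather than a static snapshot, which is exactly why real-time data matters for right-sizing.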

Although the major cloud providers offer cost optimization tools, these give only a coarse picture. What is needed, if the cloud is to remain a vital part of IT infrastructure, is analysis of how billing matches capacity both over time and in real time. Armed with this information, you can plan capacity purchases and uncover wasted spend. By using a single platform for cloud management, you can monitor your infrastructure, plan capacity, and eliminate performance risks. With multi-conditional alerting powered by advanced anomaly detection, performance bottlenecks can be predicted before they affect clients and SLAs.
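The idea behind multi-conditional alerting can be sketched as follows: fire an alert only when several signals agree, here a simple z-score anomaly test on latency combined with an error-rate threshold. The metrics, thresholds, and function names are assumptions for illustration; production anomaly detection would use far richer models.

```python
# Illustrative sketch of multi-conditional alerting: raise an alert only
# when latency is anomalous (a simple z-score test) AND the error rate
# exceeds a threshold. Metric names and limits are assumed, not real.
from statistics import mean, stdev

def is_anomalous(history, current, z_limit=3.0):
    """Flag `current` if it sits more than z_limit standard deviations
    above the historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current > mu
    return (current - mu) / sigma > z_limit

def should_alert(latency_history, latency_now, error_rate, error_limit=0.01):
    # Both conditions must hold, which filters out noisy single-metric alerts.
    return is_anomalous(latency_history, latency_now) and error_rate > error_limit

history = [102, 98, 101, 99, 100, 103, 97, 100]  # latency in ms
print(should_alert(history, 180, error_rate=0.05))  # prints True
```

Requiring both conditions is the point: a latency spike alone, or a brief error burst alone, does not page anyone, which keeps alerts actionable.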

Cloud solutions are not limited to public offerings from the likes of AWS and Azure. Co-location, where your applications are managed on your behalf by a system integrator, is another strong option, and an increasingly attractive one for business-critical applications. But to determine which is best for you, you need to start with the facts.

The "Cloud" promises IT organizations unprecedented value in the form of business agility, faster innovation, superior scalability and most importantly — cost savings. For many organizations, it is at the core of their IT digital transformation strategy. It is a disruptive force that requires application workload behavior knowledge, careful planning and collaboration from well-informed, trusted advisors.

2 Paths to the Cloud

As a first step, enterprises frequently target a subset of their less critical on-premises applications for migration to the public cloud. Typically, organizations will take one of two paths to the cloud.

A. Going cloud-native. Rewrite your application to use resources offered by a cloud provider.

B. Lift and shift. Make minimal or zero code changes to the application; largely, just replicate it in the cloud.

The faster time-to-production choice is to "lift and shift" the targeted applications to a cloud service provider's Infrastructure as a Service (IaaS). The advantage of lift and shift is a reduction in the cost of physical infrastructure, such as hardware, floor space, cooling, and security, and of managing that infrastructure. Savings will differ depending on your unique computing resource needs, workload refactoring, and business model.
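The savings calculation behind a lift-and-shift decision can be sketched as a simple monthly comparison: amortized on-premises costs versus pay-as-you-go IaaS costs. Every figure and rate below is a hypothetical placeholder, not real provider pricing; the point is the shape of the comparison, not the numbers.

```python
# Illustrative sketch: compare monthly cost of a lift-and-shift IaaS
# deployment against the on-premises cost it replaces. All figures
# and rates are hypothetical placeholders, not real provider pricing.

def monthly_cloud_cost(instances, hourly_rate, storage_gb,
                       storage_rate_gb=0.10, hours_per_month=730):
    """Pay-as-you-go compute plus storage."""
    return instances * hourly_rate * hours_per_month + storage_gb * storage_rate_gb

def monthly_onprem_cost(hardware_capex, amortize_months,
                        power_cooling, floor_space, staff):
    """Spread hardware purchase over its useful life, then add running costs."""
    return hardware_capex / amortize_months + power_cooling + floor_space + staff

cloud = monthly_cloud_cost(instances=10, hourly_rate=0.20, storage_gb=2000)
onprem = monthly_onprem_cost(hardware_capex=120_000, amortize_months=36,
                             power_cooling=800, floor_space=500, staff=2000)
print(f"cloud ${cloud:,.0f}/mo vs on-prem ${onprem:,.0f}/mo")
```

A real analysis would also price network egress, licensing, and the workload's actual (not nominal) utilization, which is where the monitoring data discussed earlier feeds directly into the migration decision.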

Answering the Key Questions

Even in its simplest form, an IaaS migration must be carefully planned, which requires answering some fundamental questions:

1. Will my application perform as expected in a public cloud? (Application Fitness)

2. How much will it cost to run my applications in a public cloud? (OpEx)

3. Which cloud service provider is the best choice for my applications? (Cost and Fit)

IT managers need answers to these questions before the actual migration is performed. Since most internal IT organizations don't have deep cloud expertise, the question becomes whom you can trust to provide the answers and help you make better business decisions.

As technology and the cloud stand to play an ever-increasing role throughout organizations, ensuring that you adopt the right type of infrastructure for your business has never been more vital to continued success. Choosing a service that answers your key questions before the migration takes place, and that prepares you with vital insights into the applications and workloads targeted for cloud migration, must be an important part of the decision-making process.

As organizations continue to battle the COVID-19 storm, understanding the product that will overhaul your IT infrastructure, before you fully buy into it, is going to provide the confidence and assurance you need to make that decision a little less cloudy.

Scott Leatherman is CMO of Virtana
