Balancing the Rising Costs of Public Cloud
January 23, 2023

Ahsan Siddiqui
Arcserve

The spiraling cost of energy is forcing public cloud providers to raise their prices significantly. A recent report by Canalys predicted that public cloud prices will jump by around 20% in the US and more than 30% in Europe in 2023. These steep price increases will test the conventional wisdom that moving to the cloud is a cheap computing alternative.

Indeed, many organizations are already looking at their higher cloud bills and assessing whether it still makes sense to keep moving their infrastructure to the cloud. They do have alternatives.

For instance, for solutions that are used regularly and persistently, it might make financial sense to bring them in-house rather than host them in the cloud. Owning the infrastructure and managing it yourself could be more cost-effective in the long run.
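The "cheaper in the long run" claim comes down to break-even arithmetic: an up-front hardware purchase plus ongoing operating costs eventually undercuts a recurring cloud bill, especially one rising 20% a year. The sketch below illustrates that comparison; every dollar figure is an assumption chosen for illustration, not a real quote.

```python
# Hypothetical break-even sketch: compare cumulative cloud spend against
# owning equivalent infrastructure. All figures are illustrative assumptions.

def cumulative_cloud_cost(months, monthly_fee, annual_increase=0.20):
    """Total cloud spend over `months`, with the fee rising each year."""
    total = 0.0
    fee = monthly_fee
    for m in range(months):
        if m > 0 and m % 12 == 0:
            fee *= 1 + annual_increase  # e.g. the ~20% US rise Canalys predicts
        total += fee
    return total

def cumulative_onprem_cost(months, hardware_capex, monthly_opex):
    """Up-front hardware purchase plus ongoing power, space, and staff."""
    return hardware_capex + monthly_opex * months

def break_even_month(monthly_fee, hardware_capex, monthly_opex, horizon=60):
    """First month at which owning becomes cheaper than renting, if any."""
    for m in range(1, horizon + 1):
        if cumulative_onprem_cost(m, hardware_capex, monthly_opex) < \
           cumulative_cloud_cost(m, monthly_fee):
            return m
    return None

if __name__ == "__main__":
    # Illustrative: an $8k/month cloud bill vs $150k of hardware
    # plus $4k/month to run it in-house.
    print(break_even_month(monthly_fee=8_000,
                           hardware_capex=150_000,
                           monthly_opex=4_000))
```

With these assumed numbers, ownership wins partway through year three; with spikier or short-lived workloads, it may never win, which is the trade-off the next paragraph describes.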

On the other hand, more complex technologies and solutions with a high cost of entry, such as artificial intelligence, remain good candidates for cloud hosting because they require so much infrastructure and personnel to run in-house. The cloud also remains an excellent option for services and solutions that require more elasticity, including technologies that must be scaled up quickly for a defined period, such as the last few days of each month or quarter when closing the books, then scaled back down.
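The month-end pattern above is simple enough to express as a schedule-driven scaling rule. The sketch below shows the idea; the instance counts, the three-day close window, and the function name are assumptions for illustration, not any provider's actual autoscaling API.

```python
import calendar
from datetime import date

# Illustrative sketch of schedule-driven elasticity: run extra capacity only
# during the last few days of each month (e.g. while closing the books).
# Instance counts and the window length are assumptions, not a real policy.

BASELINE_INSTANCES = 2
PEAK_INSTANCES = 10
CLOSE_WINDOW_DAYS = 3  # scale out for the final 3 days of each month

def desired_capacity(today: date) -> int:
    """Return how many instances the workload should run on `today`."""
    last_day = calendar.monthrange(today.year, today.month)[1]
    if last_day - today.day < CLOSE_WINDOW_DAYS:
        return PEAK_INSTANCES   # month-end close: scale out
    return BASELINE_INSTANCES   # rest of the month: scale back in

if __name__ == "__main__":
    print(desired_capacity(date(2023, 1, 30)))  # inside the close window
    print(desired_capacity(date(2023, 1, 15)))  # baseline
```

Paying peak rates for only three days a month is exactly the elasticity that is hard to replicate with owned hardware, which must be sized for the peak year-round.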

These are just some of the issues organizations should assess when deciding whether to keep their data and infrastructure in the cloud, move them back on-premises, or transition to a hybrid infrastructure that keeps some data and applications in the cloud while returning others to on-premises systems. Going forward, every organization must take a step back and assess what will work best for it to find the right balance.

The Benefits of Hybrid Cloud

A hybrid cloud has many advantages. Organizations adopting a hybrid cloud approach can more easily control costs and manage their data wherever it resides: on-premises, in a public cloud, or in a private cloud. Many organizations now face emerging trends and threats that affect how they run their business, and they find the flexibility of a hybrid cloud essential.

A hybrid data center is adaptable. It's a viable and practical system that enables companies to meet the growing threat of ransomware attacks while taking on today's evolving business demands — all in real time. A hybrid data center provides strong security, efficient performance, reliability, scalability, agility, and cost-efficiency.

But a hybrid data center requires work. Implementing and operating one presents several IT-management challenges. Yes, a hybrid data center allows a business to efficiently store and shift workloads according to need and better protect its sensitive data. But a hybrid data center brings more complexity to managing servers, networks, storage, and software across the IT landscape.

For instance, organizations running a hybrid cloud must secure their data and applications both on-premises and in the cloud. They also must be able to recover data and applications wherever they were originally hosted, whether on-premises or in the cloud. And they must handle backup and recovery across the entire hybrid environment. To do all this, they need a data management and storage solution built for the needs of a hybrid data center.

The Rise of Data Repatriation

As the cost of the cloud continues to balloon, many companies will take the dramatic step of "repatriating" workloads to preserve precious IT budgets. Already, rising energy prices are forcing organizations to rethink their cloud strategy and start repatriating their data from the cloud to on-premises.

Indeed, market intelligence firm IDC research shows that most organizations are now shifting workloads from the cloud back to on-premises data centers. In the IDC survey, 71% of respondents said they plan to move some or all of the workloads they're now running in public clouds back to on-premises environments in the next two years. A mere 13% said they plan to run all their workloads in the cloud.

There are many reasons why companies are repatriating their workloads from the cloud to on-premises. These include security, performance, regulatory compliance, and a desire for better control of the IT infrastructure. Another reason is cost, which can rise quickly and unexpectedly. Workloads often start small and demand a manageable expenditure, but when workloads jump — which they frequently do — so does the spending, which a company may not have anticipated.

Data volumes in the cloud have grown to the point where they are often unmanageable. Moving some of this data back on-premises can bring benefits beyond lower costs, such as better security and enhanced performance.

But as companies move their data back on-premises, they face several challenges. They need a data-storage solution that can protect their data wherever it resides — on-premises, offsite, or in the cloud. They also need a storage solution that ensures their data is available 24/7/365, even in unforeseen circumstances.

Ideally, they also need a storage solution with analytics that can rapidly determine which datasets are critical to operations and which are not. With these analytics, organizations can efficiently decide which datasets to place in the cloud, which to store locally, and which to bring back on-premises. Analytics also enable companies to decide which data they must back up and which they need not. With this insight, organizations can maintain an intelligent, tiered data architecture that ensures quick access to critical data and cuts costs by identifying data that can live on less expensive, less readily accessible media.
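The tiering decision described above can be sketched as a simple classification rule: criticality and access recency drive the choice of storage tier and backup policy. The thresholds, tier names, and fields below are hypothetical stand-ins for whatever signals a real analytics engine would use.

```python
from dataclasses import dataclass

# Hypothetical sketch of the tiering decision described above: classify each
# dataset by how critical it is and how recently it was accessed, then pick
# a storage tier and a backup policy. Thresholds and tier names are assumptions.

@dataclass
class Dataset:
    name: str
    critical: bool          # needed for day-to-day operations?
    days_since_access: int  # access recency as a stand-in for "analytics"

def choose_tier(ds: Dataset) -> tuple[str, bool]:
    """Return (storage tier, whether to back the dataset up)."""
    if ds.critical:
        return ("on-prem-ssd", True)     # quick access, always backed up
    if ds.days_since_access <= 30:
        return ("cloud-standard", True)  # warm data stays in the cloud
    return ("cloud-archive", False)      # cold data: cheap, slower media

if __name__ == "__main__":
    for ds in [Dataset("orders-db", True, 0),
               Dataset("q3-reports", False, 12),
               Dataset("2019-logs", False, 900)]:
        print(ds.name, choose_tier(ds))
```

Even a rule this crude illustrates the payoff: critical data stays on fast local media, warm data stays where it is, and cold data migrates to cheap archival storage instead of inflating the cloud bill.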

Your To-Do List for Cloud Deployment in 2023

As cloud costs rise, organizations must reexamine their data storage systems. They must implement solutions that enable them to manage their workloads cost-effectively and, at the same time, ensure that their data is always accessible and secure.

Ahsan Siddiqui is Director of Product Management at Arcserve