
Balancing the Rising Costs of Public Cloud

Ahsan Siddiqui
Arcserve

The spiraling cost of energy is forcing public cloud providers to raise their prices significantly. A recent report by Canalys predicted that public cloud prices will jump by around 20% in the US and more than 30% in Europe in 2023. These steep price increases will test the conventional wisdom that moving to the cloud is a cheap computing alternative.

Indeed, many organizations are already looking at their higher cloud bills and assessing whether it still makes sense to keep moving their infrastructure to the cloud. They do have alternatives.

For instance, for solutions used regularly and persistently, it might make financial sense to bring those in-house rather than host them in the cloud. Owning the infrastructure and managing it yourself could be more cost-effective in the long run.
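
As a rough illustration, a back-of-the-envelope break-even calculation can help frame that decision. The sketch below, in Python, compares a steady monthly cloud bill against the up-front hardware cost and ongoing operating cost of running the same workload in-house; every figure in it is a hypothetical placeholder, not vendor pricing.

```python
# Illustrative break-even sketch: steady cloud spend vs. owning the hardware.
# All figures are hypothetical placeholders, not vendor pricing.

def months_to_break_even(monthly_cloud_cost: float,
                         hardware_capex: float,
                         monthly_onprem_opex: float) -> float | None:
    """Return the month at which cumulative on-prem cost drops below cloud cost."""
    monthly_saving = monthly_cloud_cost - monthly_onprem_opex
    if monthly_saving <= 0:
        return None  # never pays back if running it in-house costs as much as the cloud
    return hardware_capex / monthly_saving

if __name__ == "__main__":
    # Assumed numbers for a workload that runs 24/7 at a steady load.
    cloud = 12_000.0      # monthly cloud bill
    capex = 150_000.0     # servers, storage, networking bought up front
    opex = 4_000.0        # power, space, and support per month

    breakeven = months_to_break_even(cloud, capex, opex)
    print(f"Break-even after ~{breakeven:.1f} months" if breakeven else "No break-even")
```

With the assumed numbers, the purchase pays for itself in roughly a year and a half; plug in your own bills and quotes to see whether the math works for your workloads.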

On the other hand, more complex technologies and solutions with a high entry cost, such as artificial intelligence, remain good candidates for cloud hosting because they require so much infrastructure and personnel to run in-house. The cloud also remains an excellent option for specific services and solutions where more elasticity is required. This includes technologies that need to be scaled up quickly for a defined period, such as the last few days of each month or quarter when closing the books, and then scaled back down.
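
One way to picture that elasticity is a simple schedule-driven scaler that raises capacity for the close window and lowers it afterward. The sketch below is a minimal illustration: the node counts, the three-day window, and the scale_to() stub are assumptions standing in for whatever autoscaling interface your platform actually provides.

```python
# Minimal sketch of schedule-driven elasticity: scale a compute pool up for the
# month-end close window and back down afterward. scale_to() is a stand-in for
# your platform's real autoscaling API.
import calendar
from datetime import date

BASELINE_NODES = 4
CLOSE_WINDOW_NODES = 16
CLOSE_WINDOW_DAYS = 3  # last few days of the month

def desired_capacity(today: date) -> int:
    """Return the node count this workload should run at on a given day."""
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    in_close_window = today.day > days_in_month - CLOSE_WINDOW_DAYS
    return CLOSE_WINDOW_NODES if in_close_window else BASELINE_NODES

def scale_to(nodes: int) -> None:
    # Placeholder: call your cloud provider's autoscaling API here.
    print(f"Scaling compute pool to {nodes} nodes")

if __name__ == "__main__":
    scale_to(desired_capacity(date.today()))
```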

These are just some of the issues organizations should assess when determining whether to keep their data and infrastructure in the cloud, move them back on-premises, or transition to a hybrid infrastructure that keeps some data and applications in the cloud while returning others to an on-premises environment. Going forward, all organizations must take a step back and assess what will work best for them to find the right balance.

The Benefits of Hybrid Cloud

A hybrid cloud has a lot of advantages. Organizations adopting a hybrid cloud approach can more easily control costs and manage their data wherever it resides, whether on-premises or in a public or private cloud. Many organizations now face a range of emerging trends and threats that affect how they run their business, and they find the flexibility of a hybrid cloud essential.

A hybrid data center is adaptable. It's a viable and practical system that enables companies to meet the growing threat of ransomware attacks while taking on today's evolving business demands — all in real time. A hybrid data center provides strong security, efficient performance, reliability, scalability, agility, and cost-efficiency.

But a hybrid data center requires work. Implementing and operating one presents several IT-management challenges. Yes, a hybrid data center allows a business to efficiently store and shift workloads according to need and better protect its sensitive data. But a hybrid data center brings more complexity to managing servers, networks, storage, and software across the IT landscape.

For instance, organizations running a hybrid cloud must secure their data and applications both on-premises and in the cloud. They also must be able to recover data and applications wherever the company originally hosted them, whether on-premises or in the cloud. And they must handle backup and recovery across the hybrid environment. To do all this, they must have a data management and storage solution that meets the needs of a hybrid data center.
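
As a minimal illustration of what backup across a hybrid environment can look like in practice, the sketch below keeps one copy of a backup artifact on an on-premises share and pushes a second copy to cloud object storage. The paths, bucket name, and the use of S3 via boto3 are assumptions made for the example, not a prescription for any particular product.

```python
# Minimal sketch of one backup policy spanning a hybrid environment: the same
# backup file is kept on an on-premises share and copied to cloud object
# storage. Paths, bucket name, and cloud target are illustrative placeholders.
import shutil
from pathlib import Path

import boto3  # assumes AWS credentials are configured in the environment

LOCAL_VAULT = Path("/mnt/backup-vault")   # on-premises target
CLOUD_BUCKET = "example-hybrid-backups"   # hypothetical S3 bucket

def back_up(source: Path) -> None:
    """Copy a backup artifact to both the local vault and cloud storage."""
    local_copy = LOCAL_VAULT / source.name
    shutil.copy2(source, local_copy)                         # on-premises copy
    boto3.client("s3").upload_file(str(source), CLOUD_BUCKET, source.name)  # off-site copy

if __name__ == "__main__":
    back_up(Path("/var/backups/app-db-2023-01-31.dump"))
```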

The Rise of Data Repatriation

As the cost of the cloud continues to balloon, many companies will take the dramatic step of "repatriating" workloads to preserve precious IT budgets. Already, rising energy prices are forcing organizations to rethink their cloud strategy and start repatriating their data from the cloud to on-premises.

Indeed, research from market intelligence firm IDC shows that most organizations are now shifting workloads from the cloud back to on-premises data centers. In the IDC survey, 71% of respondents said they plan to move some or all of the workloads they're now running in public clouds back to on-premises environments in the next two years. A mere 13% said they plan to run all their workloads in the cloud.

There are many reasons why companies are repatriating their workloads from the cloud to on-premises. These include security, performance, regulatory compliance, and a desire for better control of the IT infrastructure. Another reason is cost, which can rise quickly and unexpectedly. Workloads often start small, with manageable spending, but when they jump, as they frequently do, spending jumps with them in ways a company may not have anticipated.
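
A simple guardrail can at least make those jumps visible early. The sketch below compares each month's bill to a budget plus a tolerance and flags anything that blows past it; the budget, tolerance, sample figures, and print-based alert are all illustrative.

```python
# Simple sketch of a spend guardrail: compare each month's cloud bill to a
# budget plus a tolerance and flag the overruns. Figures are illustrative.

MONTHLY_BUDGET = 10_000.0
TOLERANCE = 0.15  # flag anything more than 15% over budget

def check_spend(month: str, actual: float) -> None:
    limit = MONTHLY_BUDGET * (1 + TOLERANCE)
    if actual > limit:
        overage = actual / MONTHLY_BUDGET - 1
        print(f"ALERT {month}: cloud spend ${actual:,.0f} is {overage:.0%} over budget")

if __name__ == "__main__":
    for month, spend in [("Jan", 9_800), ("Feb", 10_900), ("Mar", 16_400)]:
        check_spend(month, spend)
```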

Data volumes in the cloud have grown to the point where they are often difficult to manage. Moving some of this data back on-premises can bring benefits beyond lower costs, such as better security and enhanced performance.

But as companies move their data back on-premises, they face several challenges. They need a data-storage solution that can protect their data wherever it resides — on-premises, offsite, or in the cloud. They also need a storage solution that ensures their data is available 24/7/365, even in unforeseen circumstances.

Ideally, they also need a storage solution that provides analytics to rapidly determine which datasets are critical to operations and which are not. With these analytics, organizations can efficiently decide which datasets to place in the cloud, which to store locally, and which to bring back on-premises. Analytics also enables companies to decide which data they must back up and which they don't. With this, organizations can maintain an intelligent, tiered data architecture that ensures quick access to critical data and saves costs by identifying data they can store in less expensive, less readily accessible media.
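
To make the idea concrete, the sketch below shows the kind of tiering decision such analytics might drive: each dataset is bucketed into a hot, warm, or cold tier based on criticality and how recently it was accessed, and only the cold tier is skipped for backup. The thresholds and tier names are assumptions, not a recommendation from any specific tool.

```python
# Sketch of a tiering decision driven by simple analytics: bucket each dataset
# into a storage tier by criticality and recency of access. Thresholds and
# tier names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    business_critical: bool
    days_since_last_access: int

def assign_tier(ds: Dataset) -> str:
    if ds.business_critical or ds.days_since_last_access <= 30:
        return "hot"    # fast local or premium cloud storage; always backed up
    if ds.days_since_last_access <= 180:
        return "warm"   # standard cloud storage
    return "cold"       # cheap archival media, slower to retrieve

def needs_backup(ds: Dataset) -> bool:
    return assign_tier(ds) != "cold"

if __name__ == "__main__":
    catalog = [
        Dataset("orders-db", True, 0),
        Dataset("marketing-assets", False, 90),
        Dataset("2019-log-archive", False, 700),
    ]
    for ds in catalog:
        print(ds.name, assign_tier(ds), "backup" if needs_backup(ds) else "skip")
```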

Your To-Do List for Cloud Deployment in 2023

As cloud costs rise, organizations must reexamine their data storage systems. They must implement solutions that enable them to manage their workloads cost-effectively and, at the same time, ensure that their data is always accessible and secure.

Ahsan Siddiqui is Director of Product Management at Arcserve.
