Is Your Data Backup Plan COVID-Proof?
August 26, 2020

Mike Fuhrman
Flexential

No matter what year it is, businesses cannot afford, financially or operationally, to be hit by a data breach or system loss. This is an ongoing concern, but in the age of COVID-19, that risk multiplies several fold due to remote data access. Any downtime as companies work to recover lost information could have major consequences.

At the same time, businesses need to democratize data access for remote employees. We've seen this happening with the growth of cloud investment and migration, especially amid remote work. But despite expanding data access to accommodate remote workers, organizations are not simultaneously training their employees on how to maintain those systems securely.

To preserve open collaboration while keeping their enterprise environments secure, organizations should take this time to do a mid-year check-up on their data backup preparation. There are three areas where organizations should re-examine their operations to ensure data security, flexibility and accessibility.

Is Automatic Backup in Place Across Legacy and Modern Applications?

Because of the constant flow of data throughout the enterprise, if one system is breached, all systems are affected. Therefore, data backup cannot be a siloed effort; it should be implemented uniformly across all of the organization's departments and applications. IT leaders should review how data is currently secured in their organization and make any necessary corrections.

First, organizations should have guidelines in place on where employees need to save and manipulate data, from servers to cloud applications such as Office 365's SharePoint and OneDrive. Leadership must make sure that employees understand where critical information should live and are taking the correct steps to keep it there.
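One way to make such guidelines enforceable is a simple audit that flags work files saved outside the sanctioned, backed-up location. The Python sketch below is illustrative only: the OneDrive sync folder, the scanned directories and the file extensions are assumptions standing in for whatever your own policy defines.

```python
# Illustrative sketch: flag work documents saved outside the sanctioned sync folder.
# The paths and extensions below are assumptions, not any specific organization's policy.
from pathlib import Path

SANCTIONED_ROOT = Path.home() / "OneDrive"                     # assumed managed location
SCAN_ROOTS = [Path.home() / "Desktop", Path.home() / "Downloads"]
WORK_EXTENSIONS = {".docx", ".xlsx", ".pptx", ".pdf"}

def find_strays(scan_roots, sanctioned_root):
    """Yield work files that live outside the sanctioned, backed-up folder."""
    for root in scan_roots:
        if not root.exists():
            continue
        for path in root.rglob("*"):
            if path.suffix.lower() in WORK_EXTENSIONS and sanctioned_root not in path.parents:
                yield path

if __name__ == "__main__":
    for stray in find_strays(SCAN_ROOTS, SANCTIONED_ROOT):
        print(f"Not covered by backup policy: {stray}")
```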

Second, we can't forget about historical application use. Even if a system is no longer actively used within the organization, it may still house important files that have not yet been carried over into new environments or that need to adhere to strict retention rules. Any backup software put into place should support not only the new but also the old, and retention periods need to be properly defined and maintained.
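A lightweight retention audit against an archived legacy system can be scripted. The sketch below is illustrative only: the archive directory and the seven-year retention window are assumptions standing in for whichever rules actually apply to your data.

```python
# Illustrative sketch: check archived backups from a retired application against a
# defined retention period. The directory and seven-year window are assumptions.
import datetime
from pathlib import Path

LEGACY_ARCHIVE = Path("/backups/legacy-crm")    # hypothetical archive location
RETENTION = datetime.timedelta(days=7 * 365)    # assumed retention rule

def audit_retention(archive_dir, retention, now=None):
    """Report which archived files are still inside vs. past their retention window."""
    now = now or datetime.datetime.now()
    report = {"retain": [], "expired": []}
    for path in archive_dir.glob("*"):
        age = now - datetime.datetime.fromtimestamp(path.stat().st_mtime)
        report["expired" if age > retention else "retain"].append(path.name)
    return report

if __name__ == "__main__":
    summary = audit_retention(LEGACY_ARCHIVE, RETENTION)
    print(f"{len(summary['retain'])} files still under retention, "
          f"{len(summary['expired'])} eligible for review")
```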

Third, in remote scenarios, server access is often limited, making cloud use that much more essential. Backup software should go beyond desktop-only applications and extend into an organization's cloud environment, where a majority of employees are actively working on a daily basis.
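Copying a nightly archive into object storage is one simple way to extend backups beyond the desktop. The sketch below assumes an S3-compatible bucket reachable via the boto3 SDK with credentials already configured in the environment; the bucket name and archive path are placeholders.

```python
# Illustrative sketch: copy a nightly backup archive to an S3-compatible bucket so it
# is reachable off-site. Bucket name and file path are hypothetical placeholders.
import datetime
from pathlib import Path

import boto3  # assumes AWS/S3-compatible credentials are configured in the environment

BACKUP_FILE = Path("/backups/nightly/files-2020-08-26.tar.gz")  # hypothetical archive
BUCKET = "example-offsite-backups"                               # hypothetical bucket

def push_offsite(local_path: Path, bucket: str) -> str:
    """Upload one backup archive and return the object key it was stored under."""
    key = f"{datetime.date.today():%Y/%m/%d}/{local_path.name}"
    s3 = boto3.client("s3")
    s3.upload_file(str(local_path), bucket, key)
    return key

if __name__ == "__main__":
    print("Stored as:", push_offsite(BACKUP_FILE, BUCKET))
```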

Is Your Network Backbone Scalable, Strong and Secure?

A backup solution should not, and cannot, be a "one size fits all" approach. Every organization has unique needs and business demands. As such, it's important that your backup capabilities evolve as your organization grows.

Scalability is key to effectively managing all, not just some, of your data. This is especially true while online data creation is in hyperdrive as workforces collaborate virtually.

After correctly scaling your backup solutions to meet your needs, it's also important to take the time to check in on your network backbone. Bandwidth constraints are unacceptable, especially with employees offsite. If your backup solution fails because of connection issues, the entire purpose of implementing it in the first place is lost.
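One low-effort safeguard is a freshness check that flags when a backup job has silently failed, for instance because of a dropped connection. The sketch below is illustrative only: the backup directory and the 26-hour threshold are assumptions for a nightly job with some slack.

```python
# Illustrative sketch: verify that a recent backup actually landed despite bandwidth or
# connectivity problems. The directory and 26-hour threshold are assumptions.
import datetime
from pathlib import Path

BACKUP_DIR = Path("/backups/nightly")       # hypothetical backup target
MAX_AGE = datetime.timedelta(hours=26)      # nightly job plus slack

def latest_backup_age(backup_dir):
    """Return the age of the newest file in the backup directory, or None if empty."""
    files = [p for p in backup_dir.glob("*") if p.is_file()]
    if not files:
        return None
    newest = max(files, key=lambda p: p.stat().st_mtime)
    return datetime.datetime.now() - datetime.datetime.fromtimestamp(newest.stat().st_mtime)

if __name__ == "__main__":
    age = latest_backup_age(BACKUP_DIR)
    if age is None or age > MAX_AGE:
        print("WARNING: no fresh backup found; check connectivity and job status")
    else:
        print(f"Most recent backup is {age} old")
```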

From there, make sure it is all secure. Data should be consistently encrypted and able to be quickly restored if an unexpected disaster strikes, whether an employee is on site or off. Malicious actors know that data access is widening and are seizing the opportunity to attack. In fact, cyber activity has grown exponentially in the last year, with reports showing that 82% of organizations have experienced downtime from an attack. Not to mention, employees are largely unaware of how to thwart potential attacks and are often the reason breaches succeed. As an added layer of defense against attackers and human error, make sure your security protocols are up to date.
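Encrypting archives before they leave the machine, and proving they can be restored, can be illustrated with a short sketch. The example below uses the third-party cryptography package's Fernet recipe as one possible approach, not any particular product's method; the archive path is a placeholder, and in practice the key would live in a key management system rather than be generated inline.

```python
# Illustrative sketch: encrypt a backup archive before it leaves the machine and show
# that it can be restored. Uses the third-party "cryptography" package; the archive
# path is a hypothetical placeholder.
from pathlib import Path
from cryptography.fernet import Fernet

ARCHIVE = Path("/backups/nightly/files.tar.gz")  # hypothetical archive

def encrypt_archive(path: Path, key: bytes) -> Path:
    """Write an encrypted copy of the archive next to the original."""
    token = Fernet(key).encrypt(path.read_bytes())
    out = path.with_name(path.name + ".enc")
    out.write_bytes(token)
    return out

def restore_archive(enc_path: Path, key: bytes) -> bytes:
    """Decrypt the archive contents, proving a restore is possible."""
    return Fernet(key).decrypt(enc_path.read_bytes())

if __name__ == "__main__":
    key = Fernet.generate_key()   # in practice, keep this in a key management system
    encrypted = encrypt_archive(ARCHIVE, key)
    assert restore_archive(encrypted, key) == ARCHIVE.read_bytes()
    print("Encrypted copy written to", encrypted)
```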

Finally, while all organizations are unique, compliance regulations remain standard across the board. Organizations should examine whether their current backup solution meets the data storage requirements to remain in accordance with current laws.

Are You Managing Backup Internally or Externally?

IT leadership must examine the effectiveness of their solution management. IT departments should ask themselves: "Is the team of 'experts' we outsource our backup needs to meeting all of our recovery, security and compliance requirements?" or "Is this arrangement allowing our team to focus on other organizational needs?"

If your organization is managing its own data backup, your IT department should already understand whether it is taking away from their other day-to-day activities. If teams are strapped for time and resources, it may prove helpful to experiment with your options. This could mean bringing on additional internal or external team members to support data management. You might consider outsourcing the management of your backups to focus your department's precious resources on other high-priority tasks.

Remain Prepared Despite Uncertain Times

We are in uncertain times, with organizations operating in remote hyperdrive. The worst-case scenario right now is losing data, or access to it, at a moment when employees cannot go into the office if systems go down and IT teams are not as readily accessible as they once were.

By conducting a mid-year check-up on your systems now, you will save your organization from an unnecessary burden tomorrow.

Mike Fuhrman is COO for Cloud and Managed Services at Flexential