Modern Performant Applications Require Modern Storage
February 17, 2022

Gary Ogasawara
Cloudian


Modern, cloud-native applications have been steadily expanding beyond development environments to on-premises production workloads. For enterprises, one of the primary drivers for making this move has been to ensure performance and avoid the cost and complexity of moving large workloads to the cloud.

As a result, organizations require a modern storage foundation that can fully support cloud-native environments and the emerging technologies at their core, such as Kubernetes, serverless computing and microservices.

The following is an easy-to-follow checklist for building the ideal modern storage foundation:

1. S3 Compatibility

Complete S3 compatibility is critical for today's modern storage foundation as it ensures that applications developed for the public cloud can also work seamlessly on-premises. In addition, S3 compatibility simplifies and streamlines the ability to move applications and data across hybrid cloud environments.
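In practice, S3 compatibility means that standard S3 tooling and SDK calls work against the on-premises endpoint unchanged. Below is a minimal sketch using the boto3 SDK; the endpoint URL, credentials and bucket name are placeholders for illustration, not real values.

# Point a standard S3 client at an on-premises, S3-compatible endpoint (sketch).
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.storage.example.internal",  # hypothetical on-prem endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# The same calls an application would make against the public cloud
s3.put_object(Bucket="app-data", Key="reports/2022-02.json", Body=b'{"status": "ok"}')
obj = s3.get_object(Bucket="app-data", Key="reports/2022-02.json")
print(obj["Body"].read())

The only things that change relative to a public-cloud configuration are the endpoint URL and credentials; the application code itself stays the same.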

2. Performance

High, predictable and scalable performance is a must for today's modern storage foundation. This includes the ability to complete individual read or write operations quickly (low latency), execute a large number of storage operations per second, and deliver high data throughput, measured in MB/s or GB/s, for both storage and retrieval.
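Latency, operations per second and throughput can each be measured independently. The following rough, single-stream sketch reuses the s3 client and app-data bucket assumed in the earlier example; it is illustrative only, not a benchmark.

# Measure per-operation latency and single-stream throughput (sketch).
import time

payload = b"x" * (8 * 1024 * 1024)  # 8 MB test object

start = time.perf_counter()
s3.put_object(Bucket="app-data", Key="bench/obj-0", Body=payload)
write_s = time.perf_counter() - start

start = time.perf_counter()
s3.get_object(Bucket="app-data", Key="bench/obj-0")["Body"].read()
read_s = time.perf_counter() - start

print(f"write: {write_s:.3f}s, {len(payload) / write_s / 1e6:.1f} MB/s")
print(f"read:  {read_s:.3f}s, {len(payload) / read_s / 1e6:.1f} MB/s")

Measuring sustained operations per second would additionally require many concurrent clients, which is where the scalability dimensions below come in.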

3. Scalability

A modern storage foundation must be highly scalable across four dimensions:

■ Throughput scalability - the ability to process more data per second as workloads grow

■ Client scalability - the ability to increase the number of clients or users accessing the storage system

■ Capacity scalability - the ability to grow storage capacity in a single deployment of storage systems

■ Cluster scalability - the ability to grow a storage cluster by deploying additional components

4. Consistency

Consistency is another key element of modern storage. A storage system can be described as "consistent" if read operations promptly return the correct data after it's written, updated or deleted. If new data is immediately available for read operations by clients after it's been changed, the system is "strongly consistent." However, if there is a lag before read operations return the updated data, the system is only "eventually consistent." In this case, the read delay must be weighed against the recovery point objective (RPO), which defines the maximum amount of data loss that can be tolerated in the event of a component failure.
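The difference can be observed with a simple read-after-write check: write an object, then immediately read it back and compare. On a strongly consistent system the check passes right away; on an eventually consistent one it may transiently fail. A sketch, again assuming the s3 client from above:

# Read-after-write consistency check (sketch).
from botocore.exceptions import ClientError

data = b"version-2"
s3.put_object(Bucket="app-data", Key="config/settings", Body=data)

try:
    latest = s3.get_object(Bucket="app-data", Key="config/settings")["Body"].read()
    print("strong consistency" if latest == data else "stale read (eventual consistency)")
except ClientError:
    print("object not yet visible (eventual consistency)")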

5. Durability

A modern storage foundation must be durable and protect against data loss. Truly durable platforms ensure that data can be safely stored for extended periods of time. This requires multiple layers of data protection (including support for numerous backup copies) and multiple levels of redundancy (such as local redundancy, redundancy across regions, across public cloud availability zones and to a remote site). To be truly durable, storage platforms must also be capable of identifying data corruption and automatically restoring or reconstructing that data. In addition, the specific storage media that make up a cloud-native storage platform (e.g., SSDs, spinning disks and tapes) should be inherently physically resilient.
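One durability building block that applications can verify end to end is integrity checking on stored data. The sketch below compares a locally computed MD5 digest with the stored object's ETag; note that this equality only holds for single-part uploads without SSE-KMS, and the bucket and key are placeholders.

# End-to-end integrity check (sketch): local MD5 vs. stored ETag.
import hashlib

data = b"payload that must not silently corrupt"
local_md5 = hashlib.md5(data).hexdigest()

s3.put_object(Bucket="app-data", Key="archive/item-42", Body=data)
etag = s3.head_object(Bucket="app-data", Key="archive/item-42")["ETag"].strip('"')

if etag == local_md5:
    print("stored copy matches local checksum")
else:
    print("checksum mismatch -- repair from a redundant copy")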

6. Deployability

Cloud-native apps are extremely portable and easily distributed across many locations. As a result, it's critical that the storage foundation supporting such apps can be deployed or provisioned on demand. This requires a software-defined, scale-out approach, which enables organizations to immediately grow storage capacity without standing up entirely new, separate systems. A storage architecture that leverages a single namespace is ideal here. Because such an architecture connects all nodes together in a peer-to-peer global data fabric, it's possible to add new nodes (and more capacity) on demand across any location using the existing infrastructure.

7. High Availability (HA)

A modern storage foundation must maintain and deliver uninterrupted access to data in the event of a failure, no matter where that failure occurs. To be considered highly available, a storage system should be able to heal and restore failed components, maintain redundant copies of data on separate devices and fail over to those redundant devices or components.
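From the application's point of view, high availability usually surfaces as transparent failover between redundant endpoints. A minimal client-side sketch, assuming two hypothetical endpoints serving the same namespace and credentials configured in the environment:

# Client-side failover across redundant S3-compatible endpoints (sketch).
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

ENDPOINTS = [
    "https://s3-site-a.storage.example.internal",
    "https://s3-site-b.storage.example.internal",
]

def read_object(bucket, key):
    last_error = None
    for url in ENDPOINTS:
        client = boto3.client("s3", endpoint_url=url)
        try:
            return client.get_object(Bucket=bucket, Key=key)["Body"].read()
        except (ClientError, EndpointConnectionError) as exc:
            last_error = exc  # endpoint unavailable; try the next one
    raise last_error

data = read_object("app-data", "reports/2022-02.json")

A truly highly available platform handles this failover inside the storage layer so that clients never need logic like this, but the sketch shows the behavior that must be guaranteed.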

8. Security

Comprehensive end-to-end security is essential for modern storage. This includes encryption for data in flight and at rest, RBAC/IAM and SAML access controls, an integrated firewall and certification against stringent government security standards such as Common Criteria, Federal Information Processing Standard (FIPS) and SEC Rule 17a-4(f). In addition, modern storage foundations should offer data immutability (i.e., ensuring that data cannot be changed or deleted for a designated period of time) to protect data and operations from cyberattacks such as ransomware.
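Two of these controls, encryption at rest and immutability, can be requested per object through the standard S3 API. The sketch below writes an object with server-side encryption and an S3 Object Lock compliance-mode retention period; it assumes a hypothetical bucket created with Object Lock enabled, the s3 client from earlier, and an illustrative 90-day retention.

# Write an encrypted, immutable (WORM) object (sketch).
from datetime import datetime, timedelta, timezone

s3.put_object(
    Bucket="immutable-backups",                 # placeholder bucket with Object Lock enabled
    Key="db/backup-2022-02-17.dump",
    Body=b"...backup bytes...",                 # placeholder payload
    ServerSideEncryption="AES256",              # encryption at rest
    ObjectLockMode="COMPLIANCE",                # cannot be altered or deleted...
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),  # ...until this date
)

Until the retain-until date passes, delete and overwrite requests for this object version are rejected, which is what blunts ransomware attempts to encrypt or destroy stored copies.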

Gary Ogasawara is CTO at Cloudian