Modern Performant Applications Require Modern Storage

Gary Ogasawara
Cloudian

Modern, cloud-native applications have been steadily expanding beyond development environments to on-premises production workloads. For enterprises, one of the primary drivers for making this move has been to ensure performance and avoid the cost and complexity of moving large workloads to the cloud.

As a result, organizations require a modern storage foundation that can fully support cloud-native environments and the emerging technologies that are significant components of them, such as Kubernetes, serverless computing and microservices.

The following is an easy-to-follow checklist for building the ideal modern storage foundation:

1. S3 Compatibility

Complete S3 compatibility is critical for today's modern storage foundation as it ensures that applications developed for the public cloud can also work seamlessly on-premises. In addition, S3 compatibility simplifies and streamlines the ability to move applications and data across hybrid cloud environments.
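In practice, an application written against the standard AWS S3 SDK can usually be pointed at an S3-compatible, on-premises system simply by changing the endpoint URL and credentials. The sketch below uses Python and boto3; the endpoint, credentials and bucket name are hypothetical placeholders, not a specific product's values.

```python
import boto3

# Point the standard AWS SDK at an S3-compatible, on-premises endpoint.
# The endpoint URL, credentials and bucket name below are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example.internal",  # on-prem S3-compatible endpoint
    aws_access_key_id="LOCAL_ACCESS_KEY",
    aws_secret_access_key="LOCAL_SECRET_KEY",
)

# The same API calls used against public-cloud S3 work unchanged.
s3.put_object(Bucket="app-data", Key="reports/latest.json", Body=b'{"status": "ok"}')
obj = s3.get_object(Bucket="app-data", Key="reports/latest.json")
print(obj["Body"].read())
```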

2. Performance

High-level, predictable and scalable performance is a must for today's modern storage foundation. This includes the ability to rapidly complete a read or write operation (low latency), execute a substantial number of storage operations per second (IOPS), and provide high data throughput for storage and retrieval, measured in MB/s or GB/s.
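These metrics can be sampled from the client side with a simple test. The sketch below (Python/boto3, reusing the hypothetical client from the previous example) is illustrative only; meaningful benchmarking requires many parallel clients and representative object sizes.

```python
import time

def measure_put_get(s3, bucket, key, payload):
    """Roughly measure single-operation latency and throughput for one PUT and one GET."""
    start = time.perf_counter()
    s3.put_object(Bucket=bucket, Key=key, Body=payload)
    put_s = time.perf_counter() - start

    start = time.perf_counter()
    s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    get_s = time.perf_counter() - start

    mb = len(payload) / (1024 * 1024)
    print(f"PUT {put_s*1000:.1f} ms ({mb/put_s:.1f} MB/s), "
          f"GET {get_s*1000:.1f} ms ({mb/get_s:.1f} MB/s)")

# Example: a 64 MB object against the placeholder bucket from above.
# measure_put_get(s3, "app-data", "bench/object-64mb", b"x" * (64 * 1024 * 1024))
```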

3. Scalability

A modern storage foundation must be highly scalable across four dimensions:

■ Throughput scalability - the ability to drive higher throughput, i.e., process more data per second

■ Client scalability - the ability to increase the number of clients or users accessing the storage system

■ Capacity scalability - the ability to grow storage capacity in a single deployment of storage systems

■ Cluster scalability - the ability to grow a storage cluster by deploying additional components

4. Consistency

Consistency is another key element of modern storage. A storage system can be described as "consistent" if read operations promptly return the correct data after it's written, updated or deleted. If new data is immediately available for read operations by clients after it's been changed, the system is "strongly consistent." However, if there is a lag before read operations return the updated data, the system is only "eventually consistent." In that case, the read delay must be weighed against the recovery point objective (RPO), because that delay represents the maximum amount of data that could be lost if a component fails.
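The difference is easy to observe from a client: write an object, then immediately read it back and check whether the new value is returned. A strongly consistent system returns the updated data right away; an eventually consistent one may briefly return stale data or no data at all. The sketch below is a minimal read-after-write probe using the same hypothetical boto3 client as above.

```python
import time

def read_after_write_lag(s3, bucket, key, new_value, timeout_s=10.0):
    """Return how long (in seconds) it takes for a read to reflect a fresh write."""
    s3.put_object(Bucket=bucket, Key=key, Body=new_value)
    start = time.perf_counter()
    while time.perf_counter() - start < timeout_s:
        try:
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        except s3.exceptions.NoSuchKey:
            body = None  # an eventually consistent system may not see the key yet
        if body == new_value:
            return time.perf_counter() - start  # ~0 on a strongly consistent system
        time.sleep(0.05)
    raise TimeoutError("read did not return the updated data within the timeout")
```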

5. Durability

A modern storage foundation must be durable and protect against data loss. Truly durable platforms ensure that data can be safely stored for extended periods of time. This requires the inclusion of multiple layers of data protection (including support for numerous backup copies) and multiple levels of redundancy (such as local redundancy, redundancy over regions, redundancy over public cloud availability zones and redundancy to a remote site). To be truly durable, storage platforms must also be capable of identifying data corruption and automatically restoring or reconstructing that data. In addition, the specific storage media that comprise a cloud-native storage platform (e.g., SSDs, spinning disks and tapes) should be inherently physically resilient.
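Corruption detection can also be spot-checked from the client by comparing a locally computed checksum against what the storage system reports. The sketch below compares a local MD5 digest with the object's ETag; this equivalence only holds for single-part, non-KMS-encrypted uploads, and production durability relies on the platform's internal checksums, replication and erasure coding rather than ad hoc checks like this one.

```python
import hashlib

def verify_object(s3, bucket, key, local_path):
    """Spot-check an uploaded object by comparing a local MD5 digest with its ETag.

    Only valid for single-part uploads without SSE-KMS, where the ETag equals the
    MD5 of the object body; multipart uploads use a different ETag format.
    """
    with open(local_path, "rb") as f:
        local_md5 = hashlib.md5(f.read()).hexdigest()
    etag = s3.head_object(Bucket=bucket, Key=key)["ETag"].strip('"')
    return local_md5 == etag
```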

6. Deployability

Cloud-native apps are extremely portable and easily distributed across many locations. As a result, it's critical that the storage foundation supporting such apps be capable of being deployed or provisioned on demand. This requires a software-defined, scale-out approach, which enables organizations to immediately grow storage capacity without adding new systems. A storage architecture that leverages a single namespace is ideal here. Because such an architecture connects all nodes together in a peer-to-peer global data fabric, it's possible to add new nodes (and more capacity) on demand across any location using the existing infrastructure.

7. High Availability (HA)

A modern storage foundation must maintain and deliver uninterrupted access to data in the event of a failure, no matter where that failure occurs. To be considered highly available, storage systems should be able to heal and restore any failed components, maintain redundant data copies on a separate device and handle failover to redundant devices/components.
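From the application's point of view, high availability can also mean being able to retry against a redundant endpoint when the primary becomes unreachable. The sketch below is a simplified client-side failover wrapper; in practice this is usually handled by load balancers or the storage system itself, and both endpoints shown are hypothetical.

```python
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

# Hypothetical primary and secondary S3-compatible endpoints.
ENDPOINTS = ["https://storage-a.example.internal", "https://storage-b.example.internal"]

def get_with_failover(bucket, key):
    """Try each endpoint in order and return the object body from the first that responds."""
    last_error = None
    for endpoint in ENDPOINTS:
        try:
            s3 = boto3.client("s3", endpoint_url=endpoint)
            return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        except (EndpointConnectionError, ClientError) as exc:
            last_error = exc  # primary unreachable or failing; try the next endpoint
    raise last_error
```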

8. Security

Comprehensive end-to-end security is essential for modern storage. This includes encryption for data in flight and at rest; RBAC/IAM and SAML-based access controls; an integrated firewall; and certification against stringent government security requirements such as Common Criteria, the Federal Information Processing Standards (FIPS) and SEC Rule 17a-4(f). In addition, modern storage foundations should offer data immutability (i.e., ensure that data cannot be changed or deleted for a designated period of time) to protect data and operations from cyberattacks such as ransomware.
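As an illustration, the S3 API already exposes both server-side encryption and object immutability (Object Lock / WORM retention), which many S3-compatible systems implement. The sketch below, again using the hypothetical boto3 client from earlier, requests AES-256 server-side encryption and a compliance-mode retention period on a single object; the bucket must have been created with Object Lock enabled, and the names and retention period are placeholders.

```python
from datetime import datetime, timedelta, timezone

# Write an object with server-side encryption and a WORM retention period.
# Requires a bucket created with Object Lock enabled; names and dates are placeholders.
retain_until = datetime.now(timezone.utc) + timedelta(days=365)

s3.put_object(
    Bucket="compliance-data",
    Key="audit/records.csv",
    Body=b"record-1\nrecord-2\n",
    ServerSideEncryption="AES256",            # encryption at rest
    ObjectLockMode="COMPLIANCE",              # cannot be changed or deleted...
    ObjectLockRetainUntilDate=retain_until,   # ...until this date
)
```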

Gary Ogasawara is CTO at Cloudian
