Modern Performant Applications Require Modern Storage

Gary Ogasawara
Cloudian

Modern, cloud-native applications have been steadily expanding beyond development environments to on-premises production workloads. For enterprises, one of the primary drivers for making this move has been to ensure performance and avoid the cost and complexity of moving large workloads to the cloud.

As a result, organizations require a modern storage foundation that can fully support cloud-native environments and emerging technologies, such as Kubernetes, serverless computing and microservices, which are significant components of these environments.

The following is an easy-to-follow checklist for building the ideal modern storage foundation:

1. S3 Compatibility

Complete S3 compatibility is critical for today's modern storage foundation as it ensures that applications developed for the public cloud can also work seamlessly on-premises. In addition, S3 compatibility simplifies and streamlines the ability to move applications and data across hybrid cloud environments.
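In practice, that compatibility means an application written against the standard AWS SDK usually only needs its endpoint redirected to run against an on-premises S3-compatible system. The following is a minimal Python sketch using boto3; the endpoint URL, credentials and bucket name are hypothetical placeholders, not values tied to any specific product.

import boto3

# Point the standard AWS SDK at a hypothetical on-premises S3-compatible endpoint.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.storage.example.internal",  # placeholder on-prem endpoint
    aws_access_key_id="ACCESS_KEY",                      # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

# The same API calls the application already makes against public-cloud S3 work unchanged.
s3.put_object(Bucket="app-data", Key="reports/q1.json", Body=b'{"status": "ok"}')
obj = s3.get_object(Bucket="app-data", Key="reports/q1.json")
print(obj["Body"].read())

The later sketches in this checklist reuse this hypothetical client and bucket.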

2. Performance

High, predictable and scalable performance is a must for today's modern storage foundation. This includes the ability to complete individual read or write operations quickly, execute a substantial number of storage operations per second, and deliver high data throughput, measured in MB/s or GB/s, for storage and retrieval.
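All three measures can be sampled from the client side. The sketch below is a rough illustration only, reusing the hypothetical boto3 client and bucket from the previous example; it times a single write and derives per-operation latency and throughput, and is not a substitute for a proper benchmark.

import time

def measure_put(s3, bucket, key, payload):
    # Latency of a single write operation, in seconds.
    start = time.perf_counter()
    s3.put_object(Bucket=bucket, Key=key, Body=payload)
    elapsed = time.perf_counter() - start
    # Throughput of this one operation, in MB/s.
    mb_per_s = (len(payload) / 1_000_000) / elapsed
    return elapsed, mb_per_s

payload = b"x" * 8_000_000  # 8 MB test object
latency, throughput = measure_put(s3, "app-data", "bench/obj-0", payload)
print(f"latency: {latency * 1000:.1f} ms, throughput: {throughput:.1f} MB/s")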

3. Scalability

A modern storage foundation must be highly scalable across four dimensions:

■ Throughput scalability - the ability to sustain higher throughput, processing more data per second

■ Client scalability - the ability to increase the number of clients or users accessing the storage system (a brief client-scalability sketch follows this list)

■ Capacity scalability - the ability to grow storage capacity in a single deployment of storage systems

■ Cluster scalability - the ability to grow a storage cluster by deploying additional components
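Client scalability in particular can be sanity-checked from the application side by driving the same workload from a growing number of concurrent clients and watching whether aggregate operations per second keep rising. A minimal sketch, again reusing the hypothetical client and the test object written in the performance example:

import time
from concurrent.futures import ThreadPoolExecutor

def read_object(_):
    # Each task plays the role of one client issuing a read.
    s3.get_object(Bucket="app-data", Key="bench/obj-0")["Body"].read()

for clients in (1, 4, 16, 64):
    requests = clients * 4  # a few reads per simulated client
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=clients) as pool:
        list(pool.map(read_object, range(requests)))
    elapsed = time.perf_counter() - start
    print(f"{clients:>3} clients: {requests / elapsed:.1f} reads/s")

If reads per second plateau or fall as clients are added, the client-scalability limit (or a bottleneck elsewhere in the path) has been reached.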

4. Consistency

Consistency is another key element of modern storage. A storage system can be described as "consistent" if read operations promptly return the correct data after it's written, updated or deleted. If new data is immediately visible to all clients' read operations as soon as it's changed, the system is "strongly consistent." However, if there is a lag before read operations return the updated data, the system is only "eventually consistent." In that case, the length of the lag must be weighed against the recovery point objective (RPO), because data that has been written but not yet propagated represents the maximum potential data loss if a component fails.
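A quick read-after-write probe makes the distinction concrete. The sketch below, again using the hypothetical client and bucket from earlier, writes a value and immediately polls reads until it appears; on a strongly consistent system the first read should already return the new value, while an eventually consistent system may return stale or missing data for a short period.

import time

def read_after_write(s3, bucket, key, expected, attempts=10):
    # Write a new value, then poll reads until that value is visible.
    s3.put_object(Bucket=bucket, Key=key, Body=expected)
    for i in range(attempts):
        try:
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        except s3.exceptions.NoSuchKey:
            body = None  # object not yet visible at all
        if body == expected:
            return i  # number of stale reads observed before the write was visible
        time.sleep(0.1)
    raise RuntimeError("write never became visible within the polling window")

stale_reads = read_after_write(s3, "app-data", "consistency-probe", b"v2")
print(f"stale reads before the new value appeared: {stale_reads}")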

5. Durability

A modern storage foundation must be durable and protect against data loss. Truly durable platforms ensure that data can be safely stored for extended periods of time. This requires multiple layers of data protection (including support for numerous backup copies) and multiple levels of redundancy (such as local redundancy, redundancy across regions, redundancy across public cloud availability zones and redundancy to a remote site). To be truly durable, storage platforms must also be capable of identifying data corruption and automatically restoring or reconstructing that data. In addition, the specific storage media that make up a cloud-native storage platform (e.g., SSDs, spinning disks and tapes) should be inherently physically resilient.
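End-to-end integrity checking is one piece of this that applications can also participate in. The following simplified sketch, reusing the hypothetical client, records a SHA-256 checksum as object metadata on write and verifies it on read, so silent corruption can at least be detected; repairing or reconstructing the data is then the storage platform's job via its replication or erasure coding.

import hashlib

def put_with_checksum(s3, bucket, key, data):
    # Store a SHA-256 checksum alongside the object as user-defined metadata.
    digest = hashlib.sha256(data).hexdigest()
    s3.put_object(Bucket=bucket, Key=key, Body=data, Metadata={"sha256": digest})

def get_and_verify(s3, bucket, key):
    # Read the object back and compare against the stored checksum.
    obj = s3.get_object(Bucket=bucket, Key=key)
    data = obj["Body"].read()
    if hashlib.sha256(data).hexdigest() != obj["Metadata"].get("sha256"):
        raise IOError(f"checksum mismatch for {key}: object may be corrupted")
    return data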

6. Deployability

Cloud-native apps are extremely portable and easily distributed across many locations. As a result, it's critical that the storage foundation supporting such apps can be deployed or provisioned on demand. This requires a software-defined, scale-out approach, which enables organizations to grow storage capacity immediately without standing up a separate storage system. A storage architecture that leverages a single namespace is ideal here. Because such an architecture connects all nodes in a peer-to-peer global data fabric, new nodes (and more capacity) can be added on demand in any location using the existing infrastructure.

7. High Availability (HA)

A modern storage foundation must maintain and deliver uninterrupted access to data in the event of a failure, no matter where that failure occurs. To be considered highly available, storage systems should be able to heal and restore any failed components, maintain redundant data copies on a separate device and handle failover to redundant devices/components.
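From the application's point of view, high availability also depends on clients that ride through brief failovers instead of surfacing every transient error. One minimal example is to lean on the SDK's built-in retry policy, assuming the same hypothetical endpoint as before:

import boto3
from botocore.config import Config

# Retry transient failures automatically; "standard" mode adds exponential backoff.
retry_config = Config(retries={"max_attempts": 10, "mode": "standard"})

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.storage.example.internal",  # placeholder on-prem endpoint
    config=retry_config,
)

Healing failed components, maintaining redundant copies and failing over across nodes or sites remain the storage platform's responsibility; the client-side policy simply keeps applications from erroring out while a failover completes.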

8. Security

Comprehensive end-to-end security is essential for modern storage. This includes encryption for data in flight and at rest, RBAC/IAM and SAML-based access controls, an integrated firewall, and certification against stringent government security requirements such as Common Criteria, the Federal Information Processing Standards (FIPS) and SEC Rule 17a-4(f). In addition, modern storage foundations should offer data immutability (i.e., ensure the data cannot be changed, altered or deleted for a designated period of time) to protect data and operations from cyberattacks such as ransomware.
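Immutability is commonly exposed through the S3 Object Lock API. The sketch below shows the general shape of a WORM (write-once-read-many) retention setting, reusing the hypothetical client; whether Object Lock and server-side encryption are supported, and in which modes, depends on the specific storage platform.

from datetime import datetime, timedelta, timezone

# Object Lock must be enabled when the bucket is created (bucket name is a placeholder).
s3.create_bucket(Bucket="worm-archive", ObjectLockEnabledForBucket=True)

# Write an object that cannot be altered or deleted until the retention date passes.
s3.put_object(
    Bucket="worm-archive",
    Key="audit/2024-01.log",
    Body=b"...",                          # placeholder payload
    ServerSideEncryption="AES256",        # encryption at rest for this object
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=365),
)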

Gary Ogasawara is CTO at Cloudian
