Improving Application Performance with NVMe Storage - Part 3

NVMe Storage Use Cases and Summary: Benefits of NVMe storage for AI/ML
Zivan Ori

Start with Part 1: The Rise of AI and ML Driving Parallel Computing Requirements

Continue with Part 2: Local versus Shared Storage for Artificial Intelligence (AI) and Machine Learning (ML)

NVMe Storage Use Cases

The performance of NVMe storage, combined with the capacity and data availability advantages of shared NVMe storage over local SSDs, makes it a compelling solution for AI/ML infrastructures of any size. Several AI/ML-focused use cases are worth highlighting.

■ Financial Analytics – Financial services and financial technology (FinTech) firms are increasingly turning to automation and artificial intelligence to drive their investment decision-making. Using a mix of historical data and financial modeling, a single platform can provide the horsepower required to predict future investment strategies for its financial customers.

■ Image Recognition in Manufacturing – Manufacturers have long used automation on their production lines to increase the output capacity of their production systems, scaling from hundreds of units to thousands or even millions of units per hour. The financial impact of a quality issue on the production line can be devastating if it is not caught in a timely manner. Real-time image recognition of photos of manufactured parts is essential for determining whether a part meets the required quality standards and for catching systematic quality issues as they occur.

■ Car Services – Ride-sharing apps have given rise to a new paradigm in transportation, allowing riders and drivers to connect quickly and easily as needed. Ride-sharing companies use AI/ML for traffic modeling to position drivers where they are most needed based on both past and current ride requests. This increases drivers' potential revenue by reducing drive times and improves customer satisfaction through shorter wait times, both of which boost the revenue potential of the ride-sharing company.

Beyond AI/ML, one vendor also provides more generalized computing services to its customers. It supplies storage capacity for cloud services, using OpenStack and Kubernetes in conjunction with NVMe storage for high-performance volumes. It also leverages NVMe storage for big data analytics, running Spark applications that perform tasks such as SQL queries, data mining and more.
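
As an illustration of that last point, the following is a minimal PySpark sketch of the kind of SQL-style analytics task such Spark applications might run against data kept on an NVMe-backed volume. The application name, dataset path and column names are hypothetical, not taken from any particular deployment.

from pyspark.sql import SparkSession

# Minimal sketch: SQL-style analytics over Parquet files stored on an
# NVMe-backed volume. The mount point and column names are hypothetical.
spark = SparkSession.builder.appName("nvme-analytics").getOrCreate()

trades = spark.read.parquet("/mnt/nvme/trades")  # hypothetical dataset path
trades.createOrReplaceTempView("trades")

# Example aggregation of the kind a SQL or data-mining workload might run.
daily_avg = spark.sql(
    "SELECT symbol, trade_date, AVG(price) AS avg_price "
    "FROM trades GROUP BY symbol, trade_date"
)
daily_avg.show()

spark.stop()

The heavier the scan, the more the job benefits from the underlying volume sustaining high parallel read throughput rather than becoming the bottleneck.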

Summary: Benefits of NVMe storage for AI/ML

NVMe storage is an ideal solution for a wide range of AI/ML workloads, and for machine learning training in particular. With NVMe storage, you can:

■ Create and manage larger shared datasets for training – By separating storage capacity from the compute nodes, datasets for machine learning training can scale up to 1PB. As the dataset grows and more NVMe storage is brought online, performance grows as well rather than being limited by legacy storage controller bottlenecks.

■ Overcome the capacity limitations of local SSDs in GPU nodes – With little physical space for SSD media, GPU nodes cannot hold larger datasets on their own. With shared NVMe storage, NVMe volumes can be dynamically provisioned over high-performance Ethernet or InfiniBand networks (see the sketch after this list).

■ Accelerate machine learning epoch times by as much as 10x – By leveraging high-performance NVMe-oF, NVMe storage eliminates the latency bottlenecks of older storage protocols and unleashes the parallelism inherent in the NVMe protocol. Every GPU node has direct, parallel access to the media at the lowest possible latency.

■ Improve the utilization of GPUs – GPUs sitting idle while waiting for data is costly. By offloading storage access to otherwise idle CPUs and delivering storage performance at the speed of local SSDs, NVMe storage keeps GPU nodes busy with fast access to data.
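
To make the dynamic provisioning point concrete, here is a minimal Python sketch of how a GPU node might attach a shared NVMe-oF namespace using the standard nvme-cli tool. The transport, target address, port and subsystem NQN are placeholders, and the wrapper function is hypothetical rather than part of any particular vendor's product.

import subprocess

# Minimal sketch: attach a shared NVMe-oF namespace to a GPU node with
# nvme-cli. Address, port and NQN below are placeholder values; a real
# deployment would obtain them from its provisioning or orchestration layer.
TARGET_ADDR = "192.168.0.10"   # hypothetical NVMe-oF target address
TARGET_PORT = "4420"           # conventional NVMe-oF service port
SUBSYS_NQN = "nqn.2019-01.example:training-data"  # hypothetical subsystem NQN

def connect_nvmeof_volume(transport: str = "rdma") -> None:
    """Discover and attach an NVMe-oF namespace over RDMA (or TCP)."""
    # Query the target for the subsystems it exports.
    subprocess.run(
        ["nvme", "discover", "-t", transport, "-a", TARGET_ADDR, "-s", TARGET_PORT],
        check=True,
    )
    # Attach the namespace; it then shows up as a local /dev/nvmeXnY device.
    subprocess.run(
        ["nvme", "connect", "-t", transport, "-n", SUBSYS_NQN,
         "-a", TARGET_ADDR, "-s", TARGET_PORT],
        check=True,
    )

if __name__ == "__main__":
    connect_nvmeof_volume()

Once connected, the remote namespace appears to the GPU node as an ordinary local NVMe block device, which is what lets training jobs read shared data at near-local-SSD latency.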
