Improving Application Performance with NVMe Storage - Part 1
The Rise of AI and ML Driving Parallel Computing Requirements
April 29, 2019

Zivan Ori
E8 Storage


As computing technology and data algorithms have advanced over the years, the ways in which technology has been applied to real-world challenges have grown more automated and autonomous. This has given rise to a completely new set of computing workloads for Machine Learning, which drives Artificial Intelligence applications (aka AI / ML).

AI / ML can be applied across a broad spectrum of applications and industries. Financial analysis with real-time analytics is used for predicting investments and drives the FinTech industry's need for high performance computing. Real-time image recognition is a key enabler for self-driving vehicles, while facial recognition is used by law enforcement across the globe. Manufacturing uses image recognition technology to spot defects in materials, organizations such as NOAA use satellite imagery to spot changes in weather, while social media platforms use image recognition to tag photos of friends and family.

What is common among these use cases is the need for a high level of parallel computing power, coupled with a high-performance, low latency architecture to enable parallel processing of data in real time across the compute cluster. The "training" phase of machine learning is critical and can take an excessively long time, especially as the training data set grows exponentially to enable deep learning for AI.

With storage performance now recognized as a critical component of AI/ML application performance, the next step is to identify the ideal storage platform. Non-Volatile Memory Express (NVMe) based storage systems have gained traction as the storage media of choice to deliver the best throughput and latency. Shared NVMe storage systems unlock the performance of NVMe, and offer a strong alternative to using local NVMe SSDs inside of GPU nodes.

The Rise of GPUs for AI / ML

GPUs were originally created for high performance image creation, and are very efficient at manipulating computer graphics and image processing. Their highly parallel structure makes them much more efficient than general purpose CPUs for algorithms where the processing of large blocks of data is done in parallel. For this reason, GPUs have found strong adoption in the AI / ML use case: they allow for a high degree of parallel computing, and current AI-focused applications have been optimized to run on GPU-based computing clusters.

With the powerful compute performance of GPUs, the bottleneck moves to other areas of the AI / ML architecture. For example, the volume of data required to feed machine learning requires massive parallel read access to shared files from the storage subsystem across all nodes in the GPU cluster. This creates a performance challenge that NVMe shared storage systems are ideally suited to address.
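To illustrate the access pattern described above, here is a minimal, hypothetical sketch of many workers reading disjoint shards of a shared dataset in parallel, the way nodes in a GPU cluster would each pull their slice of a shared volume during training. The shard function and worker count are illustrative assumptions, not part of any real product.

```python
# Hypothetical sketch: parallel shard reads from a shared dataset.
# Each "worker" stands in for one GPU node reading its own slice.
from concurrent.futures import ThreadPoolExecutor

def read_shard(shard_id, num_shards, dataset):
    # Each worker reads only its interleaved slice of the shared data.
    return dataset[shard_id::num_shards]

def parallel_load(dataset, num_workers=4):
    # Issue all shard reads concurrently, as nodes sharing one volume would.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        futures = [pool.submit(read_shard, i, num_workers, dataset)
                   for i in range(num_workers)]
        return [f.result() for f in futures]

# Every record is read exactly once, across all workers in parallel.
shards = parallel_load(list(range(100)), num_workers=4)
```

In a real cluster the reads would hit the storage subsystem rather than an in-memory list, which is exactly where the parallel read bandwidth of shared NVMe storage matters.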

Shared NVMe Storage for High Performance Machine Learning (ML)

One of the benefits of shared NVMe storage is the ability to create even deeper neural networks due to the inherent high performance of shared storage, opening the door for future models that cannot be achieved today with non-shared NVMe storage solutions.

Today, there are storage solutions that offer patented architectures built from the ground up to leverage NVMe. The key to performance and scalability is the separation of control and data path operations between the storage controller software and the host-side agents. The storage controller software provides centralized control and management, while the agents manage data path operations with direct access to shared storage volumes.
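The control/data path split can be sketched roughly as follows. This is a simplified, hypothetical model (class names, the mapping structure, and the caching behavior are all assumptions for illustration, not vendor code): the controller is consulted only to resolve where a volume lives, after which the agent performs I/O directly against the drives.

```python
# Hedged sketch of control/data path separation (all names hypothetical).
# Control path: centralized metadata lookups. Data path: direct drive access.

class Controller:
    """Centralized control path: owns the volume-to-drive mapping."""
    def __init__(self, layout):
        self.layout = layout              # e.g. {"vol1": "nvme0"}

    def resolve(self, volume):
        return self.layout[volume]        # metadata only; no data is moved

class HostAgent:
    """Host-side data path: reads drives directly once the mapping is known."""
    def __init__(self, controller, drives):
        self.controller = controller
        self.drives = drives              # e.g. {"nvme0": {block: bytes}}
        self.mapping_cache = {}           # avoid the controller on the hot path

    def read(self, volume, block):
        if volume not in self.mapping_cache:          # control path: first touch only
            self.mapping_cache[volume] = self.controller.resolve(volume)
        drive = self.mapping_cache[volume]
        return self.drives[drive][block]              # data path: direct access

drives = {"nvme0": {0: b"weights"}}
agent = HostAgent(Controller({"vol1": "nvme0"}), drives)
data = agent.read("vol1", 0)
```

The design point is that the centralized component is off the hot path: once an agent holds the mapping, every subsequent I/O bypasses the controller entirely.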

While AI / ML workloads are run exclusively on the GPUs within the cluster, that doesn't mean that CPUs have been eliminated from the GPU clusters completely. The operating system and drivers still leverage the CPUs, but while the machine learning training is in progress, the CPU is relatively idle. This provides the perfect opportunity for an NVMe based storage architecture to leverage the idle CPU computing capacity for a high performance distributed storage approach.

With the NVMe protocol supporting exponentially more connections per SSD, the storage agents use RDMA to give each GPU node a direct connection to the drives. This approach enables the agents to perform up to 90% of the data path operations between the GPU nodes and storage, reducing latency to be on par with local SSDs.
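A back-of-the-envelope model (my own illustration, not figures from the vendor) shows why the direct-access share grows so large: if the controller is consulted once per volume while every I/O thereafter goes straight to the drive, the fraction of operations on the direct data path approaches 100% as I/O counts rise.

```python
# Illustrative arithmetic only: one control-path lookup per volume,
# then every I/O goes directly to the drive over the data path.

def data_path_fraction(num_volumes, ios_per_volume):
    control_ops = num_volumes                  # one mapping lookup per volume
    data_ops = num_volumes * ios_per_volume    # all actual reads/writes
    return data_ops / (control_ops + data_ops)

# With 10 volumes and 100 I/Os each, ~99% of operations bypass the controller,
# comfortably above the 90% figure cited in the article.
fraction = data_path_fraction(10, 100)
```

The exact ratio depends on workload, but the shape of the argument holds: control traffic is per-volume while data traffic is per-I/O, so direct operations dominate.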

In this scenario, running the NVMe based storage agent on the idle CPU cores of the GPU nodes enables the NVMe based storage to deliver 10x better performance than competing all-flash solutions, while leveraging existing compute resources that are already installed and available to use.

Read Part 2: Local versus Shared Storage for Artificial Intelligence (AI) and Machine Learning (ML)

Zivan Ori is CEO and Co-Founder of E8 Storage
