NetApp Expands Intelligent Data Infrastructure Capabilities to Handle Strategic Cloud Workloads

NetApp announced new capabilities designed for strategic cloud workloads, including GenAI and VMware environments.

These enhancements to NetApp data and storage services reduce the resources and risk customers need to manage these strategic workloads across increasingly complex hybrid multicloud environments.

“Strategic workloads, including GenAI and virtualized environments, are driving business innovation and have increasingly complex and resource-intensive infrastructure requirements that are pushing IT teams to the limit,” said Pravjit Tiwana, SVP and GM, Cloud Storage at NetApp. “NetApp is helping customers take back control of their data with intelligent data infrastructure that leverages unified data storage, integrated data services, and automated cloud operations. Even when they are up against specific and nuanced technology requirements for modern workloads, NetApp gives them the tools they need to optimize and simplify their data operations in their environments across the hybrid multicloud.”

To advance intelligent data infrastructure deployments that better support strategic workloads like GenAI and VMware environments, NetApp is announcing new capabilities, including:

- NetApp BlueXP Workload Factory for AWS: This intelligent data infrastructure service uses defined industry best practices to automate the planning, provisioning, and management of cloud resources and services for key workloads, including GenAI, VMware cloud environments, and enterprise databases. Customers can use BlueXP Workload Factory to optimize deployment time, cost, performance, and protection for strategic workloads and their associated data. To simplify workload migrations to the cloud, BlueXP Workload Factory lets users profile infrastructure requirements for target workloads and compare resource options against cost and performance needs. The service can then provision the selected resources, move existing workload data to the newly provisioned cloud deployments, and continually optimize the environment to keep it within the required cost and performance targets. AWS users can find deployment guidance in the AWS Solutions Library.

- NetApp GenAI Toolkit – Microsoft Azure NetApp Files Version: Customers can now include private enterprise data stored in Azure NetApp Files in their retrieval-augmented generation (RAG) workflows in a secure, programmatic manner. The result is an enhanced ability to generate unique, high-quality, highly relevant results from GenAI projects by combining proprietary data with pre-trained foundation models (FMs). Integrating the NetApp GenAI Toolkit with Azure NetApp Files gives customers a direct path to advanced language generation grounded in their own data.

- Amazon Bedrock with Amazon FSx for NetApp ONTAP Reference Architecture: Amazon Web Services (AWS) and NetApp have released a joint reference architecture that guides customers in implementing RAG-enabled workflows that bring proprietary data stored on Amazon FSx for NetApp ONTAP into their GenAI data pipelines. Amazon FSx makes it easy and cost-effective to launch, run, and scale feature-rich, high-performance file systems in the cloud. The reference architecture shows developers how to use the Amazon Bedrock APIs to connect to Amazon FSx for ONTAP data stores, enabling the secure use of proprietary data with a choice of high-performing FMs that can be customized to unlock new insights and capabilities.

- Amazon FSx for NetApp ONTAP Enhancements: AWS announced the next generation of the Amazon FSx for ONTAP cloud storage service, with enhanced scalability and flexibility that delivers up to 6 GB per second of throughput for a single highly available (HA) pair backed by 512 TiB of SSD storage. Next-generation file systems give virtualized workloads more room to grow, with a 300 percent increase in network burst throughput and a 150 percent boost in disk burst throughput. For large-scale, high-performance workloads like GenAI, second-generation Amazon FSx for ONTAP systems scale out by adding HA pairs as needed, up to 24 nodes (12 HA pairs), delivering up to 72 GB per second of throughput from 1 PiB of SSD storage for evolving business needs.

- NetApp BlueXP Disaster Recovery Support for VMFS: The BlueXP disaster recovery service, which provides guided workflows to design and execute automated disaster recovery plans for VMware workloads across both on-premises and cloud environments, has been expanded to support VMFS datastores for on-premises to on-premises disaster recovery.
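The RAG workflows described above all follow the same basic pattern: retrieve relevant passages from private enterprise data, then hand them to a foundation model alongside the user's question. As a minimal, vendor-neutral sketch of that pattern (the documents and keyword-overlap scoring below are illustrative stand-ins, not the NetApp GenAI Toolkit's actual API; production systems use embeddings and a vector index):

```python
# Minimal sketch of the retrieval-augmentation step in a RAG pipeline.
# In practice the documents would be read from a mounted file share
# (e.g. Azure NetApp Files or FSx for ONTAP); here they are plain strings.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the augmented prompt sent to the foundation model."""
    joined = "\n---\n".join(context)
    return (
        "Use only the context below to answer.\n\n"
        f"Context:\n{joined}\n\nQuestion: {query}"
    )

docs = [
    "Q3 revenue grew 12 percent, driven by cloud storage subscriptions.",
    "The cafeteria menu rotates weekly.",
]
question = "What drove revenue growth?"
prompt = build_prompt(question, retrieve(question, docs, top_k=1))
```

The key property, regardless of vendor, is that only retrieved private data enters the prompt; the foundation model itself is never retrained on it.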
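For the Bedrock reference architecture, the final step is invoking a foundation model through the Bedrock runtime API with the retrieved context. A hedged sketch of the request assembly (the retrieved text, question, and model ID are illustrative, not taken from the reference architecture; the `anthropic_version` field is required when calling Claude models on Bedrock):

```python
import json

# Sketch: assembling an Amazon Bedrock invoke_model request that grounds a
# Claude model in text retrieved from an FSx for ONTAP share. The context
# string below is a placeholder for actual retrieved file data.
retrieved_context = (
    "Design doc excerpt: the ingest service batches writes every 5 seconds."
)
question = "How often does the ingest service batch writes?"

request_body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",  # required for Claude on Bedrock
    "max_tokens": 512,
    "messages": [{
        "role": "user",
        "content": (
            f"Answer using only this context:\n{retrieved_context}\n\n"
            f"Question: {question}"
        ),
    }],
})

# With AWS credentials configured, the call would look like:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(
#       modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
#       body=request_body,
#   )
#   answer = json.loads(resp["body"].read())["content"][0]["text"]
```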
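The FSx for ONTAP scale-out figures above are internally consistent: each HA pair delivers up to 6 GB per second, and a 24-node file system comprises 12 two-node HA pairs, giving 12 × 6 = 72 GB per second. A quick sanity check:

```python
# Sanity-check the scale-out throughput figures quoted above.
GBPS_PER_HA_PAIR = 6  # up to 6 GB/s per highly available (HA) pair
NODES_PER_PAIR = 2    # an ONTAP HA pair is two nodes

def max_throughput_gbps(nodes: int) -> int:
    """Aggregate throughput for a scale-out FSx for ONTAP file system."""
    ha_pairs = nodes // NODES_PER_PAIR
    return ha_pairs * GBPS_PER_HA_PAIR

print(max_throughput_gbps(24))  # 12 HA pairs -> 72 GB/s
```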

These updates build on NetApp’s existing offerings that support storage and data operations for customers who need to implement and manage high-powered, strategic workloads such as GenAI and VMware environments. For example, NetApp recently announced that its BlueXP data classification capability, which automatically classifies and categorizes data for enhanced governance and secure ingest into GenAI and RAG data pipelines, is now a core control-plane capability available free of charge to all NetApp customers.

