Hewlett Packard Enterprise (HPE) announced a container-based software solution, HPE ML Ops, to support the entire machine learning model lifecycle for on-premises, public cloud and hybrid cloud environments.
The new solution introduces a DevOps-like process to standardize machine learning workflows and accelerate AI deployments from months to days.
The new HPE ML Ops solution extends the capabilities of the BlueData EPIC container software platform, providing data science teams with on-demand access to containerized environments for distributed AI/ML and analytics. BlueData was acquired by HPE in November 2018 to bolster its AI, analytics, and container offerings, and complements HPE’s Hybrid IT solutions and HPE Pointnext Services for enterprise AI deployments.
Enterprise AI adoption has more than doubled in the last four years, and organizations continue to invest significant time and resources in building machine learning and deep learning models for a wide range of AI use cases such as fraud detection, personalized medicine, and predictive customer analytics. However, the biggest challenge faced by technical professionals is operationalizing ML, also known as the “last mile,” to successfully deploy and manage these models, and unlock business value. According to Gartner, by 2021, at least 50 percent of machine learning projects will not be fully deployed due to lack of operationalization.
HPE ML Ops transforms AI initiatives from experimentation and pilot projects to enterprise-grade operations and production by addressing the entire machine learning lifecycle from data preparation and model building, to training, deployment, monitoring, and collaboration.
“Only operational machine learning models deliver business value,” said Kumar Sreekanti, SVP and CTO, Hybrid IT at HPE. “And with HPE ML Ops, we provide the only enterprise-class solution to operationalize the end-to-end machine learning lifecycle for on-premises and hybrid cloud deployments. We’re bringing DevOps speed and agility to machine learning, delivering faster time-to-value for AI in the enterprise.”
“From retail to banking to manufacturing to healthcare and beyond, virtually all industries are adopting or investigating AI/ML to develop innovative products and services and gain a competitive edge. While most businesses are ramping up on the build and train phase of their AI/ML projects, they are struggling to operationalize the entire ML lifecycle from PoC to pilot to production deployment and monitoring,” said Ritu Jyoti, Program VP, Artificial Intelligence (AI) Strategies at IDC. “HPE is closing this gap by addressing the entire ML lifecycle with its container-based, platform-agnostic offering – to support a range of ML operational requirements, accelerate the overall time to insights, and drive superior business outcomes.”
With the HPE ML Ops solution, data science teams involved in building and deploying ML models can benefit from the industry’s most comprehensive operationalization and lifecycle management solution for enterprise AI:
- Model Build: Pre-packaged, self-service sandbox environments for ML tools and data science notebooks
- Model Training: Scalable training environments with secure access to data
- Model Deployment: Flexible and rapid deployment with reproducibility
- Model Monitoring: End-to-end visibility across the ML model lifecycle
- Collaboration: Enable CI/CD workflows with code, model, and project repositories
- Security and Control: Secure multi-tenancy with integration to enterprise authentication mechanisms
- Hybrid Deployment: Support for on-premises, public cloud, or hybrid cloud
The HPE ML Ops solution works with a wide range of open source machine learning and deep learning frameworks including Keras, MXNet, PyTorch, and TensorFlow as well as commercial machine learning applications from ecosystem software partners such as Dataiku and H2O.ai.
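The announcement does not include code or a public API, but the reproducibility concern behind the "Model Deployment" capability above can be sketched in a framework-agnostic way. The following is a minimal illustration, not HPE ML Ops functionality: the function names (`package_model`, `verify_model`), the manifest fields, and the dict standing in for a trained model are all assumptions made for the example.

```python
# Generic sketch of reproducible model deployment (illustrative only,
# not an HPE ML Ops API): serialize a model artifact, fingerprint it,
# and refuse to load an artifact that no longer matches its manifest.
import hashlib
import json
import pickle

def package_model(model, metadata):
    """Serialize a model and record a content hash so a deployed
    artifact can be verified against the exact training output."""
    blob = pickle.dumps(model)
    fingerprint = hashlib.sha256(blob).hexdigest()
    manifest = dict(metadata, sha256=fingerprint)
    return blob, json.dumps(manifest, sort_keys=True)

def verify_model(blob, manifest_json):
    """Load a model only if its bytes match the manifest fingerprint."""
    manifest = json.loads(manifest_json)
    if hashlib.sha256(blob).hexdigest() != manifest["sha256"]:
        raise ValueError("model artifact does not match manifest")
    return pickle.loads(blob)

# Usage: a plain dict stands in for a trained model object.
model = {"weights": [0.1, 0.2], "bias": 0.05}
blob, manifest = package_model(
    model, {"name": "fraud-detector", "version": "1.0"})
restored = verify_model(blob, manifest)
assert restored == model
```

In a real pipeline the fingerprint would typically travel with the container image or model registry entry, so that monitoring and rollback can tie a running model back to the training run that produced it.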
HPE ML Ops is generally available now as a software subscription, together with HPE Pointnext Services and customer support.