
Juniper Releases Predictive Insights

Juniper unveiled Predictive Insights, empowering operators to see around corners and fix issues before they disrupt the business. 

The first three Predictive Insights applications are System Health, Capacity, and Optics, addressing some of the most common challenges arising in dynamic network environments. With these applications, operations teams can predict when a switch will fail due to a processor or memory problem, when fabric expansion will be needed due to traffic growth, or when an optical module is about to fail and take down a link in the fabric. These insights enable proactive actions, such as adding leaf switches, rerouting traffic, or replacing an optical module, to assure continued high application availability and performance. 

The System Health application is available now, while the Capacity and Optics applications are expected to be available in Q3 of 2025.  

Juniper also announced important additions to their Application Awareness capabilities, first launched in 2024. An expanded integration with VMware products provides visibility into Virtual Machines (VMs) and, more importantly, the applications running on them, and integrates alarms from VMware’s vCenter. Adding this information into Juniper’s powerful network graph database, which already includes multilayer visibility of the network fabric and application flows, results in even greater insights and faster troubleshooting. Application layer alarms can be correlated with network events and alerts, enhancing the ability of NetOps and DevOps teams to collaboratively and rapidly find and fix the root cause of any application problem, whether it is in the network or application layer.  

Another valuable addition is a new visualization tool, called a “Sunburst” graph, that synthesizes all the related information about network anomalies into a single visual representation. The root cause of a problem is identified at the center, with correlated network symptoms in concentric circles around it, and correlated application impacts on the outer ring. This visualization, enabled by the Juniper graph database and AI-native root cause inference, provides powerful insights that enable operators to quickly determine the impacts of any issue and the right actions required to restore normal network operation and application experience. Competitive products drown operators with data, while Juniper delivers clarity, pinpointing root cause and solution in seconds based on understanding the full context of data center operations and the relationships among nodes.  
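To make the idea concrete, the hierarchy behind a sunburst view can be sketched as a small nested structure: root cause at the center, correlated symptoms in the middle ring, and application impacts on the outer ring. All event names and fields below are illustrative assumptions, not Juniper's actual data model.

```python
# Hypothetical anomaly record: the shape of data a sunburst view renders.
# Ring 0 = root cause, ring 1 = correlated network symptoms,
# ring 2 = correlated application impacts.
anomaly = {
    "root_cause": "leaf3: optics degradation on et-0/0/12",
    "symptoms": [
        {"event": "CRC errors rising on leaf3:et-0/0/12",
         "impacts": ["checkout-service latency +40ms"]},
        {"event": "ECMP path leaf3->spine1 withdrawn",
         "impacts": ["inventory-db replication lag"]},
    ],
}

def ring_labels(anomaly):
    """Flatten the hierarchy into (ring, label) pairs, one per wedge."""
    wedges = [(0, anomaly["root_cause"])]
    for symptom in anomaly["symptoms"]:
        wedges.append((1, symptom["event"]))
        wedges.extend((2, impact) for impact in symptom["impacts"])
    return wedges

# Text rendering of the rings, innermost first:
for ring, label in ring_labels(anomaly):
    print("  " * ring + label)
```

A charting library would draw the same (ring, label) pairs as concentric wedges; the point is that one correlated record, not a flood of separate alerts, drives the whole picture.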

All the above enhancements to Application Awareness are available now as part of the Apstra Data Center Director Premium license.  

Juniper also announced the availability of Service Level Expectation (SLE) dashboards, providing summary views of network and application health over time that can be highly valuable in tracking how well the network team is meeting the performance and availability expectations of application owners and end users. SLE dashboards for link health, system health, and fabric health synthesize dozens of network parameters over any chosen period to calculate a summary health metric and allow drill-down analysis of what types of issues impacted the health metric during that period. This helps network operations leaders get a clear picture of how well they are meeting the needs of the business and the most important areas for improvement that may need to be addressed through staffing, training, process changes, or investments in tools and technology. These SLE dashboards are available now.  
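The core mechanic here, collapsing many per-interval measurements into one summary health score, can be sketched as a weighted penalty average. The parameter names and weights below are illustrative assumptions, not Juniper's actual SLE formula.

```python
# Hypothetical link-health score: each sample is a dict of per-interval
# failure ratios (0.0 = no issue, 1.0 = issue present all interval).
# Weights are assumed for illustration only.
WEIGHTS = {"crc_errors": 0.4, "flaps": 0.4, "utilization_over_80": 0.2}

def link_health(samples):
    """Return a 0-100 health score for the period; 100 = no issues."""
    if not samples:
        return 100.0
    penalty = sum(
        WEIGHTS[key] * sample.get(key, 0.0)
        for sample in samples
        for key in WEIGHTS
    ) / len(samples)
    return round(100.0 * (1.0 - penalty), 1)

# A week with one bad interval out of four drags the score down:
week = [{}, {}, {"crc_errors": 0.5, "flaps": 1.0}, {}]
print(link_health(week))  # 85.0
```

Drill-down then amounts to asking which terms of the penalty sum dominated during the period, which is the analysis the dashboards expose.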

Juniper also announced that Marvis AI Assistant for Data Center, introduced in early 2024, is expanding to include powerful genAI capabilities that revolutionize the network operator’s experience of interacting with Juniper’s fabric management and assurance. Network operators can now use the Marvis natural-language query interface, driven by an AI large language model (LLM), to ask a vastly expanded range of questions and accomplish a far wider scope of operations tasks. Marvis AI Assistant now has deeper context about the data center system and a knowledge base that is updated dynamically, leading to far more accurate and relevant outcomes. Marvis AI is your data center’s partner, ready to take the controls and steer you clear of trouble.

Questions such as “Show me all the devices in the network that have exceeded 50% utilization over the past week” result in a near-instantaneous report in whatever format the operator specifies, eliminating what could be hours of effort to search for the right data and then synthesize and format it. In the future, commands such as “Add VLAN 123 to switch [IP address] port 18” will eliminate multiple point-and-click steps in the traditional GUI, allowing new services to be configured in seconds. Over time, we plan to expand the range of capabilities supported, allowing network operators to complete an increasingly large fraction of their day-to-day tasks simply by talking to Marvis AI Assistant.  
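For a sense of what the natural-language interface automates, the utilization question above boils down to a filter-and-aggregate over telemetry. The rows and field names below are illustrative assumptions, not Juniper's telemetry schema.

```python
# Hypothetical telemetry samples: one utilization reading per device per day.
telemetry = [
    {"device": "leaf1", "day": 1, "util_pct": 34},
    {"device": "leaf2", "day": 1, "util_pct": 61},
    {"device": "spine1", "day": 3, "util_pct": 72},
    {"device": "leaf2", "day": 5, "util_pct": 48},
]

def over_threshold(rows, pct=50, days=7):
    """Devices with any sample above `pct` within the last `days` days."""
    recent = [row for row in rows if row["day"] <= days]
    return sorted({row["device"] for row in recent if row["util_pct"] > pct})

print(over_threshold(telemetry))  # ['leaf2', 'spine1']
```

The assistant's job is to translate the spoken question into this kind of query, run it against live data, and format the result, rather than the operator doing each step by hand.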

The new capabilities in Marvis AI Assistant for Data Center will be available in the cloud-based Data Center Assurance environment integrated with the default cloud-based LLM used by Marvis today. For customers who require or prefer to use only on-premises tools and choose their own LLM for reasons such as security or regulatory compliance, an on-premises AI assistant, with similar capabilities and “bring your own” LLM compatibility, will also be available. Both cloud-based and on-premises versions will be available in late Q3 or early Q4 of 2025. 

 

The Latest

In live financial environments, capital markets software cannot pause for rebuilds. New capabilities are introduced as stacked technology layers to meet evolving demands while systems remain active, data keeps moving, and controls stay intact. AI is no exception, and its opportunities are significant: accelerated decision cycles, compressed manual workflows, and more effective operations across complex environments. The constraint isn't the models themselves, but the architectural environments they enter ...

Like most digital transformation shifts, organizations often prioritize productivity and leave security and observability struggling to keep pace. This usually translates to both the mass implementation of new technology and fragmented monitoring and observability (M&O) tooling. In the era of AI and varied cloud architecture, a disparate observability function can be dangerous. IT teams will lack a complete picture of their IT environment, making it harder to diagnose issues while slowing down mean time to resolve (MTTR). In fact, according to recent data from the SolarWinds State of Monitoring & Observability Report, 77% of IT personnel said the lack of visibility across their on-prem and cloud architecture was an issue ...

In MEAN TIME TO INSIGHT Episode 23, Shamus McGillicuddy, VP of Research, Network Infrastructure and Operations, at EMA discusses the NetOps labor shortage ... 

Technology management is evolving, and in turn, so is the scope of FinOps. The FinOps Foundation recently updated their mission statement from "advancing the people who manage the value of cloud" to "advancing the people who manage the value of technology." This seemingly small change solidifies a larger evolution: FinOps practitioners have organically expanded to be focused on more than just cloud cost optimization. Today, FinOps teams are largely — and quickly — expanding their job descriptions, evolving into a critical function for managing the full value of technology ...

Enterprises are under pressure to scale AI quickly. Yet despite considerable investment, adoption continues to stall. One of the most overlooked reasons is vendor sprawl ... In reality, no organization deliberately sets out to create sprawling vendor ecosystems. More often, complexity accumulates over time through well-intentioned initiatives, such as enterprise-wide digital transformation efforts, point solutions, or decentralized sourcing strategies ...

Nearly every conversation about AI eventually circles back to compute. GPUs dominate the headlines while cloud platforms compete for workloads and model benchmarks drive investment decisions. But underneath that noise, a quieter infrastructure challenge is taking shape. The real bottleneck in enterprise AI is not processing power, it is the ability to store, manage and retrieve the relentless volumes of data that AI systems generate, consume and multiply ...

The 2026 Observability Survey from Grafana Labs paints a vivid picture of an industry maturing fast, where AI is welcomed with careful conditions, SaaS economics are reshaping spending decisions, complexity remains a defining challenge, and open standards continue to underpin it all ...

The observability industry has an evolving relationship with AI. We're not skeptics, but it's clear that trust in AI must be earned ... In Grafana Labs' annual Observability Survey, 92% said they see real value in AI surfacing anomalies before they cause downtime. Another 91% endorsed AI for forecasting and root cause analysis. So while the demand is there, customers need it to be trustworthy, as the survey also found that the practitioners most enthusiastic about AI are also the most insistent on explainability ...

In the modern enterprise, the conversation around AI has moved past skepticism toward a stage of active adoption. According to our 2026 State of IT Trends Report: The Human Side of Autonomous AI, nearly 90% of IT professionals view AI as a net positive, and this optimism is well-founded. We are seeing agentic AI move beyond simple automation to actively streamline complex data insights and eliminate the manual toil that has long hindered innovation. However, as we integrate these autonomous agents into our ecosystems, the fundamental DNA of the IT role is evolving ...

AI workloads require an enormous amount of computing power ... What's also becoming abundantly clear is just how quickly AI's computing needs are leading to enterprise systems failure. According to Cockroach Labs' State of AI Infrastructure 2026 report, enterprise systems are much closer to failure than their organizations realize. The report ... suggests AI scale could cause widespread failures in as little as one year — making it a clear risk for business performance and reliability.
