
Virtana announced support for AWS Bedrock Guardrails within Virtana AI Factory Observability (AIFO), extending behavioral observability across enterprise LLM deployments on AWS Bedrock.
As organizations adopt generative AI for mission-critical workflows, the operational challenge shifts from deploying models to operating them securely at scale. AWS Bedrock Guardrails provides the enforcement layer, blocking harmful content, masking PII, and defending against prompt injection. Virtana AIFO delivers the intelligence layer, making Guardrails activity observable and surfacing the behavioral patterns that distinguish legitimate workloads from active adversarial campaigns. Together, they give enterprises the defense in depth required to operate AI with confidence in production. The integration is part of Virtana's continued expansion of AI Factory Observability across the environments where enterprises deploy, run, and secure AI at scale.
“Enterprises are making significant investments in generative AI across an expanding range of environments, and the governance expectations around those investments are rising fast,” said Paul Appleby, CEO of Virtana. “Running AI in production means being accountable for how it behaves wherever it is deployed. Virtana AIFO gives security and operations teams the operational intelligence to meet that standard across infrastructure, platforms, LLMs, and services like AWS Bedrock.”
AWS Bedrock Guardrails addresses content-level risk with a comprehensive, configurable set of safeguards that integrate directly into the generative AI workflow, filtering harmful content, masking PII, enforcing denied topics, validating contextual grounding, and running automated reasoning checks. These controls operate consistently across model inference, agents, knowledge bases, and multi-step flows, giving organizations a model-agnostic enforcement layer across their Bedrock environment. As enterprises run multiple foundation models through Bedrock for distinct workflows, maintaining consistent governance across each becomes an operational requirement in its own right.
Virtana AIFO monitors LLM behavioral patterns across AWS Bedrock deployments, treating every token pattern, utilization shift, and request anomaly as a potential security signal. When Guardrails intervention rates spike, when prompt token volume surges outside normal operating bounds, when request failure rates climb against a specific model, those patterns carry intelligence. They signal whether an anomaly reflects a configuration issue, a performance degradation, or an organized adversarial campaign testing the boundaries of enterprise AI defenses. Virtana AIFO surfaces that signal in real time, connecting Guardrails activity to the full behavioral context of the LLM environment.
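The kind of spike detection described above can be sketched with a simple rolling-baseline z-score check. This is an illustrative sketch, not Virtana's implementation; the sample data, window size, and threshold are assumptions chosen for the example.

```python
from statistics import mean, stdev

def detect_spikes(series, window=12, z_threshold=3.0):
    """Flag indices whose value deviates more than z_threshold
    standard deviations from the trailing-window baseline."""
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            # Flat baseline: skip rather than divide by zero.
            continue
        if (series[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Hypothetical per-minute Guardrails intervention counts: steady
# traffic followed by a sudden surge that may indicate probing.
interventions = [4, 5, 3, 6, 4, 5, 4, 6, 5, 4, 5, 6, 48]
print(detect_spikes(interventions))  # → [12] (the surge is flagged)
```

The same check applies to any of the signals the release names, such as prompt token volume or request failure counts; a production system would add seasonality handling and per-model baselines on top of this idea.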
Virtana addresses this operational requirement by delivering:
- Guardrails intervention monitoring tracks trigger frequency, blocked-topic patterns, and intervention rate trends by model, so security teams can detect active adversarial campaigns, not just individual blocked events
- Token-level behavioral analysis monitors prompt and completion token volumes, Time to First Token (TTFT), and request throughput to surface anomalous consumption patterns that indicate adversarial probing or data exfiltration attempts
- Request failure rate tracking identifies elevated failure rates as signals of credential misuse, adversarial probing, or Guardrails evasion activity across foundation model deployments
- Historical trend analysis correlates token volume spikes and Guardrails trigger patterns with known events or surfaces unknown anomalies, giving operations teams the context to distinguish legitimate workloads from active threats
- Cross-model visibility provides a unified operational view across all foundation models in the Bedrock environment to detect behavioral anomalies and support consistent AI governance across the enterprise LLM estate
- On-premises deployment with tenant-level data segregation and support for customer-managed LLMs meets the data sovereignty and compliance requirements of regulated industries including healthcare, financial services, and government
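The request-failure-rate tracking and cross-model visibility capabilities above can be illustrated with a small aggregation sketch. This is not Virtana's implementation; the event shape, model identifiers, and thresholds are hypothetical stand-ins for what would come from Bedrock invocation logs.

```python
from collections import defaultdict

def failure_rates_by_model(events):
    """Aggregate (model_id, succeeded) request events into
    per-model failure rates across the Bedrock estate."""
    totals = defaultdict(lambda: [0, 0])  # model_id -> [failures, total]
    for model_id, succeeded in events:
        totals[model_id][1] += 1
        if not succeeded:
            totals[model_id][0] += 1
    return {m: f / t for m, (f, t) in totals.items()}

def flag_elevated(rates, baseline=0.02, factor=5.0):
    """Flag models whose failure rate exceeds factor x baseline,
    the kind of elevation the release associates with credential
    misuse or Guardrails-evasion probing."""
    return sorted(m for m, r in rates.items() if r > baseline * factor)

# Hypothetical request outcomes for two models: one healthy,
# one showing an anomalous failure rate.
events = (
    [("model-a", True)] * 97 + [("model-a", False)] * 3
    + [("model-b", True)] * 60 + [("model-b", False)] * 40
)
print(flag_elevated(failure_rates_by_model(events)))  # → ['model-b']
```

A unified view like this, maintained per model and compared against historical baselines, is what lets a team tell a single misconfigured client apart from a coordinated campaign probing several models at once.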
“Agentic AI systems introduce attack surfaces that content-level enforcement alone cannot address,” said Amitkumar Rathi, Chief Product Officer at Virtana. “By extending AI Factory Observability into AWS Bedrock environments, we give organizations visibility into the behavioral layer that sits above content filtering, such as token consumption patterns, Guardrails intervention rates and request anomalies, so security and platform teams can identify active threat campaigns and understand the full operational context of their LLM estate in production.”