Automating While Keeping Humans in the Loop

Gurjeet Arora
Observo AI

We're inching ever closer toward a long-held goal: technology infrastructure so automated that it can protect itself. But as IT leaders aggressively employ automation across our enterprises, we need to continuously reassess what AI is ready to manage autonomously and what cannot yet be trusted to algorithms.

CISOs receive a flood of security data from each of their security applications, so much that a good portion has traditionally been discarded. Storing it costs money, and combing through it for useful information has been prohibitively time-consuming. But now, with the arrival of agentic AI, security specialists are using new platforms to find needles in haystacks, resulting in faster threat detection and sharper insight. So why not just set the agents loose?

Artificial intelligence can make security and operations more efficient, but algorithms that run without human oversight can expose sensitive data and trigger critical mistakes: granting unauthorized access to production systems, deleting audit logs required for regulatory compliance, or sharing proprietary telemetry with external models that might train on it. Security analysts also need assurance that algorithms are not silently discarding critical telemetry. Balancing machine efficiency with human judgment keeps experts in control from first connection to final decision; these systems must be designed to include humans in the loop.

Transparency Before Action

Good governance starts with AI that informs before it acts. Instead of executing silent drop rules, modern observability platforms surface recommended transformations alongside "before, after, and delta" views of the data stream. Security analysts can inspect the logic, test the impact, and approve or refine changes. That workflow flips the traditional black-box pattern on its head and builds an auditable trail that satisfies compliance teams and board committees.

The cost of discarding important security data was significant after the October 2023 credential-stuffing breach at genetic-testing provider 23andMe. Hackers compromised a tiny slice of user accounts, yet ultimately exposed ancestry data on 6.9 million people because abnormal access patterns were not flagged quickly enough. Theoretically, an explainable AI layer that highlighted the surge in calls to high-value records and let analysts drill into that anomaly with natural-language search might have tightened the response window.

Ensuring a Human-in-the-Loop Architecture

A practical human-in-the-loop (HITL) pattern has three pillars:

1. Advisory algorithms that monitor velocity, completeness, and pattern shifts, but generate recommendations rather than direct edits.
2. Granular approvals that let security or observability staff accept, reject, or fine-tune each suggestion, so the tool never outruns policy.
3. Continuous learning loops that relay the choices people make back into the model, so future suggestions align better with enterprise priorities.
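One pass of the three-pillar loop can be sketched as a single function (the callables here are placeholders, assuming the advisory model, the human decision UI, and the feedback store already exist):

```python
def hitl_cycle(suggestions, decide, record_feedback):
    """One pass of the three-pillar loop: advise -> approve/reject/refine -> learn.

    suggestions:     advisory output from the model (pillar 1)
    decide:          callable returning "accept", "reject", or a refined
                     suggestion (pillar 2: the human stays between
                     recommendation and action)
    record_feedback: callable that logs each decision as a training
                     signal for the model (pillar 3)
    """
    applied = []
    for s in suggestions:
        verdict = decide(s)
        record_feedback(s, verdict)    # every decision becomes a learning signal
        if verdict == "accept":
            applied.append(s)
        elif verdict != "reject":      # analyst supplied a refined version
            applied.append(verdict)
    return applied
```

Note that nothing reaches `applied` without passing through `decide`; the model only ever proposes.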

By delegating repetitive parsing and noise reduction to machines while preserving expert veto power, enterprises can cut data volumes by double-digit percentages yet retain the evidence chain needed for incident response. The reduction also fights alert fatigue, freeing analysts for threat hunting and architecture work that adds strategic value.

Keep Your Data, Keep Your Edge

Protecting enterprise telemetry is essential to preserving competitive advantage. This data is far more than operational exhaust. It's a proprietary asset that reveals the inner logic of business processes and the behavior of users. When a vendor or a public model is allowed to ingest and train on that asset, the organization risks handing competitors a map of its most valuable insights.

CIOs can guard against that outcome by insisting on three interconnected safeguards. They should start with data-plane sovereignty, keeping models inside their own virtual private cloud or on-premises so that raw events never cross into multi-tenant territory. They must then secure strong contractual protection that blocks providers from pooling or fine-tuning foundation models with customer logs, metrics, or prompts. Finally, they need portable intelligence: every transformation rule and enrichment step should be exportable in human-readable form, ensuring that teams can migrate or rebuild their workflows without being trapped by a single vendor's ecosystem.
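The portable-intelligence safeguard can be as simple as a round-trippable, vendor-neutral export. A sketch (field names are illustrative; a real rule schema would be richer):

```python
import json

def export_rules(rules: list[dict]) -> str:
    """Serialize transformation rules to plain JSON, keeping only the
    vendor-neutral fields so the rule set survives a platform migration."""
    portable = [
        {"name": r["name"], "match": r["match"], "action": r["action"]}
        for r in rules
    ]
    return json.dumps(portable, indent=2, sort_keys=True)

def import_rules(blob: str) -> list[dict]:
    """Rebuild the rule set from the exported text on any platform."""
    return json.loads(blob)
```

The test of portability is the round trip: if the export cannot reconstruct the workflow without the vendor's internal identifiers, the organization is still locked in.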

Forward-looking organizations are reinforcing these measures with internal retrieval-augmented generation services. By storing long-term memory behind the firewall and sharing only transient embeddings with external APIs, they reduce exposure while still benefiting from large-language-model capabilities. Complementary retention policies keep traces and audit artifacts searchable for compliance purposes but out of reach for external model training, sealing off yet another avenue for unintended data leakage.
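One piece of that boundary discipline is making sure raw values never cross to an external API. A minimal sketch of field-level pseudonymization (illustrative only; production systems would use keyed hashing or tokenization with a vault):

```python
import hashlib

def pseudonymize(event: dict, sensitive_keys: set) -> dict:
    """Replace sensitive field values with stable tokens before a payload
    leaves the trust boundary. The originals stay in the internal store;
    the same input always maps to the same token, so correlation still works."""
    return {
        k: hashlib.sha256(str(v).encode()).hexdigest()[:12] if k in sensitive_keys else v
        for k, v in event.items()
    }
```

Because the tokens are deterministic, anomaly detection on the external side can still group events by user without ever seeing who the user is.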

From Pilot to Production

Moving from lab to line of business requires the same discipline applied to any change-management initiative. Start with a target domain, such as DNS logs, where excessive chatter drives up cost. Baseline current volume, false-positive rates, and analyst hours. Introduce the HITL workflow, capture approvals, and measure the deltas over 30 days. Early wins both fund expansion and educate stakeholders on the transparency controls that keep risk in check.

Metrics worth tracking include reduction percentage by source, median query latency, and mean time to detect true positives. Dashboards that visualize these figures against cost and staffing levels help non-technical executives see the connection between explainable automation and business outcomes.
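The three metrics above roll up into a small, dashboard-ready report; a sketch, assuming volume is measured in GB/day, query latency in milliseconds, and detection time in minutes (units and names are illustrative):

```python
from statistics import median

def pilot_report(baseline_gb: float, current_gb: float,
                 query_ms: list[float], detect_minutes: list[float]) -> dict:
    """Summarize a 30-day HITL pilot: volume reduction by source,
    median query latency, and mean time to detect true positives."""
    return {
        "reduction_pct": round(100 * (baseline_gb - current_gb) / baseline_gb, 1),
        "median_query_ms": median(query_ms),
        "mean_time_to_detect_min": sum(detect_minutes) / len(detect_minutes),
    }
```

Computing the same report against the pre-pilot baseline and the post-pilot numbers gives executives the delta in a single comparison.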

Action Plan for CIOs

1. Map data classifications and designate which streams can be candidates for automated optimization.
2. Mandate that any AI vendor supply an explanation interface and approval gate before data mutations.
3. Require on-premises or single-tenant deployment options for sensitive workloads, with no secondary use of telemetry for model training.
4. Establish a feedback mechanism so analysts can label false positives and guide model evolution.
5. Report quarterly on volume savings, alert accuracy, and response time to demonstrate ROI.

Conclusion

Agentic security automation's success hinges on trust. CIOs need clear evidence that every algorithmic choice can be traced, explained, and aligned with corporate risk tolerance and growth priorities. Transparent models that route recommendations through human approval paths and keep sensitive telemetry within controlled boundaries give leaders the assurance that AI is being applied responsibly. When that transparency is paired with contractual and technical safeguards that prevent external training on proprietary data, organizations gain a defense posture that learns at machine speed while remaining governed by human intent. Such an approach advances three top-level objectives at once: stronger resilience against evolving threats, lower operational costs through efficient data handling, and faster innovation that turns security insights into strategic advantage.

Gurjeet Arora is CEO and Co-Founder of Observo AI
