
Automating While Keeping Humans in the Loop

Gurjeet Arora
Observo AI

We're inching ever closer toward a long-held goal: technology infrastructure so automated that it can protect itself. But as IT leaders, we aggressively employ automation across our enterprises, and we need to continuously reassess what AI is ready to manage autonomously and what cannot yet be trusted to algorithms.

CISOs receive a flood of security data from each of their security applications, so much that a good portion has traditionally been discarded. Storing it costs money, and combing through it for useful information has been prohibitively time-consuming. But now, with the arrival of agentic AI, security specialists are using new platforms to find the needles in those haystacks, yielding faster threat detection and sharper insight. So why not just set the agents loose?

Artificial intelligence can make security and operations more efficient, but algorithms that run without human oversight can expose sensitive data and trigger critical mistakes: granting unauthorized access to production systems, deleting audit logs required for regulatory compliance, or sharing proprietary telemetry with external models that might train on it. Security analysts also need assurance that algorithms are not discarding critical telemetry or exposing proprietary information. Balancing machine efficiency with human judgment keeps experts in control from first connection to final decision, which means these systems need to be designed with humans in the loop.

Transparency Before Action

Good governance starts with AI that informs before it acts. Instead of executing silent drop rules, modern observability platforms surface recommended transformations alongside "before, after, and delta" views of the data stream. Security analysts can inspect the logic, test the impact, and approve or refine changes. That workflow flips the traditional black-box pattern on its head and builds an auditable trail that satisfies compliance teams and board committees.
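The "before, after, and delta" preview can be sketched in a few lines. This is an illustrative example, not any particular platform's API; the event fields and the drop rule are hypothetical.

```python
# Hypothetical sketch: preview a recommended drop rule against a sample
# of events before any change is applied to the live stream.

def preview_transformation(events, drop_rule):
    """Apply drop_rule to a sample and report before/after/delta counts."""
    kept = [e for e in events if not drop_rule(e)]
    dropped = [e for e in events if drop_rule(e)]
    return {
        "before": len(events),          # events in the sample
        "after": len(kept),             # events that would survive
        "delta": len(dropped),          # events that would be removed
        "sample_dropped": dropped[:3],  # examples for analyst review
    }

# Example: the AI recommends dropping verbose health-check logs.
events = [
    {"source": "lb", "msg": "GET /healthz 200"},
    {"source": "app", "msg": "user login failed"},
    {"source": "lb", "msg": "GET /healthz 200"},
]
rule = lambda e: "/healthz" in e["msg"]
report = preview_transformation(events, rule)
# report["before"] == 3, report["after"] == 1, report["delta"] == 2
```

An analyst reviews the report, including the sample of events that would be dropped, before the rule ever touches production data.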

The cost of discarding important security data became clear after the October 2023 credential-stuffing breach at genetic-testing provider 23andMe. Hackers compromised a tiny slice of user accounts, yet ultimately exposed ancestry data on 6.9 million people because abnormal access patterns were not flagged quickly enough. Theoretically, an explainable AI layer that highlighted the surge in calls to high-value records and let analysts drill into that anomaly using natural-language search might have tightened the response window.

Ensuring a Human-in-the-Loop Architecture

A practical human-in-the-loop (HITL) pattern has three pillars:

1. Advisory algorithms that monitor velocity, completeness, and pattern shifts but generate recommendations rather than direct edits.
2. Granular approvals that let security or observability staff accept, reject, or fine-tune each suggestion, so the tool never outruns policy.
3. Continuous learning loops that relay the choices people make back into the model, so future suggestions align better with enterprise priorities.
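The three pillars can be sketched as a small pipeline. The class and field names below are hypothetical, chosen only to show the shape of the pattern: the model advises, a human approves, and every decision is recorded as feedback.

```python
# Minimal sketch of the three HITL pillars; names are illustrative,
# not a specific product's API.

from dataclasses import dataclass, field

@dataclass
class Recommendation:
    description: str            # e.g. "drop duplicate events"
    estimated_reduction: float  # fraction of volume affected

@dataclass
class HITLPipeline:
    feedback: list = field(default_factory=list)

    def advise(self, stats):
        """Pillar 1: advisory only. Emit a recommendation, never an edit."""
        if stats.get("duplicate_ratio", 0) > 0.3:
            return Recommendation("drop duplicate events",
                                  stats["duplicate_ratio"])
        return None

    def apply_if_approved(self, rec, approved_by):
        """Pillar 2: nothing changes without an explicit human decision."""
        decision = {"rec": rec.description,
                    "approved": approved_by is not None,
                    "by": approved_by}
        # Pillar 3: every decision is retained to guide future suggestions.
        self.feedback.append(decision)
        return decision["approved"]

pipeline = HITLPipeline()
rec = pipeline.advise({"duplicate_ratio": 0.45})
applied = pipeline.apply_if_approved(rec, approved_by="analyst@example.com")
# applied is True, and the decision is recorded in pipeline.feedback
```

In a real deployment the feedback list would flow back into model training or rule scoring; the point of the sketch is that the approval gate sits between every recommendation and every change.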

By delegating repetitive parsing and noise reduction to machines while preserving expert veto power, enterprises can cut data volumes by double digits yet retain the evidence chain needed for incident response. The reduction also fights alert fatigue, freeing analysts for hunting and architecture work that adds strategic value.

Keep Your Data, Keep Your Edge

Protecting enterprise telemetry is essential to preserving competitive advantage. This data is far more than operational exhaust. It's a proprietary asset that reveals the inner logic of business processes and the behavior of users. When a vendor or a public model is allowed to ingest and train on that asset, the organization risks handing competitors a map of its most valuable insights.

CIOs can guard against that outcome by insisting on three interconnected safeguards. They should start with data-plane sovereignty, keeping models inside their own virtual private cloud or on-premises so that raw events never cross into multi-tenant territory. They must then secure strong contractual protection that blocks providers from pooling or fine-tuning foundation models with customer logs, metrics, or prompts. Finally, they need portable intelligence: every transformation rule and enrichment step should be exportable in human-readable form, ensuring that teams can migrate or rebuild their workflows without being trapped by a single vendor's ecosystem.
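The "portable intelligence" safeguard is easy to demonstrate: transformation rules serialized to a human-readable format can be reviewed, diffed, and rebuilt on another platform. The rule schema below is an assumption for illustration, not a standard.

```python
# Sketch of exporting transformation rules in a human-readable,
# vendor-neutral form. The schema is hypothetical.

import json

rules = [
    {"name": "drop-healthchecks",
     "match": {"field": "msg", "contains": "/healthz"},
     "action": "drop",
     "approved_by": "analyst@example.com"},
    {"name": "mask-emails",
     "match": {"field": "msg", "regex": r"[\w.]+@[\w.]+"},
     "action": "redact"},
]

exported = json.dumps(rules, indent=2)  # reviewable, diff-able text
restored = json.loads(exported)
assert restored == rules  # round-trips without loss
```

Because the export is plain text, it can live in version control alongside the approvals that authorized each rule, which also strengthens the audit trail described earlier.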

Forward-looking organizations are reinforcing these measures with internal retrieval-augmented generation services. By storing long-term memory behind the firewall and sharing only transient embeddings with external APIs, they reduce exposure while still benefiting from large-language-model capabilities. Complementary retention policies keep traces and audit artifacts searchable for compliance purposes but out of reach for external model training, sealing off yet another avenue for unintended data leakage.
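The behind-the-firewall retrieval pattern can be sketched with a purely local index: documents and their vectors stay inside the boundary, and at most a transient query vector would ever be shared externally. The bag-of-letters `embed` function below is a toy stand-in for an internal embedding model; all names are illustrative.

```python
# Toy sketch of local retrieval: the document store and its embeddings
# never leave the enterprise boundary.

import math

def embed(text):
    """Stand-in for an internal embedding model: normalized letter counts."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - 97] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

# Long-term memory kept behind the firewall.
store = {doc: embed(doc) for doc in [
    "audit log retention policy",
    "dns drop rule change",
    "vpn incident runbook",
]}

def retrieve(query, k=1):
    """Rank stored documents by cosine similarity to the query vector."""
    qv = embed(query)
    ranked = sorted(store,
                    key=lambda d: -sum(a * b for a, b in zip(qv, store[d])))
    return ranked[:k]

retrieve("incident vpn")  # -> ['vpn incident runbook']
```

A production system would use a real embedding model and vector database, but the boundary is the same: retrieval happens locally, and only ephemeral representations cross to any external API.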

From Pilot to Production

Moving from lab to line of business requires the same discipline applied to any change-management initiative. Start with a target domain, such as DNS logs, where excessive chatter drives up cost. Baseline current volume, false-positive rates, and analyst hours. Introduce the HITL workflow, capture approvals, and measure the deltas over 30 days. Early wins both fund expansion and educate stakeholders on the transparency controls that keep risk in check.

Metrics worth tracking include reduction percentage by source, median query latency, and mean time to detect true positives. Dashboards that visualize these figures against cost and staffing levels help non-technical executives see the connection between explainable automation and business outcomes.
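These metrics are straightforward to compute once baseline and current figures are captured. The numbers below are hypothetical pilot data, used only to show the arithmetic.

```python
# Illustrative calculation of the pilot metrics above; all figures
# are hypothetical.

from statistics import median

# Daily ingest volume (GB) before and after the HITL workflow.
baseline_gb = {"dns": 120.0, "firewall": 300.0}
current_gb  = {"dns": 54.0,  "firewall": 255.0}

# Reduction percentage by source.
reduction_pct = {src: round(100 * (baseline_gb[src] - current_gb[src])
                            / baseline_gb[src], 1)
                 for src in baseline_gb}
# {'dns': 55.0, 'firewall': 15.0}

# Median query latency (ms) across a sample of analyst queries.
query_latencies_ms = [120, 95, 110, 400, 105]
median_latency = median(query_latencies_ms)  # 110

# Mean time to detect (hours) for confirmed true positives.
detect_hours = [2.5, 4.0, 1.5]
mttd = sum(detect_hours) / len(detect_hours)  # ~2.67
```

Tracking the same three figures before and after each rollout phase gives executives a consistent, explainable view of what the automation is actually delivering.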

Action Plan for CIOs

1. Map data classifications and designate which streams can be candidates for automated optimization.
2. Mandate that any AI vendor supply an explanation interface and approval gate before data mutations.
3. Require on-premises or single-tenant deployment options for sensitive workloads, with no secondary use of telemetry for model training.
4. Establish a feedback mechanism so analysts can label false positives and guide model evolution.
5. Report quarterly on volume savings, alert accuracy, and response time to demonstrate ROI.

Conclusion

Agentic security automation's success hinges on trust. CIOs need clear evidence that every algorithmic choice can be traced, explained, and aligned with corporate risk tolerance and growth priorities. Transparent models that route recommendations through human approval paths and keep sensitive telemetry within controlled boundaries give leaders the assurance that AI is being applied responsibly. When that transparency is paired with contractual and technical safeguards that prevent external training on proprietary data, organizations gain a defense posture that learns at machine speed while remaining governed by human intent. Such an approach advances three top-level objectives at once: stronger resilience against evolving threats, lower operational costs through efficient data handling, and faster innovation that turns security insights into strategic advantage.

Gurjeet Arora is CEO and Co-Founder of Observo AI

