
Automating While Keeping Humans in the Loop

Gurjeet Arora
Observo AI

We're inching ever closer toward a long-held goal: technology infrastructure so automated that it can protect itself. But as IT leaders aggressively employ automation across our enterprises, we need to continuously reassess what AI is ready to manage autonomously and what cannot yet be trusted to algorithms.

CISOs get a ton of security data from each of their security applications, so much that a good portion has traditionally been discarded. Storing it costs money, and looking through it for useful information has been prohibitively time-consuming. But now, with the arrival of agentic AI, security specialists are using new platforms to find the needles in the haystack, resulting in faster threat detection and sharper insight. So why not just set the agents loose?

Artificial intelligence can make security and operations more efficient, but algorithms that run without human oversight can expose sensitive data and trigger critical mistakes, such as granting unauthorized access to production systems, deleting audit logs required for regulatory compliance, or sharing proprietary telemetry with external models that might train on it. Security analysts also need assurance that algorithms are not discarding critical telemetry. Balancing machine efficiency with human judgment means keeping experts in control from first connection to final decision: these systems need to be designed with humans in the loop.

Transparency Before Action

Good governance starts with AI that informs before it acts. Instead of executing silent drop rules, modern observability platforms surface recommended transformations alongside "before, after, and delta" views of the data stream. Security analysts can inspect the logic, test the impact, and approve or refine changes. That workflow flips the traditional black-box pattern on its head and builds an auditable trail that satisfies compliance teams and board committees.
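
To make that pattern concrete, here is a minimal Python sketch of the recommend-preview-approve flow. It is an illustration only, not any vendor's actual interface; the event fields, rule format, and function names are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DropRecommendation:
    """An AI-suggested filter, surfaced for human review instead of applied silently."""
    description: str                    # plain-language rationale the analyst can inspect
    predicate: Callable[[dict], bool]   # returns True for events the rule would drop

def preview(events: list[dict], rec: DropRecommendation) -> dict:
    """Show before/after/delta so the analyst can judge the impact before approving."""
    kept = [e for e in events if not rec.predicate(e)]
    return {
        "before": len(events),
        "after": len(kept),
        "delta": len(events) - len(kept),
        "sample_dropped": [e for e in events if rec.predicate(e)][:5],
    }

def apply_if_approved(events: list[dict], rec: DropRecommendation, approved: bool) -> list[dict]:
    """The gate: nothing changes in the pipeline until a human says yes."""
    return [e for e in events if not rec.predicate(e)] if approved else events

# Example: the model recommends dropping verbose health-check logs (hypothetical field names).
rec = DropRecommendation(
    description="Drop 200-status health-check entries from the load balancer",
    predicate=lambda e: e.get("path") == "/healthz" and e.get("status") == 200,
)
events = [{"path": "/healthz", "status": 200}, {"path": "/login", "status": 401}]
print(preview(events, rec))                       # analyst inspects before/after/delta
clean = apply_if_approved(events, rec, approved=True)
```

The preview output doubles as the auditable trail: what was recommended, what it would remove, and who approved it.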

The cost of discarding important security data was significant after the October 2023 credential-stuffing breach at genetic-testing provider 23andMe. Hackers compromised a tiny slice of user accounts, yet ultimately exposed ancestry data on 6.9 million people because abnormal access patterns were not flagged quickly enough. Theoretically, an explainable AI layer that highlighted the surge in calls to high-value records and let analysts drill into that anomaly with natural-language search might have tightened the response window.
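
Purely as a hypothetical sketch, and with no claim about 23andMe's actual systems, an explainable check of that kind could be as simple as comparing per-account access to high-value records against a rolling baseline and attaching a plain-language reason to each alert. The field names and threshold below are assumptions.

```python
from collections import Counter

def flag_access_surge(events: list[dict], baseline_per_user: float, factor: float = 5.0) -> list[str]:
    """Flag accounts whose reads of high-value records far exceed the historical baseline.

    Events are assumed to look like {"user": ..., "record_class": "high_value"};
    baseline_per_user is the typical number of such reads per account per hour.
    """
    counts = Counter(e.get("user", "unknown") for e in events if e.get("record_class") == "high_value")
    alerts = []
    for user, n in counts.items():
        if n > factor * baseline_per_user:
            # The explanation travels with the alert so an analyst can judge it quickly.
            alerts.append(f"{user}: {n} high-value reads vs. baseline {baseline_per_user:.1f} "
                          f"(more than {factor}x); review before any automated block")
    return alerts

# Usage with invented numbers: 40 reads against a baseline of 2 per hour triggers an alert.
print(flag_access_surge([{"user": "acct-123", "record_class": "high_value"}] * 40, baseline_per_user=2.0))
```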

Ensuring a Human-in-the-Loop Architecture

A practical human-in-the-loop (HITL) pattern has three pillars:

1. Advisory algorithms that monitor velocity, completeness, and pattern shifts, but generate recommendations rather than direct edits.
2. Granular approvals that let security or observability staff accept, reject, or fine-tune each suggestion, ensuring the tool never outruns policy.
3. Continuous learning feedback loops that relay the choices people make back into the model, so future suggestions align better with enterprise priorities (a minimal sketch of this workflow follows the list).
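
A rough sketch of the third pillar, with invented file locations and field names, might capture every accept, reject, or fine-tune decision as labeled feedback that a later retraining job can consume:

```python
import json
import time
from pathlib import Path

FEEDBACK_LOG = Path("hitl_feedback.jsonl")  # hypothetical location; retraining jobs read from here

def record_decision(recommendation_id: str, decision: str, analyst: str, note: str = "") -> None:
    """Pillar 3: every accept/reject/modify is captured so the model learns enterprise priorities."""
    assert decision in {"accepted", "rejected", "modified"}
    entry = {
        "ts": time.time(),
        "recommendation_id": recommendation_id,
        "decision": decision,
        "analyst": analyst,
        "note": note,  # e.g. "keep auth failures even if noisy"
    }
    with FEEDBACK_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

# A periodic training job can then weight future suggestions by these labels,
# for example down-ranking rule types that analysts repeatedly reject.
record_decision("rec-0042", "rejected", "analyst-7", "audit logs must be retained for compliance")
```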

By delegating repetitive parsing and noise reduction to machines while preserving expert veto power, enterprises can cut data volumes by double-digit percentages yet retain the evidence chain needed for incident response. The reduction also fights alert fatigue, freeing analysts for threat hunting and architecture work that adds strategic value.

Keep Your Data, Keep Your Edge

Protecting enterprise telemetry is essential to preserving competitive advantage. This data is far more than operational exhaust. It's a proprietary asset that reveals the inner logic of business processes and the behavior of users. When a vendor or a public model is allowed to ingest and train on that asset, the organization risks handing competitors a map of its most valuable insights.

CIOs can guard against that outcome by insisting on three interconnected safeguards. They should start with data-plane sovereignty, keeping models inside their own virtual private cloud or on-premises so that raw events never cross into multi-tenant territory. They must then secure strong contractual protection that blocks providers from pooling or fine-tuning foundation models with customer logs, metrics, or prompts. Finally, they need portable intelligence: every transformation rule and enrichment step should be exportable in human-readable form, ensuring that teams can migrate or rebuild their workflows without being trapped by a single vendor's ecosystem.
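
As one hedged illustration of portable intelligence, transformation and enrichment rules can live in a plain, declarative structure that exports cleanly to human-readable files. The schema below is invented for the example, not a standard.

```python
import json

# A hypothetical, vendor-neutral representation of pipeline rules. The point is that
# the logic lives in data a human can read, not inside a proprietary model or binary.
pipeline_rules = [
    {"name": "drop_health_checks", "type": "filter",
     "condition": {"field": "path", "equals": "/healthz"}, "action": "drop"},
    {"name": "mask_emails", "type": "transform",
     "condition": {"field": "message", "matches": r"[\w.]+@[\w.]+"},
     "action": "redact", "replacement": "<email>"},
    {"name": "enrich_geoip", "type": "enrichment",
     "source_field": "client_ip", "target_field": "client_geo"},
]

def export_rules(path: str = "pipeline_rules.json") -> None:
    """Write the rules in human-readable form so a team can migrate or rebuild them elsewhere."""
    with open(path, "w") as f:
        json.dump(pipeline_rules, f, indent=2)

export_rules()
```

A team that holds this file can re-implement the same logic on another platform, which is exactly the leverage a CIO wants in a renewal negotiation.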

Forward-looking organizations are reinforcing these measures with internal retrieval-augmented generation services. By storing long-term memory behind the firewall and sharing only transient embeddings with external APIs, they reduce exposure while still benefiting from large-language-model capabilities. Complementary retention policies keep traces and audit artifacts searchable for compliance purposes but out of reach for external model training, sealing off yet another avenue for unintended data leakage.
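
Reading that pattern loosely, and inventing every name below, a minimal sketch keeps documents and their vector index in an in-house store so that any external call is limited to a single transient exchange rather than a copy of long-term memory:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for the embedding step (a local model, or a transient call to an external
    API). The point of the pattern: nothing sent out is retained as long-term memory."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))  # placeholder vector for the sketch
    return rng.standard_normal(384)

class InternalRAGStore:
    """Long-term memory stays behind the firewall: raw documents and vectors live only here."""
    def __init__(self) -> None:
        self.docs: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, doc: str) -> None:
        self.docs.append(doc)
        self.vectors.append(embed(doc))

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        sims = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9)) for v in self.vectors]
        top = sorted(range(len(sims)), key=sims.__getitem__, reverse=True)[:k]
        # Only the k retrieved snippets for this one request would accompany a prompt to an
        # external model; the store itself is never exported.
        return [self.docs[i] for i in top]
```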

From Pilot to Production

Moving from lab to line of business requires the same discipline applied to any change-management initiative. Start with a target domain, such as DNS logs, where excessive chatter drives up cost. Baseline current volume, false-positive rates, and analyst hours. Introduce the HITL workflow, capture approvals, and measure the deltas over 30 days. Early wins both fund expansion and educate stakeholders on the transparency controls that keep risk in check.

Metrics worth tracking include reduction percentage by source, median query latency, and mean time to detect true positives. Dashboards that visualize these figures against cost and staffing levels help non-technical executives see the connection between explainable automation and business outcomes.
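
As an illustrative sketch only, with assumed event shapes rather than any standard definitions, those figures can be computed directly from pipeline and incident records:

```python
from statistics import median

def reduction_pct(bytes_in: int, bytes_out: int) -> float:
    """Reduction percentage for one source: how much volume the approved rules removed."""
    return 100.0 * (bytes_in - bytes_out) / bytes_in if bytes_in else 0.0

def median_query_latency_ms(latencies_ms: list[float]) -> float:
    return median(latencies_ms)

def mean_time_to_detect_hours(incidents: list[dict]) -> float:
    """Mean gap between when a true-positive event occurred and when it was detected.
    Each incident is assumed to carry 'occurred_at' and 'detected_at' as epoch seconds."""
    gaps = [(i["detected_at"] - i["occurred_at"]) / 3600 for i in incidents]
    return sum(gaps) / len(gaps) if gaps else 0.0

# Example figures for a 30-day pilot on DNS logs (numbers invented for illustration):
print(reduction_pct(bytes_in=12_000, bytes_out=7_800))                         # 35.0 (% reduction)
print(median_query_latency_ms([110.0, 95.0, 130.0, 102.0]))                    # 106.0 ms
print(mean_time_to_detect_hours([{"occurred_at": 0, "detected_at": 7200}]))    # 2.0 hours
```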

Action Plan for CIOs

1. Map data classifications and designate which streams can be candidates for automated optimization.
2. Mandate that any AI vendor supply an explanation interface and approval gate before data mutations.
3. Require on-premises or single-tenant deployment options for sensitive workloads, with no secondary use of telemetry for model training.
4. Establish a feedback mechanism so analysts can label false positives and guide model evolution.
5. Report quarterly on volume savings, alert accuracy, and response time to demonstrate ROI.

Conclusion

Agentic security automation's success hinges on trust. CIOs need clear evidence that every algorithmic choice can be traced, explained, and aligned with corporate risk tolerance and growth priorities. Transparent models that route recommendations through human approval paths and keep sensitive telemetry within controlled boundaries give leaders the assurance that AI is being applied responsibly. When that transparency is paired with contractual and technical safeguards that prevent external training on proprietary data, organizations gain a defense posture that learns at machine speed while remaining governed by human intent. Such an approach advances three top-level objectives at once: stronger resilience against evolving threats, lower operational costs through efficient data handling, and faster innovation that turns security insights into strategic advantage.

Gurjeet Arora is CEO and Co-Founder of Observo AI
