Turning Foresight into Resilience: Reclaiming Prevention in the Age of Exposure

Garrett Hamilton
Reach Security

Cloudflare's recent outage is a stark reminder of how concentrated the internet has become. When a single infrastructure provider experiences disruption, the impact is immediate and global. In this case, a faulty internal database configuration bloated a key file, disrupting services worldwide until engineers rolled back the change. While there was no evidence of malicious activity, the incident underscores a broader issue: even routine anomalies can create outsized operational risk.
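The failure mode is instructive: a generated artifact quietly outgrew the limits its consumers could handle. A lightweight pre-deployment guard can catch exactly this class of problem. The sketch below is illustrative only, not Cloudflare's actual pipeline; the baseline size, headroom factor, and JSON format are all assumptions.

    import json
    import sys
    from pathlib import Path

    # Hypothetical baseline: the largest size (bytes) this generated file
    # has legitimately reached, plus headroom before we refuse to ship it.
    BASELINE_BYTES = 2_000_000
    HEADROOM = 1.5

    def check_artifact(path: str) -> None:
        """Refuse to propagate a generated file that has ballooned past
        its historical size envelope, and confirm it still parses."""
        size = Path(path).stat().st_size
        limit = int(BASELINE_BYTES * HEADROOM)
        if size > limit:
            sys.exit(f"refusing to deploy {path}: {size} bytes > limit {limit}")
        json.loads(Path(path).read_text())  # fail fast on a corrupt file
        print(f"{path}: {size} bytes, within limit; ok to deploy")

    if __name__ == "__main__":
        check_artifact(sys.argv[1])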

Cloudflare's disruption illustrates how quickly a single provider's issue cascades into widespread exposure. Many organizations don't fully realize how tightly their systems are coupled to third-party services, or how quickly availability and security concerns align when those services falter. Centralization delivers convenience and protection, but it also creates single points of failure that amplify the fallout.

You can't avoid these dependencies, but you can understand them. Continuous visibility, configuration awareness, and clarity about where infrastructure is fragile are now essential parts of modern resilience. Whether it's an outage or an attack, the question remains the same: where are you exposed when the platforms you rely on stumble?
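Understanding those dependencies can start small. The sketch below, assuming the dnspython package and a placeholder domain list, walks CNAME chains to reveal which CDN or SaaS provider each of your hostnames actually resolves through, often the quickest way to see how much of your footprint funnels into one vendor.

    import dns.resolver  # third-party: pip install dnspython

    DOMAINS = ["www.example.com", "api.example.com"]  # placeholder inventory

    def provider_chain(domain: str) -> list[str]:
        """Follow the CNAME chain; the targets (e.g. *.cloudflare.net,
        *.cloudfront.net) usually name the provider you depend on."""
        chain, name = [], domain
        for _ in range(5):  # bound the walk in case of a CNAME loop
            try:
                answer = dns.resolver.resolve(name, "CNAME")
            except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                break  # chain ended: name resolves directly to an address
            name = str(answer[0].target).rstrip(".")
            chain.append(name)
        return chain

    for d in DOMAINS:
        print(d, "->", provider_chain(d) or "no CNAME (direct A/AAAA record)")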

From Hindsight to Foresight

Exposure isn't limited to external providers; it often exists inside the enterprise itself. Security teams are routinely told to assume breaches have already occurred and to focus on detection, investigation, and recovery. Yet postmortems frequently reveal that organizations already owned tools capable of preventing the incident; they simply weren't configured properly or maintained.

Rather than relying on hindsight, the industry must turn foresight into action. That means shifting security "left of boom" and helping businesses optimize the investments they've already made. The challenge lies in understanding complex environments and overcoming governance issues that hinder proactive defense.

Why Exposure Management Matters

Exposure management has become essential because modern organizations face an ever-expanding attack surface. Businesses now operate across on-premises systems, cloud platforms, mobile devices, and third-party services, each introducing potential entry points for attackers. The sheer scale and diversity of these environments make it increasingly difficult to maintain visibility and control using traditional methods.

Older vulnerability management approaches, which focused narrowly on patching known flaws, are no longer sufficient. Exposure management goes further by continuously monitoring misconfigurations, identity gaps, and overlooked assets. This broader scope ensures that risks beyond simple vulnerabilities are identified and addressed, helping organizations stay ahead of adversaries who exploit weaknesses quickly.
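As one concrete example of an identity gap that a patch-centric program never surfaces, consider stale credentials. The sketch below assumes boto3, AWS credentials with read access to IAM, and an illustrative 90-day rotation policy; it flags active access keys that have outlived that policy.

    from datetime import datetime, timedelta, timezone
    import boto3  # third-party: AWS SDK for Python

    MAX_KEY_AGE = timedelta(days=90)  # illustrative rotation policy

    def stale_access_keys():
        """Yield (user, key id, age in days) for active IAM access keys
        older than the rotation threshold."""
        iam = boto3.client("iam")
        now = datetime.now(timezone.utc)
        for page in iam.get_paginator("list_users").paginate():
            for user in page["Users"]:
                resp = iam.list_access_keys(UserName=user["UserName"])
                for key in resp["AccessKeyMetadata"]:
                    age = now - key["CreateDate"]
                    if key["Status"] == "Active" and age > MAX_KEY_AGE:
                        yield user["UserName"], key["AccessKeyId"], age.days

    for user, key_id, days in stale_access_keys():
        print(f"{user}: access key {key_id} active for {days} days -- rotate")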

Another problem is the complexity of today's tool environments. Security architects manage sprawling stacks, often with dozens of point solutions added over time. It's not unusual for a single organization to run 75 different tools, each with constant patches and updates. In 2024 alone, we counted 380 new features released across the top 20 security tools. This fragmentation leaves valuable data locked away and risks hidden from view. With each tool offering multiple independent controls, the number of possible configurations is overwhelming. Teams risk burnout, mistakes, or paralysis, leaving businesses exposed despite heavy investment.

Visibility compounds the problem. Tools often operate in silos, preventing data from being shared to strengthen defenses. Ownership issues add another layer: identity and access management (IAM) may sit with IT, limiting security architects' insight into configurations or licensing and eroding their authority to request changes for security reasons. Tracking coverage and configurations becomes a never-ending task, akin to painting the Golden Gate Bridge. Reporting meaningful risk reduction to boards in such fragmented environments is equally difficult.

The result is a reactive posture that lags behind adversaries. To shift toward prevention, organizations must maximize value from existing tools, gain timely visibility into exposures, and establish measurable risk reduction strategies. Exposure assessment platforms (EAPs) help by identifying misconfigurations, but they often lack context, prioritization, and actionable fixes.
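The missing context is often expressible as a simple model. The sketch below shows one way to rank raw findings by weighing severity against internet exposure and asset criticality; the fields and weights are illustrative assumptions, not an established scoring standard.

    from dataclasses import dataclass

    @dataclass
    class Exposure:
        finding: str
        severity: float          # 0-10, e.g. a scanner or CVSS base score
        internet_facing: bool
        asset_criticality: int   # 1 (low) to 5 (crown jewels)
        fix: str                 # the actionable remediation step

    def risk_score(e: Exposure) -> float:
        # In this toy model, internet exposure doubles urgency.
        return e.severity * (2.0 if e.internet_facing else 1.0) * e.asset_criticality

    findings = [
        Exposure("MFA not enforced on admin role", 8.0, True, 5, "enable MFA policy"),
        Exposure("audit logging off on test app", 3.0, False, 2, "turn on audit logs"),
    ]
    for e in sorted(findings, key=risk_score, reverse=True):
        print(f"{risk_score(e):6.1f}  {e.finding} -> {e.fix}")

Even a toy model like this forces the conversation that raw EAP output alone doesn't: which assets matter most, and which exposures are actually reachable.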

The Role of Agentic AI

Agentic AI introduces a new approach to managing exposures. Unlike static reporting, AI can contextualize exposures, prioritize them by risk, and generate actionable tickets specifying how and where fixes should occur. In advanced environments, AI agents could even implement staged fixes automatically, leaving teams to validate before deployment.
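To make that concrete, here is a minimal sketch of the ticket-generation step with the model call stubbed out. The llm_complete() function, the Ticket fields, and the staging-first workflow are hypothetical stand-ins, not any specific product's API.

    from dataclasses import dataclass

    @dataclass
    class Ticket:
        title: str
        system: str      # where the fix lands (console, policy, config)
        change: str      # the concrete setting to modify
        validation: str  # how a human confirms it before rollout

    def llm_complete(prompt: str) -> str:
        """Hypothetical model call -- swap in your provider's SDK.
        Stubbed so the sketch runs standalone."""
        return f"(model-written rationale for: {prompt[:60]}...)"

    def draft_ticket(finding: str, system: str, fix: str) -> Ticket:
        """Turn a prioritized exposure into a ticket that says what to
        change, where, and how to validate it before deployment."""
        rationale = llm_complete(
            f"Explain the risk of '{finding}' in {system} and justify: {fix}"
        )
        return Ticket(
            title=f"[Exposure] {finding}",
            system=system,
            change=fix,
            validation=f"Apply in staging first; reviewer notes: {rationale}",
        )

    print(draft_ticket("MFA not enforced on admin role", "identity provider",
                       "enable MFA policy for all admin accounts"))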

By addressing tool sprawl and configuration drift, this approach enables continuous monitoring and proactive remediation. It helps security architects move beyond surfacing risks to actually resolving them, ensuring systems remain in an optimal state even as they evolve.
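Configuration drift itself is cheap to detect once settings are exported somewhere comparable. A minimal sketch, assuming each tool's settings can be exported as JSON and that the file paths shown are placeholders:

    import hashlib
    import json
    from pathlib import Path

    def normalized_digest(path: str) -> str:
        """Hash a canonicalized view of the config so key order and
        whitespace changes don't count as drift."""
        data = json.loads(Path(path).read_text())
        canonical = json.dumps(data, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode()).hexdigest()

    baseline = normalized_digest("baseline/email-gateway.json")
    current = normalized_digest("export/email-gateway.json")
    if baseline != current:
        print("drift detected: live settings diverge from approved baseline")
    else:
        print("no drift: configuration matches baseline")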

Prevention Reclaimed

The next era of cybersecurity must leverage existing investments more intelligently. Prevention should once again be central, not overshadowed by detection and response. Agentic AI provides a pathway to proactive defense, helping organizations harden systems, close exploitable gaps, and stem the tide of preventable breaches.

Cloudflare's outage may have been caused by a simple misconfiguration, but its ripple effects demonstrate the scale of exposure in today's interconnected world. Organizations that embrace exposure management will be better positioned to withstand both routine anomalies and deliberate attacks, turning foresight into resilience.

Garrett Hamilton is CEO and Co-Founder of Reach Security

