
Staying Ahead in the Game of Distributed Denial of Service Attacks

Steve Persch
Pantheon

Too much traffic can crash a website. I learned that hard lesson relatively early in my web development career. Web teams recoil in horror when they realize their own success has crashed their site. Remember when Coinbase spent millions of dollars on a Super Bowl commercial that successfully drove traffic to their site and app? Their infrastructure got run over.

That stampede of traffic is even more horrifying when it's part of a malicious denial of service attack. I count my lucky stars that in my previous jobs of building and running sites I never went head-to-head with a determined attacker. I would have lost. Most web teams would if they were playing the game of Distributed Denial of Service (DDoS) on their own.

These attacks are becoming more common, more sophisticated, and increasingly tied to ransomware-style demands. So it's no wonder that the threat of DDoS remains one of the many things that keep IT and marketing leaders up at night.

There's no one easy fix for DDoS attacks. DDoS isn't a bug — it's more like a never-ending game. But to understand the nature of the problem, we need to start from the basics.

Opening Play: Simple Servers Serving Websites

The game can start simple enough. Web teams put websites on the internet with servers. Whether those servers are in a basement office, on some virtual machine, or part of shared hosting, they are largely good enough to send out some HTTP responses.
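That opening play can be sketched in a few lines. The snippet below is an illustrative Python sketch, not anything from a real hosting stack: a bare HTTP server that cheerfully answers every request it receives, which is the entire defense posture at this stage.

```python
# The "opening play": a simple server serving a website,
# with no rate limiting, no firewall, no protection at all.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Site(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<h1>Hello from a simple server</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging for this sketch.
        pass

def serve(port=0):
    """Bind to localhost (port 0 = ephemeral); caller runs serve_forever()."""
    return HTTPServer(("127.0.0.1", port), Site)
```

Every request gets a response, which is exactly the property an attacker exploits: the server does work for anyone who asks.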

Now it's the hackers' turn. Even though those servers are intended only to serve HTTP responses, they are still computers on the internet. So they're vulnerable to all kinds of asymmetrical networking attacks, where packets that cost the attacker almost nothing to send exhaust the server's resources. How about a UDP flood? Game over.

Add a Firewall

Well, the game is never over. Get a firewall. That can keep out network-level attacks and you can block specific IP addresses. You're winning the game now!
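Conceptually, the firewall's two moves look like this. The sketch below is a toy application-level model (the `Firewall` class and its rate numbers are hypothetical); real firewalls do this in the kernel or on dedicated hardware, but the logic of an IP blocklist plus per-source rate limiting is the same.

```python
# Toy model of a firewall layer: drop traffic from known-bad IPs and
# throttle any single source that sends too fast (token bucket).
# Illustrative only -- not a real defense against a distributed attack.
import time

class Firewall:
    def __init__(self, rate=10, burst=20):
        self.blocked = set()   # explicitly blocked IPs
        self.rate = rate       # refill: allowed requests per second
        self.burst = burst     # short-term burst allowance
        self.buckets = {}      # ip -> (tokens_remaining, last_timestamp)

    def block(self, ip):
        self.blocked.add(ip)

    def allow(self, ip, now=None):
        """Return True if a request from this IP should be let through."""
        if ip in self.blocked:
            return False
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(ip, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1:
            self.buckets[ip] = (tokens, now)
            return False
        self.buckets[ip] = (tokens - 1, now)
        return True
```

Note the built-in weakness: throttling is per source. A *distributed* attack spreads the flood across thousands of IPs, so each one stays politely under the limit.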

Wait a second … Do you even want to be playing this cat-and-mouse game? While you're thinking about that, the hackers move on to attacking your DNS provider.

Looking for Weak Links

As you're scouring logs and blocking IPs, you're also on the phone with your DNS provider asking what's going on over there. Maybe it's time to switch DNS providers? Ugh, that'll eat up a ton of time and effort while yielding zero visible value to your stakeholders. They're asking for actual improvements to the site that they can see, not the swapping of invisible building blocks.

That tension propelled the growth of extremely large services like Cloudflare, which consolidated some of these concerns. Lots of sites moved their DNS there to get the free CDN service. Cloudflare withstood low-level network attacks that could overwhelm even a firewalled website through sheer volume. Still, the internet never sleeps. Hackers don't seem to sleep much either, because they keep finding new ways to slide through the protections of these platforms.

The Street Finds Its Own Uses For Things

Many of the technological advances of the 2010s that seemed so useful for benevolent purposes like browser automation are also really handy for generating fake traffic that seems real. The same capacity to script browsers that we leverage for visual regression testing can trick a CDN into thinking that fake traffic is real traffic. The street finds its own uses for things, as the writer William Gibson once put it.
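A toy example of why this works for attackers: simple bot filters key on telltale client signatures, and a scripted real browser presents none of them. The heuristic below is hypothetical and deliberately naive, but it shows the gap.

```python
# Naive bot filtering: reject clients whose User-Agent matches an obvious
# scripting tool. A real browser driven by an automation framework
# (Playwright, Selenium, etc.) sends a genuine browser User-Agent and
# executes JavaScript, so a check like this waves it straight through.
SCRIPT_SIGNATURES = ("curl", "wget", "python-requests", "go-http-client")

def looks_like_bot(user_agent):
    """Return True if the User-Agent matches a known scripting tool."""
    ua = user_agent.lower()
    return any(sig in ua for sig in SCRIPT_SIGNATURES)
```

This is why modern bot detection has to look at behavior (timing, navigation patterns, challenge responses) rather than self-reported identity: the identity a scripted browser reports is, for all practical purposes, true.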

When the attack arrives as a swarm of real web browsers making legitimate-seeming requests, the current state of the art is either an expensive WAF solution, which still requires ongoing maintenance, or an "I'm under attack" mode that keeps your site up by adding a CAPTCHA test. Over the long term, though, most teams won't accept a CDN layer that is supposed to make the site faster instead slowing the overall experience by forcing real visitors through a kind of virtual security line. Ugh.
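That security line can be modeled in a few lines. This is a hypothetical sketch (the `clearance_token` helper and signing key are invented for illustration, not any vendor's implementation): a visitor's first request gets a challenge page instead of content, and only a request carrying a valid clearance token sees the actual site.

```python
# Minimal model of an "I'm under attack" mode: serve an interstitial
# challenge until the visitor holds a signed clearance token. Every real
# visitor pays the interstitial cost at least once -- the "security line."
import hmac
import hashlib

SECRET = b"rotate-me"  # hypothetical signing key

def clearance_token(visitor_id):
    """Token a visitor earns by passing the challenge (CAPTCHA/JS check)."""
    return hmac.new(SECRET, visitor_id.encode(), hashlib.sha256).hexdigest()

def handle(visitor_id, token):
    """Serve content only to visitors with a valid clearance token."""
    if token is not None and hmac.compare_digest(token, clearance_token(visitor_id)):
        return ("200 OK", "the actual page")
    # No valid clearance yet: challenge instead of content.
    return ("403 Challenge", "prove you are human")
```

The trade-off is baked in: the gate that stops fake browsers also delays every legitimate one, which is why teams tolerate this mode during an attack but not as a steady state.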

The Winning Move Is Not to Play Alone

Back to the same question from earlier. Do you want to be playing this game at all?

I don't personally want to play the game, so the key is to identify a platform solution that accelerates and eases management by taking whole classes of problems off the table. Any given web team could do the toil of updating PHP versions, but modern DDoS attacks have grown sophisticated enough that holding the line requires a sizable platform WebOps team.

Steve Persch is Director of Developer Experience at Pantheon

