
6 Lessons the ACLU.org Web Team Can Learn from Online Retailers

Tammy Everts

If the events of this past weekend are anything to go by, the American Civil Liberties Union (ACLU) and similar organizations should take their cues from the retail industry.

I’m not talking about tactics when it comes to responding in court to the current administration's executive orders. I’m talking about how they manage online traffic.

Consider what happened this weekend after President Trump signed a controversial executive order. As a result of this action, the ACLU received more than $24 million in online donations – seven times the amount it receives in an entire year – from 356,306 people. Not surprisingly, this traffic spike caused the site to briefly go down.

Traffic surges can happen when you least expect them. Political events can have a huge impact on people’s online behavior, as the ACLU’s website outage clearly demonstrates. While the ACLU might reasonably have expected online donations to increase in light of recent events, they had no way of knowing they were about to experience the largest surge in donations in their 97-year history.

Online retailers know they need to ensure they’re “always open” in today's 24/7 on-demand world. Fortunately, modern load testing and performance monitoring technologies let site owners load test at massive scale via the cloud to confirm their sites can handle immense traffic, and give them unprecedented visibility into the real-time speed and availability of those sites.
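To make the idea concrete, here is a minimal, self-contained sketch (not any particular vendor's product) of what a small-scale load test looks like: fire concurrent requests, count successes, and measure latency. It spins up a throwaway local server purely so the example runs anywhere; a real test would target a staging copy of your site, and cloud platforms do the same thing with far more virtual users.

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Throwaway local server so the sketch is self-contained.
class _Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), _Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def hit(_):
    """One virtual user: request the page, record success and latency."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
        ok = resp.status == 200
    return ok, time.perf_counter() - start

# 20 concurrent "users" making 200 requests in total.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(hit, range(200)))
server.shutdown()

successes = sum(ok for ok, _ in results)
latencies = sorted(t for _, t in results)
p95 = latencies[int(len(latencies) * 0.95) - 1]
print(f"{successes}/200 succeeded, p95 latency {p95 * 1000:.1f} ms")
```

The interesting part of any load test is not the happy path but what happens when you crank `max_workers` well past what the site was provisioned for, which is exactly the situation the ACLU found itself in.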

Here are 6 tips that organizations like the ACLU can borrow from the retail world:

1. Remember that every site fails eventually

There’s no such thing as 100% uptime. When a site goes down, it isn’t because someone forgot to flip a switch. It’s because modern websites are complex mechanisms. Any complex system will fail eventually.

2. Accept that you can’t performance test for every contingency

Performance tests can verify that your site will stay up — but only under the conditions defined within your test parameters. You can’t test every variation of every parameter. When real-world loads differ from what you modeled in your tests, you may have problems.

3. Know that the past is not a predictor of the future

Load patterns are unpredictable. Yes, you can and should take past load patterns into account when preparing your site, but this won’t cover you for every contingency. Just because you experienced certain load patterns for one event doesn’t mean that load pattern will be consistent for other events.

Over time — even very short periods of time — your site changes, your visitors change and your visitors’ behavior changes. There are no constants. Surprises happen.

4. See failure as an opportunity

Outages suck. There’s no sugarcoating that. But if you must experience one, then you should learn everything you can from it. Make it your mission to get to the root cause of the problem and develop new testing processes to prevent the issue from recurring.

5. Embrace continuous improvement

The web is a dynamic space, which means none of us ever get to stand back, dust off our hands and exclaim: “There! It’s finished!” Instead we build, we evolve, we fail (sometimes), we learn, we evolve some more, and so on. We value small evolutionary steps — adding new tools and processes gradually — over huge overnight changes. We recognize that rigorous performance testing and monitoring don’t guarantee 100% uptime, but they do allow us to fail faster and iterate sooner.

6. Be aware that page slowdowns can cause as much — or more — damage to your business as outages

Outages are stressful, but they’re not the worst performance issue that most sites face. If a site goes down, you’ll probably just try it again a few hours later. Most of us accept that these blips happen. But if a site is consistently slow, people could eventually stop visiting altogether.
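Slowdowns hide in averages, which is why practitioners track percentiles instead: a handful of very slow page loads can leave the mean looking respectable while a meaningful share of visitors suffers. A tiny sketch, using hypothetical page-load samples, of the nearest-rank percentile calculation behind metrics like "p95":

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# Hypothetical page-load times in ms, as real-user monitoring might collect.
samples = [180, 220, 190, 2100, 240, 210, 5600, 200, 230, 260]
print(percentile(samples, 50), percentile(samples, 95))  # median vs. tail
```

Here the median visitor sees a fast page while the 95th-percentile visitor waits seconds, and it is the tail, not the average, that drives people away.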

Ultimately, the ACLU and similar organizations need to realize that the Trump administration will make the news cycle a perpetual “Cyber Monday.” They will need to be prepared. Following the example of online retailers will help them be ready for their moment in the spotlight.

Tammy Everts is Director of Content and Editorial at SOASTA.

