6 Lessons the ACLU.org Web Team Can Learn from Online Retailers

Tammy Everts

If the events of this past weekend are anything to go by, the American Civil Liberties Union (ACLU) and similar organizations should take their cues from the retail industry.

I’m not talking about tactics when it comes to responding in court to the current administration's executive orders. I’m talking about how they manage online traffic.

Consider what happened this weekend after President Trump signed a controversial executive order. As a result of this action, the ACLU received more than $24 million in online donations – seven times the amount it receives in an entire year – from 356,306 people. Not surprisingly, this traffic spike caused the site to briefly go down.

Traffic surges can happen when you least expect them. Political events can have a huge impact on people’s online behavior, as the ACLU’s website outage clearly demonstrates. While the ACLU might reasonably have expected online donations to increase in light of recent events, they had no way of knowing they were about to experience the largest surge in donations in their 97-year history.

Online retailers know they need to be “always open” in today's 24/7 on-demand world. Fortunately, thanks to modern load testing and performance monitoring technologies, site owners can load test at massive scale via the cloud to verify their sites can handle immense traffic, and they can gain unprecedented visibility into the real-time speed and availability of their sites.
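To make the idea concrete, here is a minimal sketch of a load test in plain Python. This is not any particular vendor's tooling, and the target URL and user counts are made-up placeholders; real cloud load tests run at far larger scale, but the shape is the same: many simulated visitors, response timings, and a latency percentile plus failure count at the end.

```python
import concurrent.futures
import time
import urllib.request

# Hypothetical target and scale -- placeholders, not a real endpoint.
TARGET_URL = "https://example.org/donate"
CONCURRENT_USERS = 50
REQUESTS_PER_USER = 10

def percentile(sorted_values, pct):
    """Return the pct-th percentile of an ascending-sorted, non-empty list."""
    index = min(len(sorted_values) - 1, int(len(sorted_values) * pct / 100))
    return sorted_values[index]

def simulate_user(_):
    """One simulated visitor: issue requests, recording timings and failures."""
    timings, failures = [], 0
    for _ in range(REQUESTS_PER_USER):
        start = time.monotonic()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                resp.read()
            timings.append(time.monotonic() - start)
        except OSError:
            failures += 1
    return timings, failures

def run_load_test():
    """Run all simulated users concurrently; report p95 latency and failures."""
    all_timings, total_failures = [], 0
    with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        for timings, failures in pool.map(simulate_user, range(CONCURRENT_USERS)):
            all_timings.extend(timings)
            total_failures += failures
    all_timings.sort()
    if all_timings:
        print(f"p95 response time: {percentile(all_timings, 95):.3f}s")
    print(f"failed requests: {total_failures}")
```

Call `run_load_test()` against a staging environment, never production during business hours. The p95 number matters more than the average: it tells you what your slowest real visitors experience under load.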

Here are six tips that organizations like the ACLU can borrow from the retail world:

1. Remember that every site fails eventually

There’s no such thing as 100% uptime. When a site goes down, it isn’t because someone forgot to flip a switch. It’s because modern websites are complex mechanisms. Any complex system will fail eventually.

2. Accept that you can’t performance test for every contingency

Performance tests can give you confidence that your site won’t go down — as long as it’s subjected to the same conditions defined within your test parameters. But you can’t test every single variation of every single parameter. When real-world loads differ from what you modeled in your tests, you may have problems.

3. Know that the past is not a predictor of the future

Load patterns are unpredictable. Yes, you can and should take past load patterns into account when preparing your site, but this won’t cover you for every contingency. Just because you experienced a certain load pattern for one event doesn’t mean that pattern will hold for other events.

Over time — even very short periods of time — your site changes, your visitors change and your visitors’ behavior changes. There are no constants. Surprises happen.

4. See failure as an opportunity

Outages suck. There’s no sugarcoating that. But if you must experience one, then you should learn everything you can from it. Make it your mission to get to the root cause of the problem and develop new testing processes to prevent the issue from recurring.

5. Embrace continuous improvement

The web is a dynamic space, which means none of us ever get to stand back, dust off our hands and exclaim: “There! It’s finished!” Instead we build, we evolve, we fail (sometimes), we learn, and we evolve some more. We value small evolutionary steps, adding new tools and processes gradually rather than making huge overnight changes. We recognize that rigorous performance testing and monitoring don’t guarantee 100% uptime, but they do allow us to fail faster and iterate sooner.

6. Be aware that page slowdowns can cause as much — or more — damage to your business as outages

Outages are stressful, but they’re not the worst performance issue that most sites face. If a site goes down, most visitors will simply try again a few hours later. Most of us accept that these blips happen. But if a site is consistently slow, people could eventually stop visiting altogether.
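This is why raw uptime can be a misleading health metric. The sketch below (a simplified illustration, with an assumed three-second "too slow" threshold that you would tune for your own audience) contrasts uptime with the share of checks that were actually fast enough:

```python
SLOW_THRESHOLD_SECONDS = 3.0  # assumed tolerance; tune to your audience

def classify(sample_seconds, slow_threshold=SLOW_THRESHOLD_SECONDS):
    """Classify one synthetic-check sample; None means the request failed."""
    if sample_seconds is None:
        return "down"
    return "slow" if sample_seconds > slow_threshold else "ok"

def health_summary(samples, slow_threshold=SLOW_THRESHOLD_SECONDS):
    """Contrast raw uptime with the share of checks that were actually fast."""
    counts = {"ok": 0, "slow": 0, "down": 0}
    for sample in samples:
        counts[classify(sample, slow_threshold)] += 1
    total = len(samples) or 1
    return {
        "uptime": (counts["ok"] + counts["slow"]) / total,  # site responded
        "fast_enough": counts["ok"] / total,                # responded quickly
    }
```

A site can report 99% uptime while only half its pages load fast enough to keep visitors around. Tracking both numbers keeps slowdowns from hiding behind a healthy-looking uptime figure.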

Ultimately, the ACLU and similar organizations need to realize that the Trump administration will make the news cycle a perpetual “Cyber Monday.” They will need to be prepared. Following the example of online retailers will help them be ready for their moment in the spotlight.

Tammy Everts is Director of Content and Editorial at SOASTA.

