
Super Bowl Sunday 2013 – The Lights Weren’t the Only Thing That Went Out!

Stephen Pierzchala

For advertisers, the Super Bowl represents much more than a football game. It’s the pinnacle event of the advertising industry’s year, where companies spend millions of dollars on 30-second and one-minute ad blocks, plus millions more on creating the ads themselves. Most companies try to link their TV ads to their online properties to create a “second screen” correlation, relying on consumers to simultaneously use online media on tablets and phones to augment the traditional TV brand interaction. Remember, if there’s one thing we learned from Thanksgiving 2012, it’s that the “couch commerce” trend – consumers browsing on tablets from the comfort of their couches – is alive and well.

Now, imagine spending all that money only to send your online site visitors to a competitor’s site. Seems absurd, doesn’t it? Well, that’s the risk several companies – including Coca-Cola, Audi and Universal Pictures – faced this year, as their online properties did not perform nearly as well as those of competitors – including Pepsi, Mercedes-Benz and Paramount – who were also advertising during the Super Bowl.

For example, for the period from 5PM EST until 11PM EST on Sunday, February 3, Audi had an average response time of 3.487 seconds, while Mercedes-Benz had an average response time of 3.060 seconds. This difference may seem minuscule, but in a world where a few hundred lost milliseconds is all it takes to route your customers to the competition, it’s huge.

In another example, Coca-Cola had an average response time of 9.894 seconds, while Pepsi’s was 5.744 seconds. When you consider Google’s recent finding – that slowing search response times by just four-tenths of a second reduces the number of searches by eight million per day – you can bet that many users didn’t stick around on the Coca-Cola site.

While some advertisers’ sites experienced minimal or no impact from increased traffic volumes, several (like Coca-Cola) had increases in page load times and a noticeable effect on availability, with a few sites crashing during the critical peak period. The measurement results from the Compuware network in the periods leading up to, and during, the Super Bowl showed some clear winners and losers in page load time.

Events like the Super Bowl require high-frequency measurements, so we set our locations to collect data every five minutes to catch every variation in performance, no matter how fleeting.
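A measurement loop of this kind can be sketched in a few lines of Python. Everything below is illustrative – the `collect` and `summarize` names are ours, and the probe is a stand-in. A production monitor would issue real page fetches from multiple geographic locations on the five-minute schedule described above.

```python
import statistics
import time

def collect(probe, samples, interval_s=0.0):
    """Call `probe` repeatedly, recording each response time in seconds.

    In production, `interval_s` would be 300 (five minutes) and `probe`
    would fetch the monitored page from a remote measurement agent.
    """
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        probe()
        latencies.append(time.perf_counter() - start)
        time.sleep(interval_s)
    return latencies

def summarize(latencies):
    """Reduce raw samples to the average and worst-case figures cited above."""
    return {"avg": statistics.mean(latencies), "worst": max(latencies)}
```

A fast sampling cadence matters because a short availability dip between two hourly measurements would simply never be recorded.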

For the period from 5PM EST until 11PM EST on Sunday, February 3, the top and bottom three sites were:

Top Three Performers

1. Go Daddy

2. Paramount

3. Lincoln Motor Cars

Bottom Three Performers

1. Doritos

2. Coca-Cola

3. Universal Pictures

Our analysis found that the issues causing the performance lags aligned almost perfectly with those Compuware sees during every major online event, including the following.

You’re Not Alone

The Super Bowl is often referred to as the perfect storm for web performance – a six-hour window, with the spotlight on your company for 30-60 seconds. However, the halo effect sees traffic to your site increase astronomically for the entire six hours while people prepare for your big unveiling, or wish to view your commercial again.

But yours isn’t the only company doing this. And many (if not all) of the infrastructure components – datacenters, CDNs, ad providers, web analytics and video streaming platforms – you use are also being used by other companies advertising during the Super Bowl.

So, even if you have tested your entire site to what you think is your peak traffic volume – and beyond – remember that these shared services are all running at their maximum volume during the Super Bowl. All of the testing you did on your site can be undone by a third party that can’t handle a peak load coming from two, three, or more customers simultaneously.

Lesson: Verify that your third-party services can effectively handle the maximum load from all of their customers all at once without degrading the performance of any of them.
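One way to probe that scenario is a small concurrency harness that simulates several customers hitting the same shared service at once, so latencies under combined load can be compared against a single-customer baseline. The sketch below is illustrative – the function names are ours, and `call` stands in for a real request to the shared dependency.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def stress_shared_service(call, customers, requests_each):
    """Fire requests from `customers` concurrent clients and collect latencies."""
    def one_customer(_):
        times = []
        for _ in range(requests_each):
            start = time.perf_counter()
            call()  # in a real test: an HTTP request to the shared service
            times.append(time.perf_counter() - start)
        return times

    latencies = []
    with ThreadPoolExecutor(max_workers=customers) as pool:
        for times in pool.map(one_customer, range(customers)):
            latencies.extend(times)
    return latencies
```

Running this once with `customers=1` and again with the number of tenants you expect to share the service on game day makes any degradation under combined load visible before the event, not during it.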

Lose a Few Pounds

The performance burden doesn’t fall on third parties alone. Companies must also take steps to focus on the most important aspect of a Super Bowl Sunday online campaign – getting people to your site. Sometimes that means making compromises, perhaps streamlining delivery a little more than you otherwise would.

While the total amount of content is a key indicator of potential trouble – yes, big pages do tend to load more slowly than small pages – Compuware data showed that two of the three slowest sites drew content from more than 20 hosts and had more than 100 objects on the page, with the slowest having over 200! This complexity increases the likelihood that something will go wrong – and when it does, it can lead to a serious degradation in performance.
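Those two numbers – host count and object count – are easy to audit before the event. A rough sketch, with thresholds taken from the figures above and names of our own choosing:

```python
from urllib.parse import urlparse

def audit_page(resource_urls, max_hosts=20, max_objects=100):
    """Flag pages whose complexity matches the slowest Super Bowl sites."""
    hosts = {urlparse(u).netloc for u in resource_urls}
    flags = []
    if len(hosts) > max_hosts:
        flags.append("too many hosts")
    if len(resource_urls) > max_objects:
        flags.append("too many objects")
    return {"objects": len(resource_urls), "hosts": len(hosts), "flags": flags}
```

The list of resource URLs can be pulled from any waterfall chart or HAR file; the point is to know before game day whether your page sits in the danger zone the slow sites occupied.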

Lesson: While having a cool, interactive site for customers to come to is a big win for a massive marketing event like the Super Bowl, keeping a laser focus on delivering a successful experience sometimes means leaving stuff out.

Have a Plan B (and Plan C, and Plan D ...)

If you plan for a problem, when it happens, it’s not a problem.

If your selected CDN becomes congested due to a massive traffic influx that was not expected, have the ability to dynamically balance load between CDN providers.

If an ad service or messaging platform begins to choke your site, have the ability to easily disable the offending hosts.

If your cloud provider begins to rain on your parade, transfer load to the secondary provider you set up “just in case.”

If your dynamic page creation begins to crash your application servers, switch to a static HTML version that can be more easily delivered by your infrastructure.

And, if you have fallen back to Plan J, have an amusing error message that allows your customers to participate in the failure of your success. Heck, create a Twitter hashtag that says “#[your company]GoesBoom” and realize that any publicity is better than not being talked about at all.

Lesson: Don’t put all your eggs in one basket. Test your plans. Then plan again. And test again. Wash, rinse, repeat until you have caught 95 percent of the possible scenarios. Then, have a plan to handle the remaining five percent.
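Chained together, these contingencies amount to an ordered fallback list: try each option in preference order and degrade gracefully to static HTML – or the amusing error page – at the end. A minimal sketch, with hypothetical provider names:

```python
def first_healthy(options, is_healthy, last_resort="static-error-page"):
    """Walk an ordered fallback chain and return the first healthy option."""
    for option in options:
        if is_healthy(option):
            return option
    return last_resort  # Plan J: the amusing error page

# Hypothetical preference order for serving the campaign page.
FALLBACK_CHAIN = ["primary-cdn", "secondary-cdn", "cloud-origin", "static-html"]
```

In practice `is_healthy` would be backed by the same high-frequency measurements discussed earlier, so the switch happens on observed performance rather than on a support call.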

Now what?

So, what have we learned from Super Bowl 2013? During periods of peak traffic and high online interest, the performance issues that sites encounter are remarkably consistent and predictable. By taking some preventative steps and having an emergency response plan, most of these issues can be planned for and responded to when (not if) they appear.

So, when your company goes into the next big event, be it the Super Bowl or that one-day online sale, planning for the three items listed here will likely make you better prepared to bask in the success of the moment.

Stephen Pierzchala, Technology Strategist, Compuware APM's Center of Excellence.
