For advertisers, the Super Bowl represents much more than a football game. It’s the pinnacle event of the advertising industry’s year, where companies spend millions of dollars on 30-second and one-minute ad slots, and millions more on creating the ads themselves. Most companies try to link their TV ads to their online properties to create a “second screen” experience, relying on consumers to simultaneously use online media on tablets and phones to augment the traditional TV brand interaction. Remember, if there’s one thing we learned from Thanksgiving 2012, it’s that the “couch commerce” trend – consumers browsing on tablets from the comfort of their couches – is alive and well.
Now, imagine spending all that money only to send your online site visitors to a competitor’s site. Seems absurd, doesn’t it? Well, that’s the risk several companies – including Coca-Cola, Audi and Universal Pictures – faced this year, as their online properties did not perform nearly as well as those of competitors – including Pepsi, Mercedes-Benz and Paramount – who were also advertising during the Super Bowl.
For example, for the period from 5PM EST until 11PM EST on Sunday, February 3, Audi had an average response time of 3.487 seconds, while Mercedes-Benz had an average response time of 3.060 seconds. This difference may seem minuscule, but in a world where a few lost milliseconds are all it takes to route your customers to the competition, it’s huge.
In another example, Coca-Cola had an average response time of 9.894 seconds, while Pepsi’s was 5.744 seconds. When you consider Google’s recent finding – that slowing search response times by just four-tenths of a second reduces the number of searches by eight million per day – you can bet that many users didn’t stick around on the Coca-Cola site.
While some advertisers’ sites experienced minimal or no impact from increased traffic volumes, several (like Coca-Cola) had increases in page load times and a noticeable effect on availability, with a few sites crashing during the critical peak period. The measurement results from the Compuware network in the periods leading up to, and during, the Super Bowl showed some clear winners and losers in page load time.
Events like the Super Bowl require high-frequency measurements, so we set our locations to collect data every five minutes to catch every variation in performance, no matter how fleeting.
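To make the idea of high-frequency synthetic measurement concrete, here is a minimal sketch of a probe loop in Python. It is illustrative only, not the Compuware tooling: the `fetch` callable stands in for whatever actually retrieves the page (for example, a wrapper around `urllib.request.urlopen`), and is injected so the sketch stays self-contained.

```python
import time

def measure_load_time(fetch, url):
    """Time a single page fetch, in seconds.

    `fetch` is any callable that retrieves the page; injecting it
    keeps this sketch independent of a specific HTTP library.
    """
    start = time.monotonic()
    fetch(url)
    return time.monotonic() - start

def run_probe(fetch, url, samples, interval=300.0):
    """Collect `samples` timings, sleeping `interval` seconds between
    them (300 seconds = the five-minute cadence described above)."""
    timings = []
    for _ in range(samples):
        timings.append(measure_load_time(fetch, url))
        time.sleep(interval)
    return timings

def average(timings):
    """Average response time across the collected samples."""
    return sum(timings) / len(timings)
```

Averages like the 3.487-second and 3.060-second figures quoted above fall out of exactly this kind of repeated sampling over the measurement window.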
For the period from 5PM EST until 11PM EST on Sunday, February 3, the top and bottom three sites were:
Top Three Performers
1. Go Daddy
2. Paramount
3. Lincoln Motor Cars
Bottom Three Performers
1. Doritos
2. Coca-Cola
3. Universal Pictures
What we found through our analysis is that the issues causing performance lags aligned almost perfectly with those Compuware finds during every major online event, including the following.
You’re Not Alone
The Super Bowl is often referred to as the perfect storm for web performance – a six-hour window, with the spotlight on your company for 30-60 seconds. However, the halo effect sees traffic to your site increase astronomically for the entire six hours while people prepare for your big unveiling, or wish to view your commercial again.
But your company isn’t the only one doing this. And many (if not all) of the infrastructure components you use – datacenters, CDNs, ad providers, web analytics and video streaming platforms – are also being used by other companies advertising during the Super Bowl.
So, even if you have tested your entire site to what you think is your peak traffic volume – and beyond – remember that these shared services are all running at their maximum volume during the Super Bowl. All of the testing you did on your site can be undone by a third party that can’t handle the peak load coming from two, three, or more customers simultaneously.
Lesson: Verify that your third-party services can effectively handle the maximum load from all of their customers all at once without degrading the performance of any of them.
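One way to sanity-check a shared service before the big day is a simple concurrent load probe. The sketch below is a hypothetical illustration, not a production load-testing tool: `call` stands in for a real request to the third-party endpoint and is injected so the example is self-contained.

```python
from concurrent.futures import ThreadPoolExecutor

def hammer(call, n_requests, concurrency):
    """Fire `n_requests` calls with up to `concurrency` in flight at
    once, returning results (or exceptions) in submission order.

    `call` is a stand-in for one request to the shared service.
    """
    results = []
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(call) for _ in range(n_requests)]
        for f in futures:
            try:
                results.append(f.result())
            except Exception as exc:  # a call that failed under load
                results.append(exc)
    return results

def error_rate(results):
    """Fraction of calls that raised rather than returned."""
    failures = sum(1 for r in results if isinstance(r, Exception))
    return failures / len(results)
```

Run it at the concurrency you expect from your own traffic, then again at a multiple of that to approximate several customers hitting the same service at once – a rising error rate at higher concurrency is exactly the shared-infrastructure risk described above.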
Lose a Few Pounds
Responsibility for performance doesn’t rest solely with the third parties, though. It also relies on companies taking steps to focus on the most important aspect of Super Bowl Sunday online campaigns – getting people to your site. Sometimes this means making compromises, perhaps streamlining delivery a little more than you otherwise would.
While the total amount of content is a key indicator of potential trouble – yes, big pages do tend to load more slowly than small pages – Compuware data showed that two of the three slowest sites drew content from more than 20 hosts and had more than 100 objects on the page, with the slowest having over 200! This complexity increases the likelihood that something will go wrong, and when it does, it can lead to a serious degradation in performance.
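Auditing a page against these thresholds is straightforward once you have the list of resources it loads. The following sketch is illustrative: `audit_page` is a hypothetical helper, and the default limits simply mirror the 20-host / 100-object figures mentioned above.

```python
from urllib.parse import urlparse

def audit_page(resource_urls, host_limit=20, object_limit=100):
    """Summarize a page's composition from its list of resource URLs.

    The default limits echo the thresholds discussed above (20+ hosts
    and 100+ objects correlated with the slowest sites); tune to taste.
    """
    hosts = {urlparse(u).netloc for u in resource_urls}
    return {
        "objects": len(resource_urls),
        "hosts": len(hosts),
        "too_many_hosts": len(hosts) > host_limit,
        "too_many_objects": len(resource_urls) > object_limit,
    }
```

Feeding in the resource list from a waterfall chart (or a HAR file) gives a quick read on whether a page has crept past the complexity levels that tripped up this year’s slowest sites.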
Lesson: While having a cool, interactive site for customers to come to is a big win for a massive marketing event like the Super Bowl, keeping a laser focus on delivering a successful experience sometimes means leaving stuff out.
Have a Plan B (and Plan C, and Plan D ...)
If you plan for a problem, when it happens, it’s not a problem.
If your selected CDN becomes congested due to a massive traffic influx that was not expected, have the ability to dynamically balance load between CDN providers.
If an ad service or messaging platform begins to choke your site, have the ability to easily disable the offending hosts.
If your cloud provider begins to rain on your parade, transfer load to the secondary provider you set up “just in case.”
If your dynamic page creation begins to crash your application servers, switch to a static HTML version that can be more easily delivered by your infrastructure.
And, if you have fallen back to Plan J, have an amusing error message that allows your customers to participate in the failure of your success. Heck, create a Twitter hashtag that says “#[your company]GoesBoom” and realize that any publicity is better than not being talked about at all.
Lesson: Don’t put all your eggs in one basket. Test your plans. Then plan again. And test again. Wash, rinse, repeat until you have caught 95 percent of the possible scenarios. Then, have a plan to handle the remaining five percent.
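The fallback chain described above – dynamic page, then static snapshot, then an amusing error page – can be sketched as a simple ordered-tier handler. This is a hypothetical illustration, not anyone’s production code: all three arguments are assumed callables standing in for a dynamic renderer, a pre-generated static HTML snapshot, and the last-resort “Plan J” page.

```python
def serve_with_fallbacks(render_dynamic, serve_static_snapshot, error_page):
    """Try each delivery strategy in order; the first success wins.

    `render_dynamic`, `serve_static_snapshot` and `error_page` are
    hypothetical callables for the tiers described in the text.
    """
    for plan in (render_dynamic, serve_static_snapshot):
        try:
            return plan()
        except Exception:
            continue  # this tier is down; fall through to the next
    return error_page()  # Plan J: fail with a smile
```

The same pattern generalizes to CDN failover or disabling a misbehaving ad host: an ordered list of options, a health check or exception as the trigger, and a guaranteed terminal response so the customer never sees a raw crash.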
Now what?
So, what have we learned from Super Bowl 2013? We have learned that during periods of peak traffic and high online interest, the performance issues that sites encounter are very consistent and predictable. By taking some preventative steps, and by having an emergency response plan, most performance issues can be predicted, planned for and responded to when (not if) they appear.
So, when your company goes into the next big event, be it the Super Bowl or that one-day online sale, planning for the three items listed here will likely make you better prepared to bask in the success of the moment.
Stephen Pierzchala, Technology Strategist, Compuware APM's Center of Excellence.