Avoiding Digital Disaster: How the US Census Can Deliver a Smooth Digital Experience
April 23, 2020

Tal Weiss
OverOps


Most of us have felt the effects of an application failing on us at some point. Sometimes, the impact a software outage has on us is trivial — we are temporarily unable to book a flight for vacation, we are forced to call a customer service rep rather than using an online portal, or we're unable to watch an on-demand movie. While frustrating, these experiences are often inconvenient rather than catastrophic.

But what about when the application failure has much larger implications? For example, the site you use to refill important prescriptions malfunctions and delays access to medication, your banking app fails on the day rent is due, or, in the recent case of the Iowa caucus app, a software outage affects your ability to participate in our nation's democratic process.

As more organizations go digital, including critical government agencies, reliable software is paramount. The recent Iowa caucus voting app failure is a case study in software testing and delivery mistakes, with many key learnings for those tasked with building and managing mission-critical applications — like the US Census Bureau.


The 2020 US Census, which began collecting responses on April 1, has the potential to significantly influence fair political representation, the allocation of vital federal funds, and more. This year marks the first time in our nation's history that participants have the option to fill out an online questionnaire rather than mailing in their responses. While this is an exciting digital milestone for the US Census Bureau, experience tells us that the course of digital transformation rarely does run smooth.

In order to ensure the accuracy of results that will impact our nation for the next decade, the census software needs to operate seamlessly. But considering what happened with the Iowa caucus, how confident are we that the census app isn’t going to fail?

Below are two key takeaways from the Iowa Caucus app disaster that should serve as a valuable lesson not only for the IT team supporting the US Census Bureau, but for any engineering team tasked with delivering a mission-critical application with minimal room for error.

Takeaway #1: Test Early. Test Often.

There is a rule in software known as the Rule of Ten: the cost of finding and fixing a software defect increases roughly tenfold at each successive stage of the software delivery lifecycle. When pushing out an important new release that will be highly trafficked and highly visible, the more proactive you can be about preventing errors from reaching production, the better. In the case of the US Census app, responses are only collected within a short window, so any unexpected production issues could waste precious minutes or hours.
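The Rule of Ten can be made concrete with a back-of-the-envelope calculation. The stage names and the base cost below are illustrative assumptions, not figures from any study — the point is simply how quickly the multiplier compounds:

```python
# Illustrative sketch of the Rule of Ten: fixing a defect costs ~10x more
# at each later stage of the delivery lifecycle. Stage names and the base
# cost are hypothetical, chosen only to show the compounding effect.
STAGES = ["requirements", "design", "implementation", "testing", "production"]

def relative_fix_cost(stage: str, base_cost: float = 1.0) -> float:
    """Cost of fixing a defect found at `stage`, relative to catching it at the start."""
    return base_cost * 10 ** STAGES.index(stage)

for stage in STAGES:
    print(f"{stage:>14}: {relative_fix_cost(stage):>8.0f}x")
```

Under these assumptions, a bug that costs one unit of effort to fix during requirements costs ten thousand units if it is first discovered in production.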

To address this, many organizations are starting to understand the merits of adopting a "Shift Left" approach to quality. By increasing quality measures taken in the development and testing phases of software delivery, you can significantly reduce the odds of production issues. Of course, you can’t fully anticipate all potential production failure scenarios, but the more testing you do up front, the more confidence you’ll have in your release, rather than relying solely on production monitoring.
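What "shift left" looks like in practice can be as simple as validating inputs with unit tests that run on every commit, so malformed data is rejected in development rather than discovered in production. The function and its bounds below are entirely hypothetical, sketched for a census-style form field:

```python
# Hypothetical example of shifting quality left: a small input validator
# exercised by unit tests during development, long before release.
def validate_household_size(value: str) -> int:
    """Parse a questionnaire field; raise ValueError on implausible input."""
    n = int(value)           # rejects non-numeric answers ("four", "") early
    if not 1 <= n <= 50:     # arbitrary sanity bound, for illustration only
        raise ValueError(f"implausible household size: {n}")
    return n

# Tests like these run on every commit, not after launch:
assert validate_household_size("4") == 4
for bad in ("0", "999", "four"):
    try:
        validate_household_size(bad)
    except ValueError:
        pass  # expected: bad input is caught here, not by a user
    else:
        raise AssertionError(f"accepted bad input: {bad!r}")
```

None of this guarantees a flawless launch, but each check moved into development is one less failure mode left for production to discover.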

Takeaway #2: Automate, Automate, Automate

As writing code remains a very human-driven process (AI has yet to pass a coding "Turing test"), companies will need to find ways to automate how they test, deliver and operate their software to ensure speed and reliability.

A decade ago, when Test-Driven Development (TDD) was just starting to gain traction, it promised to improve productivity and quality. Since then, release cycles have shortened, CI/CD is no longer a buzzword, and companies that develop pipeline automation products have matured enough to IPO.

Building on the points above, testing is more relevant than ever, but when moving fast is table stakes, relying on traditional tests alone in your shift left strategy is no longer an option. Building in automated quality gates and feedback loops will allow for thorough, fast testing that doesn’t hold up release timelines. This can be done by leveraging a variety of automated testing methods within your CI/CD pipeline, such as static and dynamic code analysis.
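A quality gate of this kind is, at its core, just an ordered set of checks that must all pass before a release proceeds. Here is a minimal sketch; the specific tools named in the example gate (pytest, flake8) are illustrative choices, and any test runner or static analyzer would slot in the same way:

```python
# Minimal sketch of an automated quality gate for a CI/CD pipeline:
# run each check in order, and block the release on the first failure.
import subprocess
import sys

def run_gate(checks: list) -> bool:
    """Return True only if every check (a command line) exits with status 0."""
    for cmd in checks:
        if subprocess.run(cmd).returncode != 0:
            print(f"quality gate failed: {' '.join(cmd)}", file=sys.stderr)
            return False
    return True

# Example gate combining unit tests and static analysis (tool names are
# illustrative; substitute whatever your pipeline uses):
CHECKS = [
    [sys.executable, "-m", "pytest", "-q"],
    [sys.executable, "-m", "flake8", "."],
]
# In the pipeline, a non-zero exit blocks the deploy:
#     sys.exit(0 if run_gate(CHECKS) else 1)
```

Because the gate runs automatically on every change, it delivers the fast feedback loop described above without a human holding up the release timeline.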

Further, even with a sophisticated testing pipeline, the occasional error will inevitably reach production, and your ability to detect, troubleshoot and recover quickly will make all the difference to your users. Developers are great at writing code but inherently limited in their ability to foresee where it will break down later. For this reason, not to mention the massive operational data volume and noise which high-scale environments produce, the task of detecting software issues and gathering information on them in production should be automated. The 30% of time and resources traditionally allocated to manual identification, routing and reproduction of issues during the software delivery lifecycle will most likely become a thing of the past.
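One common way to tame that operational noise is to fingerprint errors, so that a flood of identical stack traces collapses into a single actionable issue with a count. Production error-monitoring tools do far more than this; the sketch below shows only the core deduplication idea, with a hypothetical `flaky` function standing in for real application code:

```python
# Hedged sketch of automated error detection: hash an exception's type and
# code locations (ignoring variable message data) so repeated occurrences of
# the same failure produce one issue, not thousands of alerts.
import hashlib
import traceback
from collections import Counter

issue_counts = Counter()  # fingerprint -> number of occurrences

def fingerprint(exc: BaseException) -> str:
    """Stable hash of the exception type plus its stack frame locations."""
    frames = traceback.extract_tb(exc.__traceback__)
    key = type(exc).__name__ + "|".join(
        f"{f.filename}:{f.name}:{f.lineno}" for f in frames
    )
    return hashlib.sha1(key.encode()).hexdigest()[:12]

def record(exc: BaseException) -> str:
    """Register one occurrence of an error; returns its fingerprint."""
    fp = fingerprint(exc)
    issue_counts[fp] += 1
    return fp

# The same failure raised repeatedly yields one fingerprint, not many alerts:
def flaky():
    raise ValueError("bad input")  # hypothetical failing code path

for _ in range(3):
    try:
        flaky()
    except ValueError as e:
        record(e)
```

After the loop, `issue_counts` holds a single fingerprint with a count of three — the kind of signal-over-noise reduction that makes automated detection workable at high scale.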

As the US Census Bureau takes this high-stakes step toward innovation, it’s my sincere hope that they’ve been able to put some of these methodologies and tools in place. The more proactive you can be about ensuring quality, and the more tasks you can automate throughout the process, the less you will have to fear when it’s time to put your software to the true test — your users.

Tal Weiss is Co-Founder and CTO of OverOps
