Even as the federal government appears to have untangled the worst of HealthCare.gov's problems, the finger pointing and agonizing about what went wrong with the Affordable Care Act's centerpiece website are unlikely to die down any time soon.
The political dimensions aside, there's a persistent curiosity about how such a high-profile project could have failed so spectacularly. It was possibly the world's most important IT project of the moment, yet it performed as if it were rolled out the door without so much as a cursory kick of the tires.
That's because it probably was – and that's far from unusual.
A recent LinkedIn/Empirix survey found that at most companies and public agencies, pre-deployment testing is half-hearted at best and non-existent at worst. Public agencies and private companies alike have abysmal records for testing customer-facing IT projects, such as customer service and e-commerce portals.
This is despite the importance that most organizations place on creating a consistently positive customer experience; almost 60 percent of the contact center executives interviewed for Dimension Data's 2012 Contact Center Benchmarking Report named customer satisfaction as their most important metric.
It's not that IT doesn't test anything before they roll out a project. It's that they don't test the system the way customers will interact with it. They test the individual components — web interfaces, fulfillment systems, Interactive Voice Response (IVR) systems, call routing systems — but not the system as a whole under real-world loads. This almost guarantees that customers will encounter problems that will reflect on the company or agency.
Empirix and LinkedIn surveyed more than 1,000 executives and managers in a variety of industries. The survey asked how companies:
- tested new customer contact technology before it was implemented
- evaluated the voice quality of customer/service agent calls
- monitored overall contact center performance to maintain post-implementation quality
The results are a series of contradictions. At first glance, pre-deployment testing rates look high — 80 percent or better — but the numbers are much less impressive than they appear.
In truth, the overall picture isn't good. More than 80 percent of respondents said their companies do not test contact center technology under real-world conditions before go-live. They do some form of testing, but it isn't comprehensive enough to reveal all of the issues that can affect customer service.
They're a little better about testing upgrades to existing systems: 82 percent reported testing upgrades. There's grade inflation in that number, however: 62 percent of respondents rely on manual testing methods, which are comparatively inaccurate.
While better than not testing at all, manual testing does not accurately reflect real-world conditions. Manual tests usually occur during off-peak times, which do not accurately predict how systems will work at full capacity. Because manual testing is difficult to repeat, it is usually done only once or twice. That makes it harder to pinpoint problems — and ensure they are resolved — even if they are detected pre-deployment.
Another 20 percent don't test new technology at all; they just "pray that it works" (14 percent) or react to customer complaints (3 percent). The remaining 3 percent are counted with the non-testers because they test only major upgrades. That reasoning is flawed: a small change can erode performance or cause a system crash just as easily as a major upgrade. In fact, small upgrades can create performance drags that are harder to pinpoint because, unlike large upgrades, they do not get the IT organization's full attention.
Only about 18 percent of respondents said that their companies use automated testing for all contact center upgrades. That's the second-largest block of users after the manual testing group, but a low overall percentage of the total. These companies use testing software to evaluate the performance of new functionality, equipment, applications and system upgrades under realistic traffic conditions. This approach yields the most accurate results and rapid understanding of where and why problems are occurring.
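The idea behind automated testing under realistic traffic is simple: drive the system with a controlled, repeatable level of concurrency and measure how response times hold up. The sketch below illustrates the concept only — `handle_call()` is a hypothetical stand-in for the system under test (in practice an automated tool would drive a real IVR or web endpoint), not any vendor's actual tooling.

```python
# Minimal load-test sketch. handle_call() is a hypothetical stand-in for
# the service under test; a real test would drive an actual endpoint.
import concurrent.futures
import random
import time

def handle_call(caller_id: int) -> float:
    """Simulate one customer interaction; return its latency in seconds."""
    delay = random.uniform(0.01, 0.05)  # simulated processing time
    time.sleep(delay)
    return delay

def run_load_test(concurrent_callers: int, total_calls: int) -> dict:
    """Drive the system at a fixed concurrency and summarize latencies."""
    latencies = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrent_callers) as pool:
        for latency in pool.map(handle_call, range(total_calls)):
            latencies.append(latency)
    latencies.sort()
    return {
        "calls": len(latencies),
        "p50": latencies[len(latencies) // 2],        # median latency
        "p95": latencies[int(len(latencies) * 0.95)], # tail latency
    }

results = run_load_test(concurrent_callers=20, total_calls=200)
print(results)
```

Because the run is scripted, it can be repeated after every fix at the same concurrency — which is exactly what manual, one-off testing cannot do.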
The Spoken Afterthought
HealthCare.gov's problems highlighted shortcomings in web portal testing, but voice applications face similar neglect. Indeed, when the President advised people to apply for healthcare by phone instead, many of the call centers set up to field applications also had trouble handling the spike in caller traffic.
Voice quality can be a significant drag on short- and long-term call center ROI. Contact center agents who must ask customers to repeat themselves because of poor voice connections — or worse, ask customers to hang up and call in again — are less productive than those who can hear customers clearly. In the long term, repetition and multiple calls erode customer satisfaction levels.
The vast majority of professionals who responded to the LinkedIn/Empirix survey (68 percent) reported that their companies never monitor contact center voice quality. Only 14 percent monitor voice quality continuously, while the remaining 17 percent monitor periodically on a daily, weekly or monthly basis.
This failure carries heavy risks. Globally, 79 percent of consumers replying to a Customer Experience Foundation survey said they experienced poor voice quality on contact center calls. Almost as many — 68 percent — said they will hang up if they experience poor voice quality. If they are calling about a new product or service, they will likely call a competing company instead.
Between misdirected efforts and testing rates like these, it's no wonder people aren't surprised when a major initiative like online healthcare enrollment goes off the rails, or customers calling a contact center get funneled down a blind alley in the IVR system. Customers who run into obstacles like those are on a fast track to becoming former customers.
Testing and performance monitoring can effectively stem those losses. Businesses that test and monitor their customer service systems (CSS) are better able to achieve maximum ROI by identifying and remediating problems quickly. An end-to-end monitoring solution provides organizations with deep visibility into complex customer service technology environments, reducing the time it takes to understand the source of a problem — and fix it — before customers ever notice the glitch.
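End-to-end monitoring boils down to repeatedly running synthetic transactions through every component a customer touches and alerting when one fails or slows down. A minimal sketch of that loop, assuming hypothetical probe functions for each component (a real deployment would exercise the actual portal, IVR, and routing systems):

```python
# Sketch of end-to-end monitoring with failure and latency alerts.
# The probes passed in are hypothetical stand-ins for real components.
import time

LATENCY_THRESHOLD = 2.0  # seconds; alert if a probe takes longer than this

def check_service(probe) -> tuple:
    """Run one synthetic transaction and time it."""
    start = time.monotonic()
    ok = probe()
    return ok, time.monotonic() - start

def monitor_once(probes: dict) -> list:
    """Probe every component in the customer path; return alert messages."""
    alerts = []
    for name, probe in probes.items():
        ok, elapsed = check_service(probe)
        if not ok:
            alerts.append(f"{name}: transaction failed")
        elif elapsed > LATENCY_THRESHOLD:
            alerts.append(f"{name}: slow response ({elapsed:.2f}s)")
    return alerts

# Example with stand-in probes: the web portal responds, the IVR does not.
alerts = monitor_once({
    "web_portal": lambda: True,
    "ivr": lambda: False,
})
print(alerts)
```

Run on a schedule, a loop like this surfaces the failing component by name, which is what shortens the time from symptom to fix.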
ABOUT Matthew Ainsworth
Matthew Ainsworth is Senior Vice President, Americas and Japan at Empirix. He has 15 years of experience in contact centers and unified communications solutions.