The Deeper Problem Under HealthCare.gov's Rocky Start
January 08, 2014
Matthew Ainsworth

Even as the federal government appears to have untangled the worst of HealthCare.gov's problems, the finger pointing and agonizing about what went wrong with the Affordable Care Act's centerpiece website are unlikely to die down any time soon.

The political dimensions aside, there's a persistent curiosity about how such a high-profile project could have failed so spectacularly. It was possibly the world's most important IT project of the moment, yet it performed as if it were rolled out the door without so much as a cursory kick of the tires.

That's because it probably was – and that's far from unusual.

A recent LinkedIn/Empirix survey found that at most companies and public agencies, pre-deployment testing is half-hearted at best and non-existent at worst. Public agencies and private companies alike have abysmal records for testing customer-facing IT projects, such as customer service and e-commerce portals.

This is despite the importance that most organizations place on creating a consistently positive customer experience; almost 60 percent of the contact center executives interviewed for Dimension Data's 2012 Contact Center Benchmarking Report named customer satisfaction as their most important metric.

It's not that IT teams don't test anything before rolling out a project. It's that they don't test the system the way customers will interact with it. They test the individual components — web interfaces, fulfillment systems, interactive voice response (IVR) systems, call routing systems — but not the system as a whole under real-world loads. This almost guarantees that customers will encounter problems that reflect poorly on the company or agency.
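As a rough illustration of the difference, an automated end-to-end check drives concurrent traffic through the running stack instead of exercising one component at a time. The sketch below is illustrative only — the toy HTTP service stands in for a real customer-facing system, and the concurrency figures are assumptions, not any vendor's tool. It times a burst of simultaneous requests and reports the success rate:

```python
# Minimal sketch of testing a whole service under concurrent load,
# using only the Python standard library. The in-process HTTP server
# is a stand-in for a real customer-facing system.
import http.server
import threading
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging during the test

# Start the stand-in service on an OS-assigned port.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def one_call(_):
    """One simulated customer interaction; True on a clean 200 response."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

# Fire a burst of concurrent "customers" at the live system.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(one_call, range(200)))

success_rate = sum(results) / len(results)
print(f"success rate under load: {success_rate:.0%}")
server.shutdown()
```

A real harness would replay representative click and call flows at peak-hour volumes, but even this shape of test — whole system, concurrent load, repeatable on every build — catches the class of failure that component-by-component checks miss.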

Empirix and LinkedIn surveyed more than 1,000 executives and managers in a variety of industries. The survey asked how companies:

- tested new customer contact technology before it was implemented

- evaluated the voice quality of customer/service agent calls

- monitored overall contact center performance to maintain post-implementation quality

The results are a series of contradictions. While it appears from overall numbers that pre-deployment testing rates are high — 80 percent or better — the numbers are actually much less impressive than they appear.

In truth, the overall picture isn't good. More than 80 percent of respondents said their companies do not test contact center technology under real-world conditions before go-live. They do some form of testing, but it's not comprehensive enough to reveal all of the issues that can affect customer service.

They're a little bit better about testing upgrades to existing systems: 82 percent reported testing upgrades. There's grade inflation in this number, however. Sixty-two percent use comparatively inaccurate manual testing methods.

While better than not testing at all, manual testing does not accurately reflect real-world conditions. Manual tests usually occur during off-peak times, which do not accurately predict how systems will work at full capacity. Because manual testing is difficult to repeat, it is usually done only once or twice. That makes it harder to pinpoint problems — and ensure they are resolved — even if they are detected pre-deployment.

Another 20 percent don't test new technology at all; they just "pray that it works" (14 percent) or react to customer complaints (3 percent). The remaining 3 percent are grouped with the non-testers because they only test major upgrades — a flawed rationale, since a small change can erode performance or cause a system crash just as easily as a major upgrade. In fact, small upgrades can create performance drags that are harder to pinpoint because, unlike large upgrades, they do not have the IT organization's full attention.

Only about 18 percent of respondents said that their companies use automated testing for all contact center upgrades. That's the second-largest block of users after the manual testing group, but a low overall percentage of the total. These companies use testing software to evaluate the performance of new functionality, equipment, applications and system upgrades under realistic traffic conditions. This approach yields the most accurate results and rapid understanding of where and why problems are occurring.

The Spoken Afterthought

HealthCare.gov's problems highlighted shortcomings in web portal testing, but voice applications face similar neglect. Indeed, when the President advised people to apply for healthcare by phone, many of the call centers set up to field applicants also had trouble handling the spike in caller traffic.

Voice quality can be a significant drag on short- and long-term call center ROI. Contact center agents who must ask customers to repeat themselves because of poor voice connections — or worse, ask customers to hang up and call in again — are less productive than those who can hear customers clearly. In the long term, repetition and multiple calls erode customer satisfaction levels.

The vast majority of professionals who responded to the LinkedIn/Empirix survey — 68 percent — reported that their companies never monitor contact center voice quality. Only 14 percent continuously monitor voice quality, while the remaining 17 percent monitor periodically, on a daily, weekly or monthly basis.

This failure carries heavy risks. Globally, 79 percent of consumers replying to a Customer Experience Foundation survey said they experienced poor voice quality on contact center calls. Almost as many — 68 percent — said they will hang up if they experience poor voice quality. If they are calling about a new product or service, they will likely call a competing company instead.

Between misdirected efforts and testing rates like these, it's no wonder people aren't surprised when a major initiative like online healthcare enrollment goes off the rails, or customers calling a contact center get funneled down a blind alley in the IVR system. Customers who run into obstacles like those are on a fast track to becoming former customers.

Testing and performance monitoring can effectively stem those losses. Businesses that test and monitor their customer service systems are better able to maximize ROI by identifying and remediating problems quickly. An end-to-end monitoring solution gives organizations deep visibility into complex customer service technology environments, reducing the time it takes to find the source of a problem — and fix it — before customers ever notice the glitch.
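One small piece of such a monitoring setup is the alerting logic that decides when degraded performance is sustained rather than a one-off spike. The sketch below is a hypothetical rolling-window check — the 500 ms threshold and the window size are illustrative assumptions, not figures from any product:

```python
# Sketch: alert on sustained latency degradation, not one-off spikes.
# Threshold and window size are illustrative assumptions.
from collections import deque

class LatencyMonitor:
    def __init__(self, window=5, threshold_ms=500.0):
        self.samples = deque(maxlen=window)   # rolling window of recent probes
        self.threshold_ms = threshold_ms

    def record(self, latency_ms):
        """Store one synthetic-transaction latency measurement."""
        self.samples.append(latency_ms)

    def should_alert(self):
        """Alert only when a full window's average breaches the threshold."""
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough data yet to judge a trend
        return sum(self.samples) / len(self.samples) > self.threshold_ms

mon = LatencyMonitor(window=3, threshold_ms=500.0)
for ms in (120, 130, 900, 950, 980):
    mon.record(ms)
print(mon.should_alert())  # last three samples average well above 500 ms
```

Averaging over a window is a deliberate choice: it suppresses noise from a single slow probe while still surfacing the sustained slowdowns customers actually feel.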

ABOUT Matthew Ainsworth

Matthew Ainsworth is Senior Vice President, Americas and Japan at Empirix. He has 15 years of experience in contact centers and unified communications solutions.

Related Links:

www.empirix.com
