The Deeper Problem Under's Rocky Start
January 08, 2014
Matthew Ainsworth

Even as the federal government appears to have untangled the worst of's problems, the finger-pointing and agonizing over what went wrong with the Affordable Care Act's centerpiece website are unlikely to die down any time soon.

The political dimensions aside, there's a persistent curiosity about how such a high-profile project could have failed so spectacularly. It was possibly the world's most important IT project of the moment, yet it performed as if it were rolled out the door without so much as a cursory kick of the tires.

That's because it probably was – and that's far from unusual.

A recent LinkedIn/Empirix survey found that at most companies and public agencies, pre-deployment testing is half-hearted at best and non-existent at worst. Public agencies and private companies alike have abysmal records for testing customer-facing IT projects, such as customer service and e-commerce portals.

This is despite the importance that most organizations place on creating a consistently positive customer experience; almost 60 percent of the contact center executives interviewed for Dimension Data's 2012 Contact Center Benchmarking Report named customer satisfaction as their most important metric.

It's not that IT teams don't test anything before rolling out a project. It's that they don't test the system the way customers will interact with it. They test the individual components — web interfaces, fulfillment systems, interactive voice response (IVR) systems, call routing systems — but not the system as a whole under real-world loads. This almost guarantees that customers will encounter problems that will reflect on the company or agency.
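To make that gap concrete, here is a minimal, hypothetical Python sketch: a backend that passes a one-request component check but fails visibly once realistic concurrent load arrives. The capacity limit and timings are invented for illustration only.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

# Toy model: each component passes its own isolated test, but the shared
# backend (hypothetical, for illustration) can only serve a few requests
# at once -- the kind of limit only an end-to-end load test exposes.
class SharedBackend:
    def __init__(self, capacity=5):
        self.capacity = capacity
        self._lock = threading.Lock()
        self._in_flight = 0

    def handle_request(self):
        with self._lock:
            if self._in_flight >= self.capacity:
                return "error: overloaded"  # what real customers would see
            self._in_flight += 1
        time.sleep(0.2)                     # simulate real work
        with self._lock:
            self._in_flight -= 1
        return "ok"

backend = SharedBackend(capacity=5)

# Component-style check: one request at a time always succeeds.
single_result = backend.handle_request()

# End-to-end load test: 50 customers arrive at once.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(lambda _: backend.handle_request(), range(50)))
errors = results.count("error: overloaded")
```

Run in isolation, every component check passes; run under simultaneous load, a large share of requests fail — which is exactly the difference between testing the parts and testing the whole.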

Empirix and LinkedIn surveyed more than 1,000 executives and managers in a variety of industries. The survey asked how companies:

- tested new customer contact technology before it was implemented

- evaluated the voice quality of customer/service agent calls

- monitored overall contact center performance to maintain post-implementation quality

The results are a series of contradictions. While it appears from overall numbers that pre-deployment testing rates are high — 80 percent or better — the numbers are actually much less impressive than they appear.

In truth, the overall picture isn't good. More than 80 percent of respondents said their companies do not test contact center technology under real-world conditions before go-live. They do some form of testing, but it's not comprehensive enough to reveal all of the issues that can affect customer service.

They're a little better about testing upgrades to existing systems: 82 percent reported testing upgrades. There's grade inflation in that number, however: 62 percent rely on comparatively inaccurate manual testing methods.

While better than not testing at all, manual testing does not accurately reflect real-world conditions. Manual tests usually occur during off-peak times, which do not accurately predict how systems will work at full capacity. Because manual testing is difficult to repeat, it is usually done only once or twice. That makes it harder to pinpoint problems — and ensure they are resolved — even if they are detected pre-deployment.

Another 20 percent don't test new technology at all; they just "pray that it works" (14 percent) or react to customer complaints (3 percent). The remaining 3 percent only test major upgrades, and they're grouped with the non-testers because of the obvious flaw in that reasoning: a small change can erode performance or cause a system crash just as easily as a major upgrade. In fact, small upgrades can create performance drags that are harder to pinpoint because, unlike large upgrades, they do not have the IT organization's full attention.

Only about 18 percent of respondents said that their companies use automated testing for all contact center upgrades. That's the second-largest block of users after the manual testing group, but a low overall percentage of the total. These companies use testing software to evaluate the performance of new functionality, equipment, applications and system upgrades under realistic traffic conditions. This approach yields the most accurate results and rapid understanding of where and why problems are occurring.
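As a rough illustration of why scripted tests are easier to diagnose with than manual ones, here is a hypothetical sketch: a seeded scenario that replays exactly the same call mix on every run and reports latency percentiles. `run_scenario` and the `place_call` interface are invented stand-ins for whatever would drive the real IVR or portal.

```python
import random
import statistics
import time

def run_scenario(place_call, n_calls=100, seed=42):
    """Replay the same pseudo-random mix of call types on every run.
    Because the scenario is seeded, a regression between two runs can be
    traced to the system under test, not to a different test. (Sketch
    only -- `place_call` stands in for the real traffic driver.)"""
    rng = random.Random(seed)
    mix, latencies = [], []
    for _ in range(n_calls):
        call_type = rng.choice(["balance", "transfer", "agent"])
        mix.append(call_type)
        start = time.perf_counter()
        place_call(call_type)
        latencies.append(time.perf_counter() - start)
    return mix, {
        "p50": statistics.median(latencies),
        "p95": statistics.quantiles(latencies, n=20)[-1],
    }
```

Because two runs with the same seed issue an identical call sequence, any change in the latency percentiles points at the system, not the test — the repeatability that manual, off-peak testing cannot offer.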

The Spoken Afterthought's problems highlighted shortcomings in web portal testing, but voice applications face similar neglect. Indeed, when the President advised people to use their phone to call and apply for healthcare, many of the call centers set up to field applicants also had trouble handling the spike in caller traffic.

Voice quality can be a significant drag on short- and long-term call center ROI. Contact center agents who must ask customers to repeat themselves because of poor voice connections — or worse, ask customers to hang up and call in again — are less productive than those who can hear customers clearly. In the long term, repetition and multiple calls erode customer satisfaction levels.
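Automated voice-quality monitors typically reduce network measurements to a single score. The sketch below estimates a 1–5 MOS (mean opinion score) from packet loss and delay using a heavily simplified form of the ITU-T G.107 E-model; the loss and delay penalty coefficients here are illustrative assumptions, not the standard's full computation.

```python
def mos_estimate(packet_loss_pct=0.0, one_way_delay_ms=0.0):
    """Rough voice-quality score (1-5 MOS) from network stats, using a
    heavily simplified version of the ITU-T G.107 E-model. The penalty
    coefficients are illustrative assumptions, not the standard's tables."""
    r = 93.2                          # base R-factor (G.711, no impairments)
    r -= 2.5 * packet_loss_pct        # assumed packet-loss penalty
    if one_way_delay_ms > 150:        # delay starts to hurt past ~150 ms
        r -= 0.1 * (one_way_delay_ms - 150)
    r = max(0.0, min(100.0, r))
    # Standard R-to-MOS mapping
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)
```

A monitor built along these lines could score every call as it happens and alert when the estimate drops below an agreed threshold — say, 3.5 — instead of waiting for customers to complain about having to repeat themselves.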

The vast majority of professionals who responded to the LinkedIn/Empirix survey — 68 percent — reported that their companies never monitor contact center voice quality. Only 14 percent monitor voice quality continuously, while the remaining 17 percent monitor periodically on a daily, weekly or monthly basis.

This failure carries heavy risks. Globally, 79 percent of consumers replying to a Customer Experience Foundation survey said they experienced poor voice quality on contact center calls. Almost as many — 68 percent — said they will hang up if they experience poor voice quality. If they are calling about a new product or service, they will likely call a competing company instead.

Between misdirected efforts and testing rates like these, it's no wonder people aren't surprised when a major initiative like online healthcare enrollment goes off the rails, or customers calling a contact center get funneled down a blind alley in the IVR system. Customers who run into obstacles like those are on a fast track to becoming former customers.

Testing and performance monitoring can effectively stem those losses. Businesses that test and monitor their customer service systems are better able to achieve maximum ROI on those systems by identifying and remediating problems quickly. An end-to-end monitoring solution gives organizations deep visibility into complex customer service technology environments, reducing the time it takes to understand the source of a problem — and fix it — before customers ever notice the glitch.
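One common building block of end-to-end monitoring is a synthetic probe that replays a scripted customer journey and reports which step failed or blew its latency budget. The sketch below is a minimal, hypothetical version; the step names and budget are placeholders, not any particular product's API.

```python
import time

def run_probe(steps, latency_budget_s=2.0):
    """Drive a scripted end-to-end transaction and report, per step,
    whether it worked and stayed within the latency budget. Step names
    and the budget are illustrative; a real monitor would replay actual
    portal or IVR flows on a schedule."""
    report = []
    for name, step in steps:
        start = time.perf_counter()
        try:
            step()
            ok = True
        except Exception:
            ok = False                      # step raised: flag it, keep going
        elapsed = time.perf_counter() - start
        report.append({"step": name, "ok": ok,
                       "within_budget": elapsed <= latency_budget_s})
    return report
```

Run every few minutes against production, a probe like this pinpoints *which* step of the journey broke — login, search, or enrollment — rather than just reporting that "the site is slow."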

ABOUT Matthew Ainsworth

Matthew Ainsworth is Senior Vice President, Americas and Japan at Empirix. He has 15 years of experience in contact centers and unified communications solutions.

