The Deeper Problem Under HealthCare.gov's Rocky Start
January 08, 2014
Matthew Ainsworth

Even as the federal government appears to have untangled the worst of HealthCare.gov's problems, the finger pointing and agonizing about what went wrong with the Affordable Care Act's centerpiece website are unlikely to die down any time soon.

The political dimensions aside, there's a persistent curiosity about how such a high-profile project could have failed so spectacularly. It was possibly the world's most important IT project of the moment, yet it performed as if it were rolled out the door without so much as a cursory kick of the tires.

That's because it probably was – and that's far from unusual.

A recent LinkedIn/Empirix survey found that at most companies and public agencies, pre-deployment testing is half-hearted at best and non-existent at worst. Public agencies and private companies alike have abysmal records for testing customer-facing IT projects, such as customer service and e-commerce portals.

This is despite the importance that most organizations place on creating a consistently positive customer experience; almost 60 percent of the contact center executives interviewed for Dimension Data's 2012 Contact Center Benchmarking Report named customer satisfaction as their most important metric.

It's not that IT teams don't test anything before they roll out a project; it's that they don't test the system the way customers will actually interact with it. They test the individual components, such as web interfaces, fulfillment systems, Interactive Voice Response (IVR) systems and call routing, but not the system as a whole under real-world loads. This all but guarantees that customers will encounter problems, and those problems will reflect on the company or agency.
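To make the distinction concrete, here is a minimal sketch of journey-level load testing in Python. The staging URL, endpoints and step sequence are hypothetical placeholders; a real exercise would also drive the IVR and call-routing legs with a purpose-built tool.

```python
# Sketch: exercise the WHOLE customer journey under concurrency, instead of
# hitting each component in isolation. All endpoints below are hypothetical.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

BASE_URL = "https://staging.example.gov"  # assumption: a full-stack staging copy

# One simulated customer walks every subsystem in a single flow:
# web tier -> eligibility service -> enrollment back end.
JOURNEY = ["/login", "/plans/search", "/eligibility/check", "/enroll/submit"]

def run_journey(user_id: int) -> float:
    """Complete one end-to-end journey and return its total latency in seconds."""
    start = time.perf_counter()
    for step in JOURNEY:
        with urllib.request.urlopen(BASE_URL + step, timeout=30) as resp:
            resp.read()  # a real test would also validate the response body
    return time.perf_counter() - start

if __name__ == "__main__":
    users = 200  # component tests that pass with 10 users often fail here
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = sorted(pool.map(run_journey, range(users)))
    print(f"p50={latencies[users // 2]:.2f}s  p95={latencies[int(users * 0.95)]:.2f}s")
```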

Empirix and LinkedIn surveyed more than 1,000 executives and managers in a variety of industries. The survey asked how companies:

- tested new customer contact technology before it was implemented

- evaluated the voice quality of customer/service agent calls

- monitored overall contact center performance to maintain post-implementation quality

The results are a study in contradictions. At first glance, pre-deployment testing rates look high, at 80 percent or better, but the numbers are far less impressive than they appear.

More than 80 percent of respondents said their companies do not test contact center technology under real-world conditions before go-live. They do some form of testing, but it is not comprehensive enough to reveal all of the issues that can affect customer service.

Companies do a little better with upgrades to existing systems: 82 percent reported testing them. There is grade inflation in that number, however: 62 percent rely on comparatively inaccurate manual testing methods.

While better than not testing at all, manual testing does not accurately reflect real-world conditions. Manual tests usually occur during off-peak times, which do not accurately predict how systems will work at full capacity. Because manual testing is difficult to repeat, it is usually done only once or twice. That makes it harder to pinpoint problems — and ensure they are resolved — even if they are detected pre-deployment.

Another 20 percent don't test new technology at all; they just "pray that it works" (14 percent) or react to customer complaints (3 percent). The remaining 3 percent test only major upgrades, a policy flawed enough to earn them a place among the non-testers: a small change can erode performance or crash a system just as easily as a major upgrade can. In fact, small upgrades can create performance drags that are harder to pinpoint because, unlike large upgrades, they do not have the IT organization's full attention.
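A cheap guard against that erosion is to gate every change, however minor, behind an automated before-and-after comparison. A minimal sketch, assuming latency samples are saved as JSON lists; the 10 percent tolerance is an arbitrary illustration:

```python
# Sketch: fail the deployment if tail latency drifts past a tolerance,
# so even "minor" upgrades get the same scrutiny as major ones.
import json

def percentile(samples, p):
    s = sorted(samples)
    return s[min(len(s) - 1, int(len(s) * p))]

def check_regression(baseline_file, current_file, tolerance=0.10):
    with open(baseline_file) as f:
        baseline = json.load(f)  # latencies (seconds) from the last known-good run
    with open(current_file) as f:
        current = json.load(f)   # latencies from the build containing the change
    base_p95 = percentile(baseline, 0.95)
    cur_p95 = percentile(current, 0.95)
    drift = (cur_p95 - base_p95) / base_p95
    print(f"p95 baseline={base_p95:.2f}s current={cur_p95:.2f}s drift={drift:+.1%}")
    if drift > tolerance:
        raise SystemExit("Regression gate failed: the small change created a drag.")
```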

Only about 18 percent of respondents said their companies use automated testing for all contact center upgrades. That makes them the second-largest group after the manual testers, but still a small share of the total. These companies use testing software to evaluate new functionality, equipment, applications and system upgrades under realistic traffic conditions, the approach that yields the most accurate results and the fastest understanding of where and why problems occur.
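"Realistic traffic conditions" chiefly means load that arrives the way customers do, in peaks and bursts rather than a flat trickle. A minimal sketch of generating such an arrival schedule; the hourly volumes are invented placeholders, and a production tool would drive real virtual callers from the schedule:

```python
# Sketch: Poisson arrivals shaped to an observed daily traffic curve, so the
# test reproduces the lunchtime spike instead of a quiet off-peak lull.
import random

def arrival_times(rate_per_min: float, minutes: int):
    """Yield Poisson-distributed arrival offsets (seconds) within the window."""
    t, horizon = 0.0, minutes * 60
    while True:
        t += random.expovariate(rate_per_min / 60.0)  # exponential inter-arrival gaps
        if t >= horizon:
            break
        yield t

hourly_rates = [40, 120, 300, 650, 900, 700]  # calls per minute, hour by hour (assumed)
schedule = []
for hour, rate in enumerate(hourly_rates):
    schedule.extend(hour * 3600 + t for t in arrival_times(rate, 60))
print(f"{len(schedule)} virtual arrivals generated across {len(hourly_rates)} hours")
```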

The Spoken Afterthought

HealthCare.gov's problems highlighted shortcomings in web portal testing, but voice applications face similar neglect. Indeed, when the President advised people to apply for healthcare over the phone, many of the call centers set up to field those applications also had trouble handling the spike in caller traffic.

Voice quality can be a significant drag on short- and long-term call center ROI. Contact center agents who must ask customers to repeat themselves because of poor voice connections — or worse, ask customers to hang up and call in again — are less productive than those who can hear customers clearly. In the long term, repetition and multiple calls erode customer satisfaction levels.

A solid majority of professionals who responded to the LinkedIn/Empirix survey (68 percent) reported that their companies never monitor contact center voice quality. Only 14 percent monitor voice quality continuously, while the remaining 17 percent check it periodically, on a daily, weekly or monthly basis.
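Continuous monitoring is less exotic than those numbers suggest: every VoIP call already produces the latency, jitter and packet-loss statistics needed to estimate a Mean Opinion Score (MOS). A minimal sketch using a widely published simplification of the ITU-T G.107 E-model; the alert threshold is illustrative, not Empirix's scoring:

```python
# Sketch: convert per-call RTP statistics into an estimated MOS and alert on
# poor calls. Formula is a common simplification of the ITU-T G.107 E-model.
def estimate_mos(latency_ms: float, jitter_ms: float, loss_pct: float) -> float:
    effective_latency = latency_ms + 2 * jitter_ms + 10.0
    if effective_latency < 160:
        r = 93.2 - effective_latency / 40.0
    else:
        r = 93.2 - (effective_latency - 120.0) / 10.0
    r -= 2.5 * loss_pct                      # packet-loss impairment
    r = max(0.0, min(100.0, r))              # clamp the R-factor
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

# A MOS near 4.4 is toll quality; below roughly 3.6, callers notice.
for call in ({"latency_ms": 80, "jitter_ms": 5, "loss_pct": 0.2},
             {"latency_ms": 250, "jitter_ms": 40, "loss_pct": 3.0}):
    mos = estimate_mos(**call)
    verdict = "OK" if mos >= 3.6 else "POOR: agents will ask callers to repeat"
    print(f"MOS={mos:.2f} {verdict}")
```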

This failure carries heavy risks. Globally, 79 percent of consumers replying to a Customer Experience Foundation survey said they experienced poor voice quality on contact center calls. Almost as many — 68 percent — said they will hang up if they experience poor voice quality. If they are calling about a new product or service, they will likely call a competing company instead.

Between misdirected efforts and testing rates like these, it's no wonder people aren't surprised when a major initiative like online healthcare enrollment goes off the rails, or customers calling a contact center get funneled down a blind alley in the IVR system. Customers who run into obstacles like those are on a fast track to becoming former customers.

Testing and performance monitoring can effectively stem those losses. Businesses that test and monitor their customer service systems are better positioned to maximize ROI by identifying and remediating problems quickly. An end-to-end monitoring solution gives organizations deep visibility into complex customer service technology environments, reducing the time it takes to find the source of a problem and fix it before customers ever notice the glitch.
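In practice that can start as simply as a synthetic monitor: replay the same short customer journey on a schedule and raise an alert the moment it slows or fails. A minimal sketch, with hypothetical URLs and service-level thresholds:

```python
# Sketch: scheduled synthetic checks against key customer-facing endpoints.
# URLs and SLA thresholds below are illustrative placeholders.
import time
import urllib.request

CHECKS = [
    ("portal_home", "https://portal.example.gov/", 2.0),       # name, URL, SLA (s)
    ("plan_search", "https://portal.example.gov/plans", 4.0),
]

def alert(check: str, seconds: float):
    # Placeholder: in production this would page on-call staff or open a ticket.
    print(f"ALERT: {check} unhealthy after {seconds:.1f}s")

def run_checks():
    for name, url, sla in CHECKS:
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                ok = resp.status == 200
        except OSError:  # covers urllib.error.URLError, timeouts, DNS failures
            ok = False
        elapsed = time.perf_counter() - start
        if not ok or elapsed > sla:
            alert(name, elapsed)

if __name__ == "__main__":
    while True:
        run_checks()
        time.sleep(300)  # re-run the journey every five minutes, around the clock
```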

ABOUT Matthew Ainsworth

Matthew Ainsworth is Senior Vice President, Americas and Japan at Empirix. He has 15 years of experience in contact centers and unified communications solutions.

