Richard Gagnon once said, "An untested plan is only a strategy." If you have a way to test whether your system will work as expected but don't actually use it, you're taking an unnecessary risk. We've seen customers go live before a busy period like Black Friday or open enrollment only to hit a problem that was entirely preventable; in one case, a carrier change left them with only half of the active lines they were expecting. The result: frustrated customers, lost revenue, and quite possibly someone's job on the line.
One of the most common justifications for not testing is that customers are already testing the system. Say a contact center has 500 agents who are all periodically busy; sometimes they're even all on calls at once. Doesn't that prove the system will hold up during busy periods? Not necessarily. What's the actual quality of the calls reaching those agents? Are people talking to agents because they got stuck in the self-service IVR and pressed 0 repeatedly? Is a large portion of customers hearing busy signals, ring-no-answers, or calls with no audio at all? You have no way to tell, because those calls never reach agents or internal network monitoring tools. The only way to truly understand the customer experience under peak load, such as Black Friday traffic, is to test with real phone calls.
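As a sketch of what outside-in testing surfaces, consider tallying the disposition of every test call placed. The disposition labels and the sample data below are hypothetical stand-ins; a real test platform would score each call it actually dials.

```python
from collections import Counter

# Hypothetical dispositions scored by outside-in test calls. Failures like
# "busy" or "connected_no_audio" never show up in agent-side metrics,
# because those calls never reach an agent.
dispositions = [
    "connected_clear_audio", "busy", "ring_no_answer",
    "connected_no_audio", "busy", "connected_clear_audio",
]

counts = Counter(dispositions)
total = len(dispositions)
for outcome, n in counts.most_common():
    print(f"{outcome}: {n}/{total} ({n / total:.0%})")
```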
Read our Black Friday case study here.
Most systems will work as expected when a handful of people are calling, and that's typically how teams check that everything works end to end. If the entire QA department places inbound calls, they'll know how the system behaves from their perspective; however, their calls may not actually traverse the public telephone network. Testing with an impartial third party gives you not only confidence that the system works as expected, but also the documentation you'll need if something does go wrong. If the carrier hasn't activated all the lines, IVR licenses aren't configured properly, or the SBCs can't handle the intended call arrival rate, you'll have empirical data from testing before the peak period. That hard data might show, for instance, that the system sustained 500 concurrent calls at 40 calls per second, something you could never verify with only your QA team calling in.
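To make those numbers concrete, here is a minimal sketch of how a load profile like that (calls arriving at a fixed rate, capped at a maximum concurrency) can be generated. Everything here is illustrative: `place_call`, its simulated durations, and the phone number are hypothetical stand-ins for a real harness that dials out through the PSTN and scores each call.

```python
import asyncio
import random
import time

async def place_call(number: str) -> str:
    """Hypothetical stand-in for one outside-in test call; a real
    harness dials through the PSTN and scores the audio."""
    await asyncio.sleep(random.uniform(5, 20))  # simulated call duration
    return random.choice(["connected", "busy", "ring_no_answer"])

async def run_load_test(number: str, cps: int,
                        max_concurrent: int, duration_s: int) -> list[str]:
    """Launch `cps` calls every second for `duration_s` seconds,
    never exceeding `max_concurrent` calls in flight at once."""
    gate = asyncio.Semaphore(max_concurrent)
    results: list[str] = []
    tasks = []

    async def one_call() -> None:
        async with gate:
            results.append(await place_call(number))

    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        for _ in range(cps):            # one second's worth of arrivals
            tasks.append(asyncio.create_task(one_call()))
        await asyncio.sleep(1)          # pace the arrival rate

    await asyncio.gather(*tasks)
    return results

# 40 calls per second, capped at 500 concurrent, for a 30-minute window:
# results = asyncio.run(run_load_test("+18005550100", 40, 500, 1800))
```

The semaphore is the key design choice: it lets the arrival rate and the concurrency cap be tuned independently, mirroring the two numbers a load test actually has to prove.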
If the system was tested before Black Friday last year, you might assume it will work fine this year as well. But are you willing to bet on it? Consider all the changes your team may have dismissed as minor or insignificant: a small routing change can affect every call going forward, and you might not notice until peak traffic arrives. If your team can say without a doubt that the system is exactly the same today as it was twelve months ago (absolutely no patches, upgrades, or minor changes), you could roll the dice. Keep in mind, though, that a test prior to Black Friday doesn't have to be a long, drawn-out process. It can be as short as 30 minutes, as long as you use peak-level call volume and real-world traffic to confirm that everything works as expected.
You may think that nothing in the system has changed, so there's no need to test. But wouldn't it make more sense to be completely certain that everything still works as intended? There's no shortage of horror stories about people who rolled the dice only to find that something had, in fact, changed. We work with a marketing company that was in the middle of a large campaign when they noticed something had gone awry: they had 800 agents ready to take phone calls, but the response was far below what they anticipated. We ran a test that dialed all 800 lines and found that only 400 of them were active. The carrier had provisioned only half of the lines they paid for, and as a result, half of their potential business had been squandered.
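A sketch of the kind of sweep that catches this: dial every provisioned line and tally how many answer. The `probe_line` function below is a simulated placeholder (a coin flip, for illustration); a real probe places a PSTN call and listens for ringback versus a busy signal, reorder tone, or dead air. The number range is made up.

```python
import asyncio
import random

async def probe_line(number: str) -> bool:
    """Hypothetical probe: dial `number` and report whether it rings
    through. Simulated here with a coin flip for illustration."""
    await asyncio.sleep(0.01)
    return random.random() < 0.5

async def sweep(numbers: list[str], batch_size: int = 50) -> dict[str, bool]:
    """Probe every line in the trunk group, one batch at a time."""
    status: dict[str, bool] = {}
    for i in range(0, len(numbers), batch_size):
        batch = numbers[i:i + batch_size]
        answered = await asyncio.gather(*(probe_line(n) for n in batch))
        status.update(zip(batch, answered))
    return status

lines = [f"+1800555{n:04d}" for n in range(800)]  # the 800 provisioned lines
status = asyncio.run(sweep(lines))
print(f"{sum(status.values())} of {len(lines)} lines answered")
```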