Contact centers are often made from best-in-class components. Many contact center implementations have elements that come from as many as 18 to 20 different vendors. Each one of those discrete elements—whether it's the CTI routing and hunting package, the speech recognition technology, or the interactive voice system—is built to perform at peak levels by its vendor. In a sense, the process of building a contact center is similar to building your own car, piece by piece.
Different software application developers also have areas of specialty. They bring their expertise into building a full contact center, which is more than just voice in today's world. The modern contact center transcends the voice channel to provide a true omni-channel experience. That means there are elements in the contact center tied to web and social media interaction. Of course, faxing and email are also still important elements of interaction in a lot of industries.
Stress testing is about making sure all those different components have been properly integrated within the contact center technology environment to deliver the omni-channel experience today's customers have come to expect. It's a reliable method of confirming the whole system will hold together when it's running at its designed capacity, both in terms of the velocity of interactions across all the different channels and the number of concurrent interactions handled by each component (e.g., the self-service IVR, the speech recognition system, or the agent desktop connections).
Prior to the stress test, all of your various system inputs should be gathered and funneled into the contact center so that every component in the system is effectively tested and exercised. When it comes time to perform a stress test, you have to send loads of traffic into the system to represent real customer interactions. It shouldn't just be synthetic transactions created inside the system by a bit pump; it needs to be actual outside-in traffic that accesses and exercises all the elements in the public telephone network or internet, in addition to what goes through the internal network. Whether they're voice, web, or WebRTC interactions, they should be an accurate representation of real-world usage, so you can have confidence your system will operate as intended in a real-world, high-load situation.
We're able to create outside-in traffic across multiple channels to act just like real customers trying to interact with the system. We create test case scripts that follow step-by-step instructions: after every input, the testing system stops and waits for the target system to respond. That way, we can measure performance at every stage of the interaction.
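The scripted, wait-and-measure approach can be sketched in a few lines of Python. This is a minimal illustration, not a real load tool: the step names, the `TEST_SCRIPT` structure, and the `simulated_target` stand-in are all hypothetical, and a real harness would drive actual telephone and web traffic rather than a local function.

```python
import time

# Hypothetical scripted test case: each step sends one input and
# waits for the target system's response before moving on.
TEST_SCRIPT = [
    {"step": "dial", "input": "+1-800-555-0100", "expect": "welcome"},
    {"step": "ivr_menu", "input": "2", "expect": "billing"},
    {"step": "speech", "input": "account balance", "expect": "balance"},
]

def simulated_target(user_input):
    """Stand-in for the real contact center; replies after a short delay."""
    responses = {
        "+1-800-555-0100": "welcome to support",
        "2": "billing department",
        "account balance": "your balance is ...",
    }
    time.sleep(0.01)  # pretend network / IVR latency
    return responses.get(user_input, "error")

def run_script(script, target):
    """Execute the script step by step, timing each response."""
    results = []
    for step in script:
        start = time.perf_counter()
        reply = target(step["input"])          # send input, then wait
        latency = time.perf_counter() - start  # per-stage measurement
        results.append({
            "step": step["step"],
            "latency_s": round(latency, 3),
            "passed": step["expect"] in reply,
        })
    return results

for result in run_script(TEST_SCRIPT, simulated_target):
    print(result)
```

Because every step records its own latency and pass/fail status, a slowdown in, say, the speech recognition stage shows up in that stage's numbers rather than being lost in an end-to-end average.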
With outside-in interaction, we're sampling and reporting on the experience delivered by all that contact center technology sitting between your brand and the customer. We're effectively making sure the system can perform the way it's supposed to when it's running at full speed.
Our Prognosis toolkit can be deployed internally as an overlay on a multivendor contact center environment to collect data from the various vendor APIs. It gathers all relevant data and puts it on a single pane. Prognosis provides a bird's-eye perspective of the technology's reaction to the velocity of customer interactions.
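The overlay pattern described here, polling each vendor component and merging the results into one view, can be sketched as follows. Prognosis's actual integrations aren't shown in this article, so the collector functions and metric names below are placeholders invented for illustration, not real vendor APIs.

```python
# Hypothetical "single pane" overlay: poll each component's metrics
# source and merge everything into one namespaced snapshot.
def collect_ivr_metrics():
    return {"active_sessions": 420, "avg_prompt_latency_ms": 180}

def collect_routing_metrics():
    return {"queued_calls": 37, "longest_wait_s": 95}

def collect_agent_desktop_metrics():
    return {"agents_logged_in": 212, "screen_pop_errors": 1}

COLLECTORS = {
    "ivr": collect_ivr_metrics,
    "routing": collect_routing_metrics,
    "agent_desktop": collect_agent_desktop_metrics,
}

def single_pane_snapshot(collectors):
    """Gather every component's metrics into one combined view."""
    snapshot = {}
    for component, collect in collectors.items():
        for metric, value in collect().items():
            # Namespace each metric by its component, e.g. "ivr.active_sessions"
            snapshot[f"{component}.{metric}"] = value
    return snapshot

print(single_pane_snapshot(COLLECTORS))
```

Namespacing the metrics by component is what makes the bird's-eye view useful: when interaction velocity climbs, you can see which component's numbers move first.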
What's really cool about the integration between the inside-out Prognosis toolkit and the outside-in testing perspective is that it gives you the ability to couple a view of the customer service experience with the internal metrics and analytics that Prognosis collects. Everybody knows how important the customer experience is in today's world. When you combine that with information from all the different components and network elements within the contact center complex itself, it's the equivalent of carefully tuning your car.
The data collected by Prognosis makes customer issues straightforward to identify. By comparing the tags on phone calls or browser interactions, we can significantly accelerate root cause analysis. We're not just waving a red flag saying something went wrong; we're also providing deep-dive information that shows exactly which components were involved when the issue developed.
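The tag-comparison idea can be illustrated with a small sketch. The data shapes here are assumptions for illustration: each outside-in test interaction carries a tag, the monitoring side records component events under the same tag, and joining the two narrows a failed interaction down to the components that misbehaved.

```python
# Hypothetical tagged data: outside-in test results on one side,
# inside-out component events on the other, joined by a shared tag.
test_calls = [
    {"tag": "call-001", "channel": "voice", "result": "pass"},
    {"tag": "call-002", "channel": "voice", "result": "fail"},
]

component_events = [
    {"tag": "call-001", "component": "ivr", "status": "ok"},
    {"tag": "call-002", "component": "ivr", "status": "ok"},
    {"tag": "call-002", "component": "speech_rec", "status": "timeout"},
]

def root_cause_candidates(calls, events):
    """For each failed call, list the components that reported problems."""
    failed_tags = {c["tag"] for c in calls if c["result"] == "fail"}
    return [e for e in events
            if e["tag"] in failed_tags and e["status"] != "ok"]

print(root_cause_candidates(test_calls, component_events))
```

In this toy example, one call failed and the join points straight at the speech recognition component's timeout, which is the "deep-dive" shortcut the tagging enables.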