I am just finishing up a study that examines our level of system preparedness for a new line of business we will start to service later this year. I created a category that captured a call every time an agent used a verbal cue showing they were waiting on their system to finish processing. Next, I compared those hits against a layout of our traffic patterns for the same period. Together, these two data sources show how slow our systems would get under different levels of traffic (e.g., 200 calls per hour, 1,000 calls per hour, 2,000 per hour, and so on). That fulfilled the first requirement of this project: to perform a systems stress test. So, having shown a connection, the question becomes "How much does this cost us?" To establish this, I pulled a random group of 100 calls that had video and showed up in the system-slowdown category, and timed the transactions. I compared this group to a second group that did not have any visible system issues and contrasted the two samples; the difference establishes the negative impact that system issues cause. Finally, I will forward the results of this study to our IT and Resource Planning divisions so they can use it to prepare for this influx of new customers.
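For anyone curious how that two-group comparison might look in practice, here is a minimal sketch using Python's standard library. The handle times are entirely hypothetical, illustrative numbers (not figures from my study), and the Welch's t statistic is just one reasonable way to check whether the gap between the two samples is larger than chance would explain.

```python
import statistics

# Hypothetical handle times in seconds for two samples of calls.
# These are made-up, illustrative numbers -- not data from the actual study.
slowdown_calls = [412, 389, 455, 430, 398, 441, 467, 420, 435, 409]
control_calls  = [310, 295, 330, 305, 322, 288, 315, 301, 327, 299]

def mean_diff_and_welch_t(a, b):
    """Return (difference of means, Welch's t statistic) for two samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    se = (va / len(a) + vb / len(b)) ** 0.5  # std. error of the difference
    return ma - mb, (ma - mb) / se

diff, t_stat = mean_diff_and_welch_t(slowdown_calls, control_calls)
print(f"Slowdown calls run {diff:.1f} s longer on average (t = {t_stat:.2f})")
```

The mean difference is what feeds a cost estimate (extra seconds per call times call volume times cost per second of handle time); the t statistic is a sanity check that 100-call samples are big enough for the gap to be real.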
One concern I have about this project: some agents use language indicating the system is processing as a normal part of their lexicon on calls, even when nothing is actually slow. I am banking on that group being evenly represented in my sample, so their calls should not distort the time per transaction when there is no real system slowdown. What do you all think? Is this adequate? Or, what are you working on?
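To make that concern concrete, here is a rough sanity check with hypothetical numbers (again, not study data): if habitual "the system is processing" talkers get flagged into the slowdown category on calls of normal length, they act as false positives and dilute the measured gap, understating the true cost rather than inflating it.

```python
import statistics

# Hypothetical, illustrative handle times in seconds.
true_slow = [420] * 90   # calls with genuine slowdowns
false_pos = [310] * 10   # habitual-language calls, normal length, wrongly flagged
normal    = [310] * 100  # control group with no visible system issues

flagged = true_slow + false_pos  # what the slowdown category actually captures

measured_gap = statistics.mean(flagged) - statistics.mean(normal)
true_gap     = statistics.mean(true_slow) - statistics.mean(normal)

# The false positives pull the flagged group's mean toward normal,
# so the measured gap comes out smaller than the true gap.
print(measured_gap, true_gap)
```

If this sketch is right, the bias runs in the conservative direction, which is arguably a safe failure mode for a cost estimate you are handing to IT and Resource Planning.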