Video Transcript
Here's a tale of two studies - well, case studies - about the success and failure of the same kind of call center project at two different companies.
One company, which handled nearly 200 million calls per year, had hired an outside contractor to audit the performance of their agents on criteria such as whether they built rapport with the customer, actively listened, or followed the right process on the call. The company asked me to audit how that contractor evaluated their phone reps' performance. What I found is that the contractor was unintentionally biasing their data by not randomly selecting which calls were reviewed or who audited those calls.
Using fundamental statistical concepts, I proved the contractor's audits and conclusions were unreliable. I tried to work with the contractor to improve their process, but they didn't want to change. So we terminated their contract, saving over $15M per year, and developed a cheaper, more reliable way to audit agent performance internally.
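To make the sampling issue concrete, here's a minimal sketch of the kind of selection bias described above. The numbers, pass rates, and "longest calls first" selection rule are all hypothetical illustrations, not anything from either engagement; the point is simply that a non-random choice of which calls get audited can pull the measured score away from the true one.

```python
import random
import statistics

random.seed(42)

def simulate_agent_calls(true_pass_rate, n_calls=1000):
    """Return a list of (call_length_minutes, passed_audit) tuples."""
    calls = []
    for _ in range(n_calls):
        length = random.uniform(2, 20)
        # Hypothetical assumption: longer calls are slightly less likely
        # to pass the audit rubric.
        passed = random.random() < true_pass_rate - 0.01 * length
        calls.append((length, passed))
    return calls

calls = simulate_agent_calls(true_pass_rate=0.95)

# The "true" pass rate across all calls.
true_rate = statistics.mean(p for _, p in calls)

# Unbiased audit: sample calls uniformly at random.
random_sample = random.sample(calls, 50)
random_rate = statistics.mean(p for _, p in random_sample)

# Biased audit: the reviewer picks the 50 longest calls, which are
# systematically harder - mirroring non-random call selection.
biased_sample = sorted(calls, key=lambda c: c[0], reverse=True)[:50]
biased_rate = statistics.mean(p for _, p in biased_sample)

print(f"true pass rate:          {true_rate:.2%}")
print(f"random-sample estimate:  {random_rate:.2%}")
print(f"biased-sample estimate:  {biased_rate:.2%}")
```

Run it and the biased estimate lands well below both the true rate and the random-sample estimate, which is the essence of why non-random selection makes the audit conclusions untrustworthy.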
That's the good case study. The same situation arose at another company, one that handled only a fraction of the first company's call volume. I quickly discovered that their audit process was also biased and delivered unreliable results.
I shared my findings with the executives and recommended they stop the process and terminate the contract - especially since I had proven their conclusions and results were untrustworthy and unactionable. But the key executive involved didn't want to cancel the contract because she had no alternative audit process to replace it.
My argument was that she would be better off saving the money by stopping an obviously broken, unreliable process, and then we could work on building a new, trusted process to replace it. But she was afraid to let go of the broken one. To me, it's another case of being "dead right, but still dead".