You've probably all heard the phrase "what gets measured gets done," and certainly organizations are paying increasing lip service to the concept of measuring performance. This post is not an argument against measuring. It's a lesson about the importance of measuring the right things.
A number of years ago, I was called in to help a call center improve its performance. This call center was a 1-800 "help" provider: you called them when a particular appliance stopped working and you needed immediate help or troubleshooting (from simple steps to fix the problem, to where to take it for repair, to what your warranty did and did not cover). Thus, when customers called this center, it was almost always because something was broken, often with catastrophic consequences.
The call center management team specifically asked me to find ways to reduce the amount of "hold time" callers had to wait before getting an associate on the line, and also to reduce the average length of calls (the theory being that shorter calls would also mean less wait time). And, as a "P.S.," the management team asked me to take a look at a call center associate named Martha. Martha, they said, was a really sweet person, but if she didn't turn things around, they would have to fire her. Specifically, they said she was too informal with callers (often not referring to them as "Mister" or "Ms."), and her average call length was longer than that of the majority of other associates in the call center. It's worth noting that the vast majority of call centers do measure things like average wait time, call length, and whether or not associates follow the script; that's pretty standard for the industry.
In doing a front-end analysis with this client and gathering more data, I found some interesting things. Typically, customers had to call almost three times in order to get a problem fixed. This happened because the call center associate may not have asked the right questions, or may have been given inaccurate information by the caller (thus necessitating call-backs in order to ultimately fix the problem). But Martha, the "problem employee," required little more than an average of one call to fix the problem: 1.2 calls by customers to fix the problem, versus a call center average of 2.9 calls to produce a solution that worked. And here was another irony: although Martha's calls tended to last longer on average, because she was fixing the problem with fewer call-backs by the customer, she ended up handling more complaints and individual problems than anyone else in the call center. Finally, when I asked customers which associates they had dealt with and to rank them on the basis of how much respect they felt they received during their call, Martha, even though she often neglected to use "Mister" or "Ms." during her conversations, was perceived as the most respectful associate and the best listener.
The center was measuring the wrong things. They were prepared to fire Martha, and she was actually their best employee. Martha was resolving more problems in a typical day, requiring fewer calls from customers to resolve each problem, and was perceived as the most respectful, even though she violated the call center script for conversations.
What can we learn from this case? First, just because you can measure something doesn't mean you should, or that it's worth measuring. Second, it's critical to be clear about why you're measuring something and to what purpose. Otherwise, the metrics we use end up hiding the very performance we seek to measure.