You’ve probably all heard the phrase “what gets measured gets done,” and organizations are certainly paying increasing lip service to the idea of measuring performance. This post is not an argument against measuring. It’s a lesson about the importance of measuring the right things.
A number of years ago, I was called in to help a call center improve its performance. This call center was a 1-800 “help” provider—you called them when a particular appliance stopped working and you needed immediate help or troubleshooting (from simple steps to fix the problem, to where to take it to get repaired, to what your warranty did and did not cover). Thus, when customers called this center, it was almost always because something was broken—often with catastrophic consequences.
The call center management team specifically asked me to find ways to reduce the amount of “hold time” customers had to wait before getting an associate to help them on the line, and also to reduce the average length of the calls (the theory being that shorter calls would also mean less wait time). And, as a “P.S.,” the management team asked me to also take a look at a call center associate named Martha. Martha, they said, was a really sweet person, but if she didn’t turn things around, they would have to fire her. Specifically, they said she was too informal with callers (oftentimes not referring to them as “Mister” or “Ms”), and her average call length was longer than that of the majority of other associates in the call center. Now it’s worth noting that the vast majority of call centers do measure things like average wait time, call length, and whether or not associates follow the script—that’s pretty standard for the industry.
In doing a front-end analysis with this client and gathering more data, I found some interesting things. Typically, customers had to call almost three times in order to get a problem fixed. This happened because the call center associate may not have asked the right questions or had been given inaccurate information by the caller (thus necessitating call-backs in order to ultimately fix the problem). But Martha—the “problem employee”—required little more than an average of one call to fix the problem (1.2 calls by customers to “fix the problem” versus a call center average of 2.9 calls to produce a solution that worked). And here was another irony—although Martha’s calls tended to last longer on average, because she was fixing problems with fewer call-backs by the customer, she ended up handling more complaints and individual problems than anyone else in the call center. Finally, when I asked customers which associates they had dealt with and to rank them by how much respect they felt they received during their call, Martha—even though she often neglected to use “Mister” or “Ms” during her conversations—ended up as the associate perceived as being the most respectful and the best listener.
The center was measuring the wrong things. They were prepared to fire Martha—and she was actually their best employee. Martha was resolving more problems in a typical day, requiring fewer calls from customers to resolve each problem, and was perceived as the most respectful associate (even though she violated the call center’s script for conversations).
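To see how the headline metric can point in exactly the wrong direction, here is a minimal sketch. The calls-per-resolution figures (1.2 for Martha, 2.9 for the center average) come from the case above; the per-call durations are invented purely for illustration, since the case doesn’t report them.

```python
# Hypothetical comparison: total agent time spent per *resolved* problem.
# The 1.2 and 2.9 calls-per-resolution figures are from the case study;
# the average call durations below are made up for illustration only.

def minutes_per_resolution(avg_call_minutes: float, calls_per_resolution: float) -> float:
    """Agent minutes consumed to fully resolve one customer problem."""
    return avg_call_minutes * calls_per_resolution

# Martha: longer calls (assume 8 min) but fewer call-backs.
martha = minutes_per_resolution(avg_call_minutes=8.0, calls_per_resolution=1.2)
# Center average: shorter calls (assume 6 min) but nearly three calls per fix.
center = minutes_per_resolution(avg_call_minutes=6.0, calls_per_resolution=2.9)

print(f"Martha: {martha:.1f} min per resolved problem")   # 9.6
print(f"Center: {center:.1f} min per resolved problem")   # 17.4
```

Under these assumptions, Martha’s longer calls still consume far less total agent time per solved problem, which is why measuring call length alone inverted the ranking of the center’s best performer.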
What can we learn from this case? First, just because you can measure something doesn’t mean you should—or that it’s worth measuring. Second, it’s critical to be clear about why you’re measuring something, and to what purpose. Otherwise, the metrics we use end up hiding the very performance we seek to measure.