Okay, I've got a pet peeve–something that really pushes my buttons. Data from a host of sources has continually shown that organizations and executives are placing more emphasis on "performance." Leave aside the reality that many of them (organizations and execs) don't really know what performance is in this case (below the organization-level results of profits or sales). But almost everyone in the HR field knows there is more emphasis on "performance."
So part of what we see is people (internal staff as well as external consultants) tacking the word "performance" onto whatever they do. We see "performance-based training" or "performance-enhancing facilitation" or "performance-driven HR" or some other variation. To me, this reveals a fundamental misunderstanding of the performance improvement field.
Performance consultants believe strongly in evaluation–in checking to see if our work has made a difference. And to evaluate means to collect data.
Lots of people collect data. The problem is that just because you can collect data doesn't mean it's worth collecting. I constantly run into clients who ask me to gather particular information, or who are quite proud of the data they already collect. A classic case is the training data that many learning and development departments collect annually. For instance, it's not uncommon for training shops to gather and then aggregate information such as the average score (on a 1-5 Likert scale) across all training classes, or the total number of employees who attended training. Think about those two measures for just a second. If the average evaluation score for all classes goes up or down, does that prove anything? First, whether people like a particular course more or less doesn't tell you whether it was an effective training program. Second, a change in the overall score isn't a meaningful measure of the department's performance: scores could have gone up simply because people are taking different classes (ones that are more enjoyable to sit through). Likewise, lots of corporate climate survey questions may seem important on their face, but under deeper examination they really don't tell us much.
When clients appear to be on one of these data goose-chases, collecting all sorts of data that I'm not sure is valuable, I typically ask a simple question: "What will you do with the data?" That question stops a lot of clients in their tracks. They might have reasons why it's useful to gather the data ("it tells us if the training department is doing a better job" or "we want to know if employees like our office space"), but those don't explain what they'll DO with the information once they get it. Would they realistically award bonuses to staff (or conversely–fire or demote people) because average Likert scores changed? If employees indicated that they didn't like the office space, does that mean they'd buy new furniture?
Asking clients “what will you do with that data?” is a really good reality test for some types of questions. It forces clients to explain what they intend to do with the information once they get it. And what you’ll often find is that clients say things like “it will tell us if the training courses are valued by the employees”–in which case maybe we should ask that question instead (or look at other indicators like–how often employees blow off classes they’e signed up for) Or if they want to know if employees like the office space as a clue to determine if they’ll leave, maybe we should just measure employee retention and then do exit interviews to better assess why employees leave.