Corporate Universities – A Case Against

There are lots of versions of corporate universities. I actually worked with one client (Amtrak) whose senior leadership decided they needed to have an Amtrak University. The senior training leader–when it was clear she couldn’t fight this decision–simply relabeled the existing training and development resources a “University” and set up a “campus” at an existing training site in Wilmington, Delaware. Nothing else changed (in terms of content offerings, structure, or focus), but hey… they were a University now.

I’m sure not all organizations that adopt a “university model” follow that approach. But I’m skeptical of the value of adopting a Corporate University approach to learning, development, and performance within organizations. And I’m not the only one who holds this belief. The esteemed Ruth Colvin Clark has raised similar concerns about the move to corporate universities.

I think there is a tendency to assume that a “university” has more prestige and functions at a higher level than a training department–and that this is therefore a good thing. We (meaning people in general) tend to view the term “university” positively and assume the work is more rigorous (the converse of referring to your organization’s L&D shop as a “kindergarten”).

But this focus on branding an organization’s learning and development shop as a “university” seems misguided to me. First, it places the emphasis on education rather than performance. The primary reason for learning and development in most organizations should be to improve performance. That’s why most training evaluation measures (looking at reactions to the training, or even whether learning took place) don’t seem very relevant to me. I can enjoy the training, or even learn a lot, and still fail to get better at my job. When the focus is on learning rather than performance, it’s too easy for learning/training professionals to be unaccountable for results (“don’t blame me that results didn’t get better–the participants enjoyed the class!”).

Additionally, I’m not sure universities provide a great model to guide training and development functions. While there are plenty of great examples of innovative learning approaches within higher education, most educators would say that the majority of universities still operate with very traditional models and approaches to teaching and to the organization of knowledge.

Plus, many organizations have treated the creation of a corporate university as an exercise in centralizing the learning function (to create a “campus”). The irony of this approach is that one of the better examples of innovation at many schools of higher learning has been the decentralization of learning–moving it out into the field, off campus, away from a central, visible school.

I would argue that many organizations that have adopted a university model have done so either to “keep up with the Joneses” (i.e., seeing it as a trend they need to follow) or as a way to enhance the prestige of the training department. I think a far better way to enhance the prestige of L&D is to demonstrate a strong track record of focusing on, and effectively improving, performance.

Data and Information

Performance consultants believe strongly in evaluation–in checking to see if our work has made a difference.  And to evaluate means to collect data.

Lots of people collect data. The problem is that just because you can collect data doesn’t mean it’s worth collecting. I run into clients all the time who ask me to gather particular information or who are quite proud of the data they already collect. A classic case is some of the training data that many learning and development departments collect annually. For instance, it’s not uncommon for training shops to gather and then aggregate information such as the average score (on a 1-5 Likert scale) across all training classes, or the total number of employees who attended training. Think about those two measures for a second. If the average evaluation score across all classes goes up or down, does that prove anything? First, if people like (or dislike) a particular course, that doesn’t mean it was an effective training program. Second, if the overall score goes up or down, that’s not a meaningful measure of the department’s performance: scores could have gone up simply because people are taking different classes (ones that are more enjoyable to take). Likewise, lots of corporate climate survey questions may seem important on their face but, under deeper examination, really don’t tell us much.

When clients appear to be on one of these data goose chases–collecting all sorts of data that I’m not sure is that valuable–I ask them a simple question: “What will you do with the data?” That question stops a lot of clients in their tracks. They might have reasons why it’s useful to gather the data (“it tells us if the training department is doing a better job” or “we want to know if employees like our office space”), but those don’t explain what they’ll DO with the information once they get it. Would they realistically award bonuses to staff (or conversely, fire or demote people) because average Likert scores changed? If employees indicated that they didn’t like the office space, does that mean they’d buy new furniture?

Asking clients “what will you do with that data?” is a really good reality test for some types of questions. It forces clients to explain what they intend to do with the information once they get it. And what you’ll often find is that clients say things like “it will tell us whether employees value the training courses”–in which case maybe we should ask that question directly instead (or look at other indicators, like how often employees blow off classes they’ve signed up for). Or if they want to know whether employees like the office space as a clue to whether they’ll leave, maybe we should just measure employee retention and do exit interviews to better assess why employees leave.