There are lots of versions of corporate universities. I actually worked with one client (Amtrak) whose senior leadership decided they needed to have an Amtrak University. The senior training leader–when it was clear she couldn’t fight this decision–simply relabeled the existing training and development resources a “University” and set up a “campus” at an existing training site in Wilmington, Delaware. Nothing else changed (in terms of content offerings, structure, focus), but hey…they were a University now.
I’m sure not all organizations that adopt a “university model” follow that approach. But I’m skeptical of the value of adopting a Corporate University approach to learning, development and performance within organizations. And I’m not the only one who holds this belief. The esteemed Ruth Colvin Clark has noted some similar issues around the move to corporate universities.
I think there is a tendency to assume that a “university” has more prestige and functions at a higher level than a training department–and that this is therefore a good thing. We (meaning people in general) often view the term “university” positively and assume that the work is more rigorous (rather than the converse of referring to your organization’s L&D shop as a “kindergarten”).
But this focus on making or branding an organization’s learning and development shop as a “university” seems to me to be misguided. First, it places the emphasis on education rather than performance. The primary reason for learning and development in most organizations should be to improve performance. That’s why most training evaluation measures (looking at reaction to the training or even whether learning took place) don’t seem very relevant to me. I can enjoy the training or even learn a lot yet fail to get better at my job. When the focus is on learning rather than performance, it’s too easy for learning/training professionals to be unaccountable for results (“don’t blame me that results didn’t get better–the participants enjoyed the class!”).
Additionally, I’m not sure universities provide a great model to help guide training and development functions. While there are plenty of great examples of innovative learning approaches within higher education, most educators would say that the majority of universities still operate with very traditional models and approaches to teaching and the organization of knowledge.
Plus, a number of organizations have treated the creation of a corporate university as an exercise in centralizing the learning function (to create a “campus”). The irony of this approach is that one of the better examples of innovation at many institutions of higher learning has been the decentralization of learning–moving out to the field, off the campus, away from a central, visible school.
I would argue that many organizations that have adopted a university model have done so either to “keep up with the Joneses” (i.e., seeing it as a trend they need to follow) or as a way to enhance the prestige of the training department. I think a far better way to enhance the prestige of L&D is to demonstrate a strong track record of focusing on and effectively building performance.
Customers like to kvetch, so it’s easy to complain about missing the “good old days.” But a lot of the time the old days weren’t so good: no vaccine for polio, one in two children dying before the age of 10, perhaps a world war going on with millions dying, being born lower class with no chance of going to college, or living in a time when there was no such thing as an iPod. Living in the past wasn’t always better.
But lots of people (me being one) feel that overall service performance is getting worse. That’s not just generational narcissism or curmudgeonly attitudes that come from an aging group of baby boomers. I do a lot of client work around service issues and customer experience, and that’s my take. And Bloomberg and J.D. Power collect data on overall service, and that’s their take too–overall service performance is getting worse. Oh, there are exceptions–firms that continue to raise the bar. But overall, most firms seem to be doing a worse job serving customers and creating distinctive experiences that provide a competitive edge. How is that so when so many firms pay lip service to–and actually spend a lot of bucks on–supposedly improving service?
I’d argue there are a couple of factors:
1. The economy has certainly had some impact on this issue. As firms have laid off people, some work simply is not going to get done (or won’t be done with the same degree of detail or consistency). When you compete on price (where customers have no loyalty), service standards tend to be evaluated solely on the basis of cost (i.e., “Is there a cheaper way to do this? What can we stop doing?”). But the economy isn’t the sole culprit, because data on dropping service levels showed up in many sectors prior to the global recession.
2. Too many firms don’t evaluate service from the customer’s perspective. We hear hoary exhortations like “the customer is always right” or “under promise and over deliver,” which are actually bad service mantras to live by. Firms that don’t provide service guarantees usually decline to do so on the belief that customers would rip them off (when data consistently shows this not to be the case). Too many firms define good customer service on the basis of a set of associate behaviors (smile, be friendly, etc.) that are nice but usually don’t matter if a host of other service issues are present before the company associate ever comes into contact with the customer. Issues like the customer’s expectations (realistic or off-base) and the company’s reputation (deserved or unfair) have far more impact than smiling, being friendly, listening well and being prompt (or other behavioral tactics). Related to this, too many firms view good customer service as a set of employee behaviors–not a performance issue. That’s unfortunate, because when we make it all about behavior, we set up a series of targets that are moving, nebulous and difficult for employees to meet (so we breed cynicism and ultimately failure). The outcomes we want from good service are not friendly staff or smiling desk clerks. We want an outcome of customers who feel welcomed and respected–it’s about the customer’s perspective, not the employee’s behavior. As more firms attempt to improve service, there seems (at least to me) to be more focus on behavior, which of course runs counter to how performance works (which is starting with outcomes and working our way back).
3. Absence of standards is a huge factor in service failure. When you define good customer service as a set of behaviors (like a “friendly smile”), it becomes difficult (though not impossible) to measure performance objectively. Check with any five-star hotel property around the world and you’ll find they have hundreds or thousands of performance standards. Some of them are behavioral or appearance-based. But many involve specific outputs (what a clean sink is supposed to look like) that allow for consistent, objective, measurable data that can be used to track performance and assess progress. Show me a business with consistently good service and I’ll show you one with explicit standards to measure service against. For too many firms, identifying, codifying and then measuring standards is just too much work. So they just tell employees to go out and “wow” customers.
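To make the point concrete, here is a minimal sketch (not from any actual hotel or client) of what codifying standards as objective pass/fail checks might look like. The standard names, thresholds and inspection values are all hypothetical:

```python
# Minimal sketch: explicit service standards expressed as objective,
# pass/fail checks. Every name and threshold below is a made-up example.

STANDARDS = {
    "phone_answered_within_rings": lambda v: v <= 3,   # answer within 3 rings
    "room_turnaround_minutes": lambda v: v <= 30,      # room ready within 30 min
    "sink_spot_count": lambda v: v == 0,               # no visible spots on sink
}

def score_inspection(observations):
    """Return (fraction of standards met, per-standard results) for one inspection."""
    results = {name: check(observations[name]) for name, check in STANDARDS.items()}
    met = sum(results.values())  # True counts as 1
    return met / len(STANDARDS), results

rate, detail = score_inspection({
    "phone_answered_within_rings": 2,
    "room_turnaround_minutes": 45,
    "sink_spot_count": 0,
})
print(f"{rate:.0%} of standards met")  # prints "67% of standards met"
```

The payoff of this structure is that each standard yields an unambiguous result that can be tracked over time, rather than a “friendly smile” that two observers might rate differently.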
So a better economy might help improve service somewhat. But ultimately service problems in the West are based on a fundamental misunderstanding of customer service and performance.
As the content (and garbage) on the web continues to proliferate, it’s sometimes hard to separate the gold nuggets from the chaff (or the garbage). This is especially true in the performance arena. There is one particular site that has been up a while (“a while” in this case means since 1995). It’s the work of consultant Don Clark. Don has produced a true labor of love that everyone in the workplace learning and performance fields needs to be aware of. With a background in the Army and then at Starbucks before he set off on his own, Don decided to create a site not to promote himself but to cover a wide range of ISD, training, OD, performance, management and programmed learning content. He’s got a variety of self-created templates, forms and manuals you can download on topics like ISD or task analysis. He provides a list of HRD names and why they matter, books that are important, timelines for particular topics, relevant quotes and more. But mostly the “more” is about tools and examples and content around how to do what it is that we do–more intelligently and effectively. And the site is clearly designed to share knowledge, not for self-promotion or profit. Frankly, I cannot think of a single person in the workplace learning and performance field who has been so prolific on their website in terms of content.
The primary topic headings off the main page are: leadership, training, learning, history, knowledge, performance, java, news and his blog. And under each of these topics, you’ve got a wealth of depth (in some cases over 100 individual pages of content in the form of a user’s manual or separate job aids). Quite simply, there is a tremendous amount of eclectic depth and breadth on this site. In the few exchanges I’ve had with Don in the past, he’s proven to be trusting, magnanimous, helpful and easy to deal with. I once wanted to use some of his material with some university professors in Central Asia, and instead of making me jump through a lot of hoops, he made it easy for me to move forward with his material.
I’m going to list the URL in just a few lines, but I do so with a caveat–I think the URL has changed a few times over the 15+ years that Don has had this site up (and he continues to add to it). So if for some reason the URL doesn’t work (which could be due to my error or a change on his part), I’ve always found it by searching for “Big Dog’s bowl of biscuits” (certainly a memorable phrase). And if you go to the website and look at “about,” you’ll see pictures of “big dog” and “little dog” with an explanation that will probably draw a chuckle from you and also just drive home how amazing this site is–that Don is clearly doing this out of a desire to help the field and share knowledge, not to profit individually or market himself.
The most recent URL that got me to Don’s site is: www.nwlink.com/~donclark/ and if you haven’t visited the site before, I strongly suggest you do. Don–keep up the great work!
With some labeling the BP oil spill in the Gulf of Mexico as the worst environmental disaster the USA has ever experienced, it’s worth looking at what we know so far about efforts to deal with the spill for performance improvement lessons. As I look at what I’ve heard about this disaster, several critical lessons come to my mind.
- Ignore process at your own peril. There has been such an emphasis on “action” and “leadership” (both by private and public sector organizations) that we’ve seen lots of money, people and activity–but often at cross-purposes. Throwing money and resources at any problem is usually ineffective when there is no clear alignment around the process connecting all of the specific tasks.
- It’s a lot easier to prevent a problem than to fix a mistake. The Gulf Oil spill illustrates this point so well–far better and easier to prevent the rig blowout than to clean up tar balls from beaches and try to bathe birds.
- Being clear about the desired outcome is critical. Those of you knowledgeable about performance improvement know how critical outcomes are as a means of providing direction. Unfortunately, everyone assumed there was a clear purpose (clean up the spill) when actually there was tremendous disagreement about direction. Some groups argued for booms to corral the oil (which doesn’t address oil beneath the surface). Others argued for heavy use of chemicals to break down the oil (which was opposed by those who felt this could produce worse environmental impacts than the oil itself). The disagreements were more than just differences on tactics; they reflected major (and often incompatible) directions.
- Data matters. Throughout the first month of the disaster, there was a consistent inability to answer some of the most basic questions, like: approximately how much oil is escaping daily, what backup or contingency plans are reasonable if the first cap fails, what are the environmental impacts of the oil dispersants being used, and what percentage of the oil remains beneath the surface? Without some kind of data, policy decisions were being made on the basis of educated guesses and anecdotes.
What other performance insights have you gotten from this mess?
I’ve been doing work on strategy and strategic planning with a number of different clients lately, and it’s gotten me thinking about the issue of blindspots. There are things that we know to be true (or strongly suspect to be so). I don’t mean dogma or blind faith; rather, through data, research, experience, customer feedback and measuring performance, there are some things about which we can confidently say “this is something that we know to be true or accurate.”
Then we have areas that we know we don’t know. For instance, I know that I’m pretty uninformed about the tax code. Because of my awareness of my ignorance, I can make smarter decisions about taxes—by hiring an accountant. Or being especially careful when I fill out my taxes each year.
The reality is that no person or organization can know everything. So ignorance about particular topics or situations is a reality of being in the world.
But a blindspot occurs when a person or organization is ignorant about a situation and doesn’t realize the ignorance exists. It may be due to dogma. It may be because the situation has changed—what used to be true no longer is but people haven’t recognized that. It may be due to a lack of depth—someone doesn’t realize the degree of complexity to a particular issue. In short, a blindspot is a case where we don’t know that we don’t know something.
Blindspots are particularly damaging to organizations. That’s because most big surprises to organizations (especially environmental or market ones) tend to occur because of a collective blindspot: the organization and its executives simply failed to perceive the potential for surprise on that specific issue.
You’ve probably all heard of the phrase “what gets measured gets done” and certainly organizations are paying increasing lip service to the concept of measuring performance more. This post is not an argument for not measuring. It’s a lesson about the importance of measuring the right things.
A number of years ago, I was called in to help a call center improve their performance. This call center was a 1-800 “help” provider—you called them when a particular appliance stopped working and you needed immediate help or troubleshooting (from simple steps to fix the problem to where to take it to get repaired to what your warranty did and did not cover). Thus, when customers called this center, it was almost always because something was broken—and often with catastrophic consequences.
The call center management team specifically asked me to find ways to reduce the amount of “hold time” that individuals had to wait before getting an associate to help them on the line, and also to reduce the average length of the calls (the theory being that shorter calls would also mean less wait time). And, as a “P.S.,” the management team asked me to also take a look at a call center associate named Martha. Martha, they said, was a really sweet person, but if she didn’t turn things around, they would have to fire her. Specifically, they said she was too informal with callers (oftentimes not referring to them as “Mister” or “Ms.”). And her average call length was longer than that of the majority of other associates in the call center. Now it’s worth noting that the vast majority of call centers do measure things like average wait time, call length and whether or not associates follow the script–that’s pretty standard for the industry. Continue reading “Are You Measuring the Right Thing?”
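For readers unfamiliar with these industry-standard metrics, here is a loose illustration (entirely hypothetical data, not from this client) of how average hold time and average call length are computed from a call log:

```python
# Illustrative sketch: the two standard call-center metrics mentioned above,
# computed from a (hypothetical) log of calls.

from statistics import mean

calls = [
    # (hold_seconds, call_length_seconds)
    (45, 310),
    (120, 540),
    (30, 260),
    (75, 410),
]

avg_hold = mean(hold for hold, _ in calls)        # average wait before an associate answers
avg_length = mean(length for _, length in calls)  # average time spent on each call

print(f"Average hold time:   {avg_hold:.1f} s")
print(f"Average call length: {avg_length:.1f} s")
```

The metrics themselves are trivial to compute; the post’s larger point is that they may be the wrong things to measure in the first place.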
Intellectually, everyone gets the value of performance appraisals. Yet nearly every client I’ve ever encountered bemoans the process, and most employees criticize the appraisals. Why does something that should have so much value end up being so belittled?
Organizations do lots of things wrong when it comes to reviews. There is a tendency to spring the final evaluations on employees as a surprise. I have lost count of the number of people who told me that they came out of their appraisal session in shock—having heard things they didn’t expect. One basic rule of the formal appraisal is that nothing in that session should come as a surprise to the employee—it’s just a formal meeting to review and sign off on the informal coaching and counseling that went on earlier during the year. Another issue is the tendency for managers to put off appraisals until the last possible moment. There are lots of reasons this happens. In some cases, it’s about avoiding unpleasantness or confrontation. In others, it’s because it’s a hassle to prepare for the appraisal and do the paperwork—often because the criteria are so subjective. Continue reading “Performance – And Performance Appraisals”
Anyone who is familiar with my work or my publications knows that job aids are near and dear to my heart. My third book (Job Aid Basics) is about the subject. Any serious performance student or consultant knows about the power of job aids—how they are a cheap and effective way of improving performance. Well, there is a great new book out by the surgeon Atul Gawande called The Checklist Manifesto.
Checklists are just one example of a job aid. What is a job aid? A job aid is a device or tool used to improve memory or confidence on the job and thus overall performance. A wrench is not a job aid (it’s just a tool). But a checklist (which reminds us of what to do), a recipe with steps (so we don’t add the eggs too soon), a trouble-shooting guide on how to figure out why the car doesn’t start—these are all job aids.
Gawande writes about a number of examples in this great book, but his primary examples involve healthcare. He examines the case of the Johns Hopkins ICU where, using a simple five-item checklist, the staff reduced central line infections from 11% to 0%, preventing an estimated 43 infections, saving 8 lives and saving 2 million dollars per year. Gawande and a team then tried the same approach at a number of hospitals around the world, from rural Tanzania to Seattle. Using a 19-point checklist for surgery, they found that EVERY hospital experienced a significant drop in post-operative complications and deaths. In the 6 months after the checklist was introduced, complications fell by an average of 36% and deaths fell by an average of 47%. This involved no new technology, no other major changes, no influx of talent or resources—just the use of the checklist during surgery.
Performance consultants know about job aids. Joe Harless gets credit for having coined the term. Job aids are often a faster, cheaper alternative to training. They’re an underutilized way of improving performance and a useful tool in the performance consultant’s tool box.
Gawande has done us performance consultants a tremendous favor. He has a significant following (staff in the Obama White House follow his writings; both this book and his previous one, Better, are about improving performance). Dr. Gawande has provided very specific, tangible and quantifiable examples of how performance can be radically improved with even simple tools or approaches. For all the clients out there who want to throw training at the problem, rehire a workforce or change the bonus structure, Gawande’s work is a useful tool to help us make the case for a performance-based approach to improvement.
When I was initially starting out as a performance consultant, clients used to ask what that title meant—what is a performance consultant? And I’d stumble into a definition of what human performance improvement is and what distinguishes it from other approaches, only to discover that after about the second sentence my client’s eyes had usually glazed over. Typically, we as performance consultants do a lousy job of explaining to clients what it is we do and why it works. And the biggest reason this happens repeatedly is that we fail to see (or hear) things from the client’s perspective.
An accurate definition of HPT or HPI may be fine and good but frankly, most clients don’t care about the academics or the theory. Their focus is more likely to be on: “what can you do for me?” Now if a client wants to know how my approach differs from that of someone in another field, I’m more than happy to provide a performance consulting model or explain particular aspects of the process. But now, when talking with clients, my explanation usually is about the payoff to the clients—the business result. Most of the time I tell clients (especially executives) that I’m a “business consultant.” Because, frankly, the process I use (performance consulting) is of secondary interest to my clients—what they want are results. Continue reading “Explaining What Performance Consulting is to Clients”
We’ve probably all heard a reference to someone as having “natural talent” or being “particularly gifted in an area” or even being a prodigy. Such claims are often made about athletes or musicians but you will hear them about just about any kind of profession. And they’re complete bunk.
Professor Anders Ericsson at Florida State is the leading researcher in what has now become known as “genius research.” Ericsson and others look at what it takes for someone to become an outstanding performer in their field. What they’ve found is that raw talent—even physical ability (like size in football or height in basketball)—makes very little difference in determining whether someone becomes great. Instead, it’s primarily about two different factors:
- How much you practice
- How well you practice
Let’s take a look at each of these factors. Continue reading “Natural Talent? I Think Not.”