Rethink Your Annual Performance Review

As we approach the end of the year, law firms and legal departments are gearing up for annual performance reviews. Undoubtedly, we’ve been through a lot of change recently. Some for the better. Some for the worse. I don’t know about you, but I prefer change for the better – and that includes working with my clients to change performance reviews for the better too.

Law firms generally conduct performance reviews to achieve two results:

  1. To gather information they can use to make decisions about associates, including compensation and promotion

  2. To gather information they can communicate to associates to help them develop and improve

To collect this information, many firms use a performance-based competency model. By that I mean: first, firms decide which skills and knowledge play a part in success. Then they ask partners to rate associates on those skills. Often the rating is on a scale from 1 to 5 - with “excellent” at one end and “needs improvement” at the other.

The ratings are then used as a measure of the associate’s skill and ability. Written comments might accompany the numerical score, but often they appear only in a general overall comments section, if at all. Sounds good. Partners who work with an associate rate their skill and ability on - let’s say - “teamwork”. Then we use the partner’s rating to assess how strong the associate is on that skill. But it turns out ratings aren’t that effective, because they don’t give us the information we need.

Ratings Don’t Give Firms the Information They Need 

If you’ve collected this type of feedback over several review cycles, as I have, you come to recognize that you have to look at who gave a rating to understand its significance. That’s because over time you notice that some partners are lenient and give everyone high scores, while other partners are tough and generally give lower scores. Similarly, some partners focus more on certain competencies than others. When that’s the case, ratings seem to depend, at least in part, on the priority and importance a partner places on the competency.

Studies have measured our ability to rate others, and we’re not that good at it

Anecdotal observations aside, researchers have studied, and measured, our ability to rate someone else on an abstract characteristic. It turns out that our rating of someone else actually says more about us than about the person we’re rating. Researchers call this “the idiosyncratic rater effect” (Scullen, S. E., Mount, M. K., & Goetz, L. M., “Understanding the Latent Structure of Job Performance Ratings,” Journal of Applied Psychology, 2000).

So, for example, when a partner rates an associate on “teamwork”, that rating is fundamentally shaped by a number of factors, including the partner’s own understanding of teamwork, their opinion of what good teamwork looks like, how tough they are as a rater, and other biases they may have. In the end, the partner’s rating tells us more about the partner than it does about the strength of the associate’s skill and ability.

Ratings Don’t Give Associates the Information They Need

I can almost guarantee we’ve all been rated on something at some point. If that includes you, what did your rating tell you about your development, or about how to improve your performance? When we think of it this way, it seems obvious that ratings aren’t effective.

Ratings don’t help associates develop and improve

At best, a good rating might positively impact how an associate feels about whoever rated them, their perception of themselves, and how secure they feel in their position. But a good rating does little beyond that. Even the strongest associates don’t know what they can do to keep their rating high. An associate who is told to “keep doing what you’re doing” will not be a strong performer in a year or two if they actually take that advice, because expectations rise as they become more senior. Instead, we’re expecting natural progression to a new level of aptitude. And we often expect it without sufficient insight into what it looks like, or how to achieve it.

At worst, an average or negative rating probably has a less than positive impact on how an associate feels about whoever rated them, their perception of themselves, and how secure they feel in their position. Ironically, the first question the associate is likely to ask is – “Who gave me this low score?” They sense intuitively that they need this information to make sense of the score. And they’re right. (Remember the idiosyncratic rater effect.) But more importantly, without context and detail, associates who receive average or below-average ratings are left at a loss as to how to improve. We hope they get a better score next time, without giving them sufficient insight into what’s expected and how to achieve it.

You Can Change - For The Better

If you’re still collecting ratings to assess specific characteristics and skills, and you concede that ratings probably don’t give you good information and don’t help associates develop and improve, how can you get more value from the time and effort you invest? The answer certainly can’t be to keep collecting the same information and doing performance reviews the same way. We’ve all heard the quote commonly attributed to Einstein:

“The definition of insanity is doing the same thing over and over again and expecting different results.”

Instead of collecting the same ratings, gather information that’s more holistic, balanced, and fair, and use it to inform decision-making. Instead of collecting the same ratings, gather information that focuses on growth and development, and communicate it to associates. My clients and I have done it differently - and we think it’s changed results for the better. You can too.