A radical rethink of assessment centres

Published 18th January 2020

Assessment centres are employed with one objective: to make the assessment of talent more objective and reliable. Yet many of the ACs we've seen share two inherent design biases that are rarely discussed.

The principle of AC design should be simple. Let's take an AC designed to select entrants for a graduate or high-potential scheme.

You create a realistic scenario, so that the AC takes place in a replica of the real-world situation – a chemical factory, consultancy or retail setting. You then create exercises based on key parts of the role, so that you are assessing capacity for the role requirements. Finally, you add an intelligence test, because research suggests that general ability scores are the single best predictor of future performance. Now you have the perfect, balanced method of assessment, because ACs that include an ability measure have proved to predict performance even better than intelligence tests alone.

Simple, right?

But now we start to complicate and interfere with this pure design.

We introduce competencies. It’s pretty much standard now that ACs are designed around a set of competencies, with each competency assessed at least twice, by different people. This means that observers have to assess several competencies in every exercise.

There are 2 problems with this, both of which bias our ‘objective’ results.

Bias 1 – Bias towards observation data

A simple task-based AC might produce the following results, where performance is rated 1-5, with 1 being in the top 20% of performers and 5 being in the bottom 20%:
Exercise A – 1
Exercise B – 2
Exercise C – 2
Exercise D – 3
Ability Test – 4

A competency-based AC might yield the following data on the same candidate:
Problem-solving – 3, 3
Innovation – 1, 2
Goal Orientation – 1, 2
Drive – 2, 3
Influence – 1, 3
Diplomacy – 1, 2
Ability Test – 4

Remember that ability tests are usually the most predictive indicator of success? In the simple, task-based AC this emerges clearly as the key issue with this candidate. But when you use competencies, you skew the data: you get 12 ratings from the exercises, but only 1 rating from the ability test.

Ability is now 1 rating out of 13, where before it was 1 out of 5. The most predictive measure supplies roughly 8% of the data rather than 20%. It's easy to overlook.
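To see the dilution in numbers, here is a minimal sketch in Python, counting the data points each measure contributes under the two designs above (remember that lower scores are better on this 1–5 scale):

```python
# Number of ratings each measure contributes under the two designs.
task_based = {
    "Exercise A": 1, "Exercise B": 1, "Exercise C": 1,
    "Exercise D": 1, "Ability Test": 1,
}
competency_based = {
    "Problem-solving": 2, "Innovation": 2, "Goal Orientation": 2,
    "Drive": 2, "Influence": 2, "Diplomacy": 2, "Ability Test": 1,
}

def ability_share(design):
    """Fraction of all data points contributed by the ability test."""
    return design["Ability Test"] / sum(design.values())

print(f"Task-based:       {ability_share(task_based):.0%}")        # 20%
print(f"Competency-based: {ability_share(competency_based):.0%}")  # 8%
```

If the wash-up meeting weighs every rating equally, the one score that research says matters most is quietly outvoted.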

Bias 2 – Manager limitation bias

There’s another problem with using competencies.

Unfortunately, managers are not very good at assessing 3 different things in one exercise, especially when they have had little time for thorough training immediately before the AC (ever had a problem with that?).

In fact, the research suggests that managers don't actually assess 3 separate competencies when they observe an exercise. What they really assess is "How well did this person do on the exercise?" Their rating of Influence on Exercise 1 will be more closely related to their other 2 ratings on Exercise 1 than to another assessor's rating of Influence on another exercise – sometimes called the 'exercise effect'.
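A toy simulation shows the pattern. This is an illustrative sketch with invented numbers, not AC research data: it assumes each assessor forms one overall impression per exercise, and every competency rating in that exercise is a noisy copy of that impression.

```python
import random
from statistics import correlation  # Python 3.10+

random.seed(1)

# Simulate 500 candidates. The assessor forms one overall impression per
# exercise; each competency rating is that impression plus rating noise.
influence_ex1, drive_ex1, influence_ex2 = [], [], []
for _ in range(500):
    impression1 = random.gauss(0, 1)  # overall impression, Exercise 1
    impression2 = random.gauss(0, 1)  # independent impression, Exercise 2
    influence_ex1.append(impression1 + random.gauss(0, 0.5))
    drive_ex1.append(impression1 + random.gauss(0, 0.5))
    influence_ex2.append(impression2 + random.gauss(0, 0.5))

print(f"Influence vs Drive, same exercise:        {correlation(influence_ex1, drive_ex1):.2f}")
print(f"Influence vs Influence, across exercises: {correlation(influence_ex1, influence_ex2):.2f}")
```

Under these assumptions, two 'different' competencies in the same exercise correlate strongly (around 0.8), while the same competency across exercises barely correlates at all, which mirrors the pattern the research describes in real assessor ratings.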

So the second problem with competencies is that they are spuriously scientific. We think we are fine-tuning the assessment by looking at 6 competencies, but that precision isn't supported by how managers actually rate.

Is there an alternative?

Instead of using competencies as the basis of assessment, why not use them as the basis for design?

Design 4 exercises that reflect different key aspects of the job, and test these against the competencies required for high performance. Now you know that the competencies are being assessed. You can then simplify things by asking managers to rate only how well the candidate performs in the exercise they are observing.

Managers now only have to understand the exercises and what good performance looks like, which makes assessment much simpler. Observers can concentrate on ticking off the behaviours that were or were not shown, and have more time to convey the nuances of how to improve.
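As a sketch of how simple the resulting record could be (the exercise names and behaviours here are my own invented examples, not a prescribed format), each observer logs one overall rating per exercise plus a behaviour checklist, and feedback falls straight out of it:

```python
from dataclasses import dataclass, field

@dataclass
class ExerciseRating:
    """One observer's record for one exercise (illustrative structure only)."""
    exercise: str
    overall: int  # 1 = top 20% of performers ... 5 = bottom 20%
    behaviours_shown: list[str] = field(default_factory=list)
    behaviours_missed: list[str] = field(default_factory=list)

candidate = [
    ExerciseRating("Client negotiation", 2,
                   behaviours_shown=["summarised both positions", "proposed a trade-off"],
                   behaviours_missed=["tested commitment before closing"]),
    ExerciseRating("Production incident", 3,
                   behaviours_shown=["prioritised safety"],
                   behaviours_missed=["escalated early", "briefed the team"]),
]

# Feedback is just: the task, the rating, and what to do differently next time.
for r in candidate:
    print(f"{r.exercise}: rated {r.overall}; to improve: {', '.join(r.behaviours_missed)}")
```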

There are benefits for candidates, too. They will understand feedback on how to improve at different tasks more easily than competency feedback (especially if they get hung up on those conflicting Influence scores from different exercises). The feedback will seem more real, and fairer. And because the feedback is simpler, maybe you'll have time to give it to unsuccessful external applicants too, boosting your employer profile.

Take Away
Consider whether competency-based assessment is really helpful. Would managers find it easier to rate and make decisions without competencies?
Would it be quicker to give everyone feedback?
Would candidates understand better how to improve?