LISTEN UP

Bias isn’t the only problem with decision-making at your company

[Image: an empty boardroom with many chairs around the table. REUTERS/Andrew Kelly]
Not noisy now, but just wait.
By Cassie Werber

Cassie writes about the world of work.


The problem of bias in hiring, pay, and other decision-making is high up the corporate agenda right now, and the subject of a range of solutions with varying degrees of efficacy.

But a new book, Noise, by Princeton University psychologist Daniel Kahneman, HEC Paris business school professor Olivier Sibony, and Harvard legal scholar Cass Sunstein, identifies a different problem, one the authors say has slipped under the radar.

Noise, in this case, refers to the plethora of errors that creep into decision-making and are hard to spot because they are so many and various. Speaking to Quartz from Paris, Sibony explained just what noise in an organization is, and what to do about it.

Quartz: Why is the concept of noise important for organizations to be aware of? What problem do you think identifying it can solve?

Olivier Sibony: Noise is the fact that in an organization, where you expect judgments to be identical, there are differences between people, or sometimes between two different occasions when the same person is making the same judgment. To take an example, hiring decisions or performance evaluation decisions are usually dependent on the person who is making the decision, as opposed to the person who is the subject of the decision.

Organizations should care about that, because they often rely on one person to make the decision—and if they rely on more than one person, they usually don’t have the right measures in place to make sure that they take advantage of that potential diversity.

You don’t want your decisions to be the result of a lottery, and essentially “noise” is a lottery. And because it’s a lottery, it creates a lot of errors. If you’re making “noisy” hiring decisions, you are not hiring the best people. If you’re making noisy performance evaluations, you are not rewarding the better people and sending the right signals to the underperforming ones. It’s a matter of credibility, it’s a matter of fairness, and it’s a matter of accuracy in your decisions.

You make a clear distinction in your book between bias and noise. Can you explain it?

Bias is a great explanation for errors, it’s a great culprit. We can actually point our finger at it and say: “I was not hired because of the gender bias in the person who was evaluating me.” These biases do explain many errors in HR decisions.

The problem, though, is that there isn’t just bias, there are also random errors. If you have a bias toward hiring more men than women, you’re probably also not hiring the right men, and when you’re hiring women you may not be hiring the right women.

The fact that you have, or that you don’t have, a bias is a separate question from whether there is variability in who you hire based on who is making the decision, or what time of the day it is. And when we look at how susceptible those kinds of decisions are to a change in the person making them, or even to a change in the context in which they are made, it’s clear that influences that shouldn’t play a part do play a part. What we’re trying to do is to raise the profile of noise, because in many cases it’s actually larger than bias.

What are some of the practical things organizations can do about noise?

Nicolas Reitzaum/HEC Paris
Olivier Sibony

Fixing a bias is like curing a disease: You know what the disease is, you know what the symptoms are, and you’re pushing in the opposite direction. Fighting noise, on the other hand, must be prophylactic in nature, because you don’t know in what direction you are going to make mistakes. It’s about changing the process by which you make decisions, to stop the sources of noise from seeping into your decision process.

Take a simple example of measurement: If you step on your bathroom scale in the morning and the scale is a bit noisy (when you step on it several times in succession, it gives you different readings), you intuitively know that if you take the average of several readings you will get a more accurate estimate of your weight than if you take the first reading alone. So averaging multiple independent measurements of the same thing, or averaging independent judgments of the same problem, like how good a candidate is, will reduce noise. We can even say statistically by how much it reduces noise.
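The bathroom-scale example can be sketched numerically. This is a minimal Python simulation, with all figures invented for illustration, showing the standard statistical effect Sibony alludes to: averaging n independent readings shrinks the random error by roughly a factor of the square root of n.

```python
import random

random.seed(0)

TRUE_WEIGHT = 70.0  # hypothetical true value (kg)
NOISE_SD = 1.5      # spread of the scale's random error (illustrative)

def reading():
    """One noisy measurement from the bathroom scale."""
    return random.gauss(TRUE_WEIGHT, NOISE_SD)

def averaged(n):
    """Average of n independent readings."""
    return sum(reading() for _ in range(n)) / n

# Compare the error of single readings vs. averages of 4 readings,
# across many trials. Averaging n independent readings shrinks the
# noise by a factor of sqrt(n), so averages of 4 should be about
# twice as accurate as single readings.
trials = 10_000
err_single = sum(abs(reading() - TRUE_WEIGHT) for _ in range(trials)) / trials
err_avg4 = sum(abs(averaged(4) - TRUE_WEIGHT) for _ in range(trials)) / trials

print(err_single)  # mean absolute error, roughly 1.2
print(err_avg4)    # roughly 0.6, i.e. about half: sqrt(4) = 2
```

The same arithmetic is why aggregating several interviewers' independent judgments of a candidate beats relying on any single one.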

The problem in how organizations practice this—for example, in hiring—is that they do not generally keep the judgments independent. In all sorts of ways, sometimes subtle and sometimes not subtle at all, people who participate in the decision process influence each other. Say three of us have met the candidate and we get together. The first person walks into the room and says: “What a great guy, let’s talk about him.” Or: “Interesting candidate, I’d love to know what you think about him.” Well, if that person is the boss, she’s already given you her sense of which way she’s leaning.

Those group influences, the social influences, do not reduce the noise; they actually increase it. They make it more random, more likely to be different from what another group of people would decide, than if you had had no discussion at all.

So what’s the solution?

Organizations are designed to produce consensus, to produce convergence, and to produce action. If you want to have independent judgments and to be able to aggregate them, you need to take special precautions to keep people in the dark about what others think, so that their judgments remain independent until the time you decide to actually put them together to make a final decision.

So that’s two solutions: Aggregating multiple inputs from independent people, and structuring judgments across multiple dimensions, making sure you evaluate those dimensions independently of one another.

You talk about using rankings rather than ratings. Can you explain a bit more?

There is inherently less noise in relative judgments than in absolute judgments. There is less noise when you rank people or things than when you rate people or things.

If I ask you to rate people and you say “very good,” well, “very good” to you may mean something very different than it means to another person. It may mean something very different to the person who reads it. “Pretty good” in England means something very different from “pretty good” in the US. For cultural reasons, and for interpersonal variability reasons, you could have a lot of misunderstandings about the scale that a company uses when they say: excellent, very good, OK, etc.

It’s a lot less noisy to say: On this dimension, say, quality of writing, our gold standard is Kathy. If you write as well as Kathy, that’s an A. If you were a B, you would write as well as Olivia. To be a C, you would need to have the same quality of writing as Tom. And every time we look at somebody’s writing we ask: Is this as good as Kathy? No. Is it as good as Olivia? Yes, OK, so it should be a B. There’s a lot less noise when you do those types of comparisons than when you say, “Yes, she’s a very good writer.”
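The scale problem Sibony describes, where “very good” means different things to different raters, can be sketched as a pure offset: if each rater’s personal calibration shifts every score by a constant, their absolute ratings disagree on everyone, yet their rankings coincide exactly. A minimal illustration, with all names and numbers invented:

```python
# Hypothetical true quality of four candidates on some shared scale.
true_quality = {"Ana": 82, "Ben": 74, "Cleo": 68, "Dev": 90}

# Two raters whose personal scales differ: what "very good" means to
# the lenient rater sits 16 points above what it means to the harsh
# one. (Offsets are illustrative, not taken from the book.)
lenient = {name: q + 8 for name, q in true_quality.items()}
harsh = {name: q - 8 for name, q in true_quality.items()}

def rank(scores):
    """Order candidates from best to worst by their score."""
    return sorted(scores, key=scores.get, reverse=True)

# Absolute ratings disagree on every candidate...
print(lenient["Ben"], harsh["Ben"])  # 82 66

# ...but the ranking is identical, because each rater's constant
# offset cancels out within their own comparisons.
print(rank(lenient))  # ['Dev', 'Ana', 'Ben', 'Cleo']
print(rank(harsh))    # ['Dev', 'Ana', 'Ben', 'Cleo']
```

Real raters add random error on top of offsets, so rankings are not perfectly stable either, but the level-of-scale component of noise drops out entirely.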

You also talk about using rankings in performance reviews, which sounds terrifying.

What you are saying is people hate performance reviews, right? They hate them whether it’s a rating or a ranking. And one of the things they hate about them is that they are very noisy. In fact, most of the research we looked at suggests that, in the evaluation you get, about three quarters of the variance is noise. Only one quarter of it has anything to do with your performance. So it’s a fair question as to whether, as a company, you want to have an evaluation system. You can do without it. But if you choose to have one, and if it has consequences, you probably want it to measure something that is not noise, something that actually reflects the individual’s performance.

Now, when I say rankings are better than ratings, that doesn’t actually mean you have to rank your employees. In fact, that’s a practice that companies have sometimes adopted that is quite destructive, for all sorts of reasons. What it means is that you’ve got to compare the performance of each employee, on each dimension, to a standard that is embodied by a case that you can point your finger at. The case might be someone who left the company five years ago and remains the gold standard. Or the case might be a video vignette that has been created for the occasion to describe what it is like to behave like a B in customer service skills in a restaurant. The point is to compare people to an actual standard that can be defined in sharp terms, without interpersonal variability, and not just to say “very good” or “ok,” because that’s very noisy.

Is there anything about the present moment, with all its discussions about equality, that is either good or bad for talking about noise?

The implicit question here is, why haven’t we talked about this for so long? Why don’t we care about it as much as we care about bias? And the reasons are manifold. First, bias is more charismatic, bias is sexier. Noise is a statistical observation: It’s abstract, and it’s harder to get worked up about.

The other reason why we don’t notice noise is that organizations do a pretty good job of hiding it. Organizations do not regularly do what we call a “noise audit,” which is to ask different people to weigh in on the same decisions separately and to measure how much they disagree.
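A noise audit, as described above, can be sketched in a few lines: several judges assess the same cases independently, and the organization measures how much they disagree. The cases, names, and figures below are invented, and spread relative to the mean is just one reasonable disagreement measure among several:

```python
import statistics

# A toy noise audit: four judges independently price the same two
# cases (e.g. insurance quotes). All numbers are made up.
quotes = {
    "case 1": [9_800, 12_500, 10_900, 16_000],
    "case 2": [4_200, 3_900, 6_700, 5_100],
}

for case, values in quotes.items():
    mean = statistics.mean(values)
    spread = statistics.stdev(values)  # sample standard deviation
    # Noise expressed as spread relative to the average judgment:
    # how far apart the judges are, as a share of a typical quote.
    print(f"{case}: mean={mean:,.0f}, noise={spread / mean:.0%}")
    # -> case 1: mean=12,300, noise=22%
    # -> case 2: mean=4,975, noise=25%
```

If the audit shows judges diverging by 20 percent or more on identical cases, the lottery Sibony describes is visible directly, with no need to know who is right.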

It’s more of a hope that I would formulate: Thanks to the concern about biases, we are getting much more attuned to the importance of making correct decisions, and to the risk of making bad decisions, than we were before. That should lead us to also tackle noise at the same time, because a lot of the remedies that tackle noise will also reduce bias.

This interview was lightly edited and condensed for clarity.
