Often, you’ll hear people say that you should “trust your instincts” when making decisions. But are first instincts always the best?
Psychological research has repeatedly shown that they are not: first instincts are often no better, and in many cases worse, than a revised answer. Despite the widespread belief that first instincts are special, dozens of experiments have found no such advantage.
While that may be a useful fact to bring up in an academic discussion, anyone who has ever made a decision in real life will undoubtedly reply: But I remember times when I made a correct choice, then changed my mind and was wrong.
This happens for two reasons. First, humans naturally exhibit the endowment effect: we feel strongly attached to things we already have (our first instinct, in this case).
We don’t want to give it up, and we feel especially bad when we do give it up and it later turns out to have been correct. We remember these instances vividly, so they seem very common, even though research consistently shows they are the less common outcome.
The second reason is more obvious: sometimes first instincts actually are correct. The problem is figuring out when to trust yourself and when to change course.
The solution may lie within the realm of “metacognition,” the ability to “think about thinking” and use those thoughts to monitor and control behavior.
I originally began exploring metacognition in rhesus monkeys. They were given various questions, some easy and some quite difficult, and had to either answer or report that they did not know. I was amazed at their robust ability to look into their own minds and “know when they did not know” the right answer. They could accurately judge whether they would get a question correct or incorrect.
I was equally amazed that my (human) undergraduate students sometimes seemed to lack this ability. They were always surprised at their exam grades; some significantly overestimated their performance, while others underestimated it. They also believed their first instincts were special, even when their own behavior—successfully revising answers—showed otherwise.
Surely my students knew how well they performed on each question, and could thus figure out how well they’d done on the exam, right? Recently, my colleagues and I tested this by studying students’ metacognition while they were taking exams.
We asked the students to track their confidence in each response on a real multiple-choice psychology exam, marking each answer as either a “guess” or “known” to indicate how sure they were about their original choice. They also marked whether or not they revised that original response.
More often than not, the students’ revisions—changes from a first instinct to a new choice—resulted in a correct answer. And on questions that caused the most uncertainty, sticking with an initial response was a bad idea: they were wrong more than half the time.
Their metacognition, in the form of a confidence rating for each question, was an excellent predictor of whether they had made the correct decision and thus whether they should change their response. In other words, they were able to tell, in the moment, whether they would get the question correct. And because they wrote those accurate judgments down, they could use them later when deciding whether to change an answer.
In a second experiment, we looked at sticking with original answers. Again using metacognitive confidence ratings, this time on a one-to-five scale, students were able to identify the questions that they were most likely to get correct or incorrect.
Using those ratings as a guide, we found that when they chose to stick with an original instinct they were correct more often than not. Thus, both revisions and first instincts were correct most of the time.
On the surface, that might seem like a contradiction. And it would be, if the only tools the students had in their arsenal were “always trust your instincts” or “always change your mind.”
But we gave them a slightly more sensitive tool, a written-down record of their metacognitive confidence, which allowed them to choose when to revise and when to stick. Everyone feels their level of confidence when they make a decision, but the problem is that we quickly forget this information when we move on to the next decision.
Because they rated their confidence for each question on paper, they could use those ratings instead of (notoriously faulty) memories. Using this tool, they made more informed choices that helped them perform better.
But why take the time to record confidence levels for each individual question? If they know how they performed on each question, don’t they know how well they did at the end of the exam?
It turns out, no.
Despite being excellent at predicting their performance on each question during the exam, when we asked after the exam, students were very bad at judging how well they’d done.
I’ll give you just one dramatic and disturbing example. We asked them, at the end of the first study I mentioned above, whether they thought a revised choice was more likely to be correct than a first instinct.
Despite the fact that their actual choices and ratings, moments earlier, clearly showed that revisions were better, the overwhelming majority of students falsely believed that their original choice would be the best. That is the dramatic part.
The disturbing part is that an even larger majority reported that a professor or teacher—apparently unaware of the huge body of literature to the contrary—had specifically told them that first choices were most likely to be correct.
Thus, the key to knowing when to stick with your first instinct and when to change your mind is to track feelings of confidence during the moment you make the decision. During college exams, both revising and sticking with original answers had the potential to result in more correct than incorrect answers.
Only the self-tracking of confidence levels predicted when each was more appropriate. By using that simple form of metacognition, students could better identify which questions to revise and which were better left alone.
This disconnect between in-the-moment judgments and after-the-fact beliefs leads to problems in almost every area of decision-making. Most of these problems stem from the fact that our beliefs about ourselves and our personal histories are usually formed long before or long after a decision, not “in the moment.”
Upon reflection, things often seem much different than they actually were.
Tracking how you feel while initially making a decision can provide valuable information later, help you make more informed choices and better prepare you to revise your initial decision when necessary.
I would encourage all educators to consider these findings both while administering exams and while forming their own beliefs about how students learn and take tests. Like the students themselves, our reflective beliefs often differ from the actual experience.
Students benefit from a system that allows them to build metacognitive skill, and they will generally make better decisions if they use empirically validated information about their confidence rather than a folk belief or popular misconception. It is also relatively simple to do this during paper-based or electronic exams, so there is little cost.
Educators should, perhaps most importantly, be wary of giving advice based on their subjective beliefs or (almost certainly) unreliable memories, and instead foster a useful skill grounded in memory and metacognition research.