Research on Decisions Shows Instinct Makes Us Behave Like Cyborgs, Not Robots

How conscious are "conscious" decisions?

Adam Bear isn’t saying that people are pre-programmed automatons. Well, he is saying that, but with a caveat: people are pre-programmed automatons at least when it comes to low-level decisions. You think you decided to take a sip of water, but Bear’s new research with fellow Yale psychologist Paul Bloom suggests that your choice is a retroactive trick of the mind: it convinces you, after the fact, that you had agency over an automatic or reflexive behavior. In reality, Bear says, we’re neither totally programmed nor totally empowered. We’re cyborgs, not robots.

“We might be under an illusion that more of our everyday choices are conscious and deliberative than they actually are,” he tells Inverse.

Here’s how Bear and Bloom’s study worked: 25 participants sat in front of a computer screen on which five white dots suddenly appeared. Each participant was told to choose one of the dots and remember the selection. Then one of the five dots turned red, and the participant pressed one of three keys to indicate whether the red dot was the one they had chosen, was a different one, or had turned red before they had time to choose at all. Each participant completed 280 trials. The researchers controlled how long after the dots appeared the change to red occurred: depending on the trial, a dot turned red anywhere from 50 to 1,000 milliseconds after the dots came on screen.

Statistically, if the red dot is selected at random, there’s a one-in-five (20 percent) chance that it matches the dot a participant chose. Yet the researchers found that the quicker the change, the more often participants reported having made the correct choice. In other words, participants surpassed the statistical baseline when the dot turned red very soon after the dots appeared; on average, they did about 10 percent better than should be possible.
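
To make that chance baseline concrete, here is a minimal simulation sketch (not the authors’ analysis code) that assumes the participant’s pick and the computer’s red dot are drawn independently and uniformly at random. Under that assumption, long-run accuracy settles at the one-in-five level the reported rates are compared against.

```python
import random

N_DOTS = 5           # dots shown on each trial (from the article)
N_TRIALS = 280       # trials per participant (from the article)
N_PARTICIPANTS = 25  # participants (from the article)

def simulate_chance_accuracy(seed: int = 0) -> float:
    """Simulate pure guessing: the participant's pick and the dot the
    computer turns red are chosen independently at random."""
    rng = random.Random(seed)
    hits = 0
    total = N_TRIALS * N_PARTICIPANTS
    for _ in range(total):
        chosen = rng.randrange(N_DOTS)  # participant's pick
        red = rng.randrange(N_DOTS)     # dot that turns red
        hits += chosen == red
    return hits / total

print(f"chance-level accuracy: {simulate_chance_accuracy():.3f}")  # ~0.20
```

Any reporting rate that sits reliably above that 20 percent figure at the shortest delays, as Bear and Bloom observed, cannot be explained by lucky guessing alone.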

The participants' experience. (Yale University)

“If they’re reporting that they guessed the red circle at an unrealistically high rate,” Bear explains, “that means that their phenomenology isn’t accurate.”

"Probability that participants chose the red circle on trials in which they claimed to have had time to make a choice."

Yale University

This discrepancy, this “unrealistically high” success, could be chalked up to humans’ remarkable, untapped intuitive capacities. Or to everyone being a goddamned liar. But the researchers ran a follow-up study to rule out that latter possibility, and they offer a more reasonable explanation: the participants were retroactively, but unwittingly, telling themselves that they had made the correct choice. An improvement that large, Bear and Bloom argue, cannot be explained any other way.

It’s a fascinating result, so Inverse… “decided” to talk to Bear about his work.

Why did these results surprise you?

There’s this nice bias that ramps up as the event of the circle turning red happens close to your choice, and then things sort of drop down to random chance, which suggests that there’s a very robust, low-level perceptual illusion in which we can experience our choices as occurring before they’re actually made in the real world, in this kind of paradoxical way.

What do you see as the possible greater implications for human agency?

I think it’s still very up in the air. A lot of choice studies, like this one, are based on very simple paradigms: picking some random thing, or flexing your wrist. So this could be a bias that only emerges in these very artificial lab situations. But if not, if it extends to more of our decisions, even everyday decisions made carelessly, we might experience ourselves as consciously making those decisions when in fact our conscious experience is created after the fact: our conscious mind tricks us into thinking it was the cause of our actions by reordering events in time.

Do you have any real-world examples of this phenomenon? When you think about the findings, how does it change the way you look at your own decisions?

Most of our everyday lives are made up of these little sort-of decisions, where we feel like we’re agents. We’re guiding our bodies: we get up to go to the bathroom, we take something out of the fridge, we decide what to eat for dinner, we decide to get out of bed, what clothes to put on. These are all decisions that we think we’re consciously guiding, and we have an experience of agentically moving our bodies. Maybe that experience itself is constructed after the fact, but it feels as if it’s not; it feels as if it’s the causal source of our actions.

Do these results change your opinions about guilt?

I think I could say yes, to kind of, like, tell a nice story. I would like to know a lot more about the conditions under which this illusion occurs. I do think, at some ultimate level, yes it might, in that if it suggests that a lot, if not all, of our decisions are made unconsciously while we think they’re the result of conscious, careful reasoning, that’s going to affect how much we blame people for things. The law does mitigate punishment for more unconscious, ‘crimes of passion’–type acts. So if it’s true that behavior we think is under conscious control is actually guided by unconscious processes more of the time, that might suggest we’re less culpable than we think we are.

This interview was edited for brevity and clarity.