counting electric sheep
Mind-bending neuroscience theory answers an age-old question about dreams
“The point of dreams is the dreams themselves.”
Erik Hoel, neuroscientist and assistant professor at Tufts University
Like the droning chime of a grandfather clock, our waking lives are filled with monotonous and repetitive moments — wake, eat, work, sleep.
This kind of rote behavior might work for machinery like a toaster that we expect to always perform the same task, but for the human brain that continues to learn every day, such repetition can be mind-numbing.
That’s where dreaming comes in to set us straight, Erik Hoel, a neuroscientist and assistant professor at Tufts University, tells Inverse.
In a new report published in the journal Patterns, Hoel proposes that the novelty of dreams — where we can fly through outer space without oxygen or fight evil clones wielding nothing but a tennis racket — provides an essential escape from these routines, one that can help our brains better process and generalize new information.
“The point of these experiences is precisely that they are not lifelike.”
What’s more, Hoel’s theory predicts that the way human brains do this is actually more similar to how artificial intelligence learns than to how other biological systems do.
“We know from artificial neural networks ... that if you can't turn off learning, you're going to become biased,” says Hoel. “Based on the fact that you're never able to really appropriately sample the world — your experiences are always some limited subset of experiences.”
How does A.I. dream?
The question of what A.I. might dream of has fascinated scientists for decades, and in science fiction it has often served as a stand-in for A.I.’s more human-like traits, like imagination or longing. But in reality, we know what A.I. dreams about, and it’s not electric sheep. At least, not usually.
Google Deep Dream has been conjuring up hypnotic and jarring images — like the “Mona Lisa,” made completely from parts of dogs and snakes — since 2015 and was one of the first mainstream examples of what happens when we let A.I. “dream.”
In this case:
- A.I. is fed data, like images of dogs or paintings, in an “awake” state and then put into a “sleep” state to process what it has seen.
- During this sleep state, Deep Dream looks for new, emergent patterns in the images it has already seen (like dogs in the face of the Mona Lisa) and then produces an entirely new image with these patterns drawn out, often recursively, meaning one dog pattern might contain another, smaller dog within it.
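The core mechanism behind this kind of image “dreaming” is gradient ascent on the input itself: instead of adjusting a network to fit an image, the image is adjusted to exaggerate whatever patterns a feature detector already faintly responds to. The following is a minimal numpy sketch of that idea, using a single hand-made 3×3 stripe filter in place of a trained network; the filter, step size, and iteration count are illustrative assumptions, not details from Deep Dream itself.

```python
import numpy as np

# A toy "feature detector": a 3x3 vertical-stripe filter standing in
# for one channel of a trained convolutional network.
kernel = np.array([[-1.0, 2.0, -1.0],
                   [-1.0, 2.0, -1.0],
                   [-1.0, 2.0, -1.0]])

def activation_and_grad(img, k):
    """Total squared filter response over all patches, plus its
    gradient with respect to the image (computed analytically)."""
    kh, kw = k.shape
    act, grad = 0.0, np.zeros_like(img)
    for i in range(img.shape[0] - kh + 1):
        for j in range(img.shape[1] - kw + 1):
            r = np.sum(img[i:i+kh, j:j+kw] * k)  # one patch response
            act += r * r
            grad[i:i+kh, j:j+kw] += 2.0 * r * k  # d(r^2)/d(patch)
    return act, grad

rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.1, size=(12, 12))  # start from faint noise

acts = []
for _ in range(20):
    act, grad = activation_and_grad(img, kernel)
    acts.append(act)
    # Normalized-gradient ascent: nudge the image toward whatever
    # the detector already "sees" in it, amplifying the pattern.
    img += 0.1 * grad / (np.linalg.norm(grad) + 1e-8)

print(f"activation grew from {acts[0]:.3f} to {acts[-1]:.3f}")
```

Run over the layers of a deep network and repeated across image scales, this same loop is what turns random textures into dogs and snakes: the patterns amplified are the ones the network already half-recognizes.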
This kind of “dreaming” is an example of A.I. generalizing what it’s already learned, explains Hoel, and is an essential step in avoiding a problem that plagues neural networks (the webs of artificial connections that make up A.I. “brains”) called overfitting.
Essentially, if a neural network only trains itself to memorize one set of data (e.g., that dogs are animals), it will end up stunting its ability to generalize that knowledge when presented with new data (e.g., that cats are also animals).
Scientists can avoid this overfitting problem by introducing chaotic or novel data to their A.I. to keep it on its toes, and Hoel says this might be exactly what our brains are doing when we dream.
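That trade-off is easy to reproduce in miniature. The sketch below is a hypothetical toy setup, not taken from Hoel’s paper: a high-degree polynomial fitted to a few noisy samples of a sine wave memorizes the training points almost perfectly, while a version retrained on noise-jittered copies of the same data, a crude stand-in for “chaotic” dream-like input, can no longer memorize and so tracks the underlying pattern instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy observations of an underlying sine wave: a small,
# repetitive "daily experience" to learn from.
x_train = np.linspace(-1.0, 1.0, 10)
y_train = np.sin(np.pi * x_train) + rng.normal(0.0, 0.3, x_train.size)

# Unseen inputs drawn from the same underlying pattern.
x_test = np.linspace(-1.0, 1.0, 101)
y_test = np.sin(np.pi * x_test)

def mse(coeffs, x, y):
    """Mean squared error of a polynomial fit on the given points."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# Overfit: a degree-9 polynomial has enough knobs to pass through
# every noisy training point exactly (training error near zero).
overfit = np.polyfit(x_train, y_train, deg=9)

# "Dream" regularization: retrain on many input-jittered copies of
# the same data, so exact memorization is no longer possible.
x_aug = np.concatenate([x_train + rng.normal(0.0, 0.1, x_train.size)
                        for _ in range(30)])
y_aug = np.concatenate([y_train] * 30)
regular = np.polyfit(x_aug, y_aug, deg=9)

print(f"overfit   train MSE {mse(overfit, x_train, y_train):.2e}, "
      f"test MSE {mse(overfit, x_test, y_test):.2e}")
print(f"augmented train MSE {mse(regular, x_train, y_train):.2e}, "
      f"test MSE {mse(regular, x_test, y_test):.2e}")
```

In runs like this, the memorizing fit typically shows near-zero training error but a much larger error on the unseen points than the augmented fit does; the jittered copies play the role Hoel assigns to dreams, keeping the learner from clinging too tightly to the exact data it has seen.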
“You’re not going to automatically generalize just through your experiences,” says Hoel. Instead, he says that dreams play a crucial role in helping our brains avoid overfitting our experiences and improving how we generalize them.
“It does that by having wild, crazy experiences every night,” says Hoel. “And the point of these experiences is precisely that they are not lifelike.”
The Overfitted Brain Hypothesis
In a nutshell, Hoel’s hypothesis proposes that dreaming is a biological form of chaotic data injection: it helps us learn from our repetitive daily experiences by seeking out and exploring novel patterns and scenarios.
Unlike other leading dream theories, which Hoel says frame dreaming more as an artifact of the sleep process than a necessary component of it, this new theory looks at how the content of dreams themselves could be essential for robust learning.
“The point of dreams is the dreams themselves,” says Hoel. “It's just to experience, wild wacky stuff... because that will keep you from overfitting yourself to your more boring daily routine.”
How does it work? Because this is a new hypothesis, Hoel has yet to conduct new research to validate it. However, he says that existing research on dreams already fits well with the idea, such as evidence from human behavioral studies showing that doing a repetitive, novel task (like playing Tetris or juggling) before sleeping is a sure-fire way to trigger dreams about similar real-world scenarios.
“Maybe [we can] alleviate... sleep deprivation by feeding them dreamlike experiences.”
Hoel explains in his report that this could be viewed as our brains attempting to generalize new input data (e.g., how to juggle) in our sleep to avoid overfitting. It is similar to how you might cram for a test the night before and sleep on your textbook to help the knowledge sink in. While the book’s presence likely plays no role, sleeping after this kind of activity likely does, says Hoel.
If our brains didn’t look for new patterns and connections while we slept, we might only be able to answer the questions we memorized instead of acing the entire test.
What are the ethical implications? One way this theory could be applied in the future, says Hoel, would be to harness this dream power in virtual reality dream experiences that could quickly restore mental acuity by creating a dream-like rest state for your mind without ever going to sleep.
“Why don’t we take someone who’s sleep-deprived and see if maybe [we can] alleviate some of their sleep deprivation by feeding them dreamlike experiences,” says Hoel. “One example might be like a very dreamlike VR experience that you give to a fighter pilot who has to be awake for a long time.”
While this might sound like something straight out of science fiction, Hoel says it actually may be similar to how we watch movies and TV today. In fact, Hoel points out that this inherent draw to visual storytelling may itself be originally driven by the narratives we write in our dreams.
What’s next? It’s still early days for the Overfitted Brain Hypothesis, but Hoel says the next steps are to design experiments that more closely explore exactly how this plays out in the human mind. And just as A.I. inspired this new insight into the brain, Hoel says that whatever is learned about the hypothesis can be reintroduced into A.I. learning.
“Now that we have an explicit and kind of formal proposal as a hypothesis, we can... kind of go the other way and use information that we have about dreaming to craft techniques to avoid overfitting in artificial neural networks,” says Hoel. “That will be very exciting.”