By now, you’ve probably mastered the fine art of Peekaboo: When something moves out of sight, you can at least sort of remember what it looked like.
Psychologists have had a pretty good idea of how this works for some time: your brain holds information for a short period, as long as you stay focused on it. This is called working memory, and it's the reason you can still picture things in your head shortly after you've seen them. In research published Thursday in Nature Communications, researchers have unlocked another piece of the neural puzzle behind this phenomenon. And while this research is still preliminary and far from human application, it may someday help people with schizophrenia prevent or manage their hallucinations.
Using a deep learning algorithm, the team from McGill University and Massachusetts Institute of Technology tracked down and isolated the specific bundle of neurons that helps people store this visual information.
By probing the brains of two macaques (monkeys frequently used in neuroscience research because their brains are so similar to ours), scientists traced two abilities to a bundle of neurons housed within the lateral prefrontal cortex: recognizing which direction an object on a screen was moving, and later recalling that same information. The prefrontal cortex is responsible for some of the most complicated human thoughts and behaviors, including the decision to physically do something after thinking about it.
Researchers previously had conflicting ideas about how these neurons worked, and the new finding suggests the answer is a happy medium. Some scientists had argued that the neurons responsible for recognizing and analyzing a picture were separate from those that held it in working memory. Others argued that the neurons were one and the same.
When the team used A.I. to track which neurons were working on which tasks, it became clear that both camps were partly correct. Some neurons handle only one task or the other, but others handle both. In fact, the split among neurons that recognize, those that remember, and those that do both is roughly even. What the researchers don't yet know is whether the information is actually encoded in these neurons or whether they act as a sort of projector screen for yet-undiscovered cells that actually hold onto these memories.
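The study's actual analysis trained a machine-learning decoder on recorded neural activity to read out both the motion direction and whether it was seen or remembered. As a rough illustration of that population-decoding idea only, here is a toy NumPy sketch: the simulated neurons, their cosine tuning, the perception/memory gains, and the nearest-template classifier are all hypothetical simplifications, not the paper's data or method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy population: 60 simulated neurons, each with a preferred
# motion direction and separate response gains for "perceived" (stimulus
# on screen) vs. "memorized" (stimulus gone) conditions. Neurons whose two
# gains differ strongly behave like the condition-selective cells described
# above; neurons with similar gains encode both conditions alike.
n_neurons, n_dirs = 60, 8
directions = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
preferred = rng.uniform(0, 2 * np.pi, n_neurons)
w_perc = rng.uniform(0, 1, n_neurons)  # gain while the stimulus is visible
w_mem = rng.uniform(0, 1, n_neurons)   # gain while it is held in memory

def population_response(direction, condition, noise=0.05):
    """Simulated firing rates: cosine tuning scaled by a condition gain."""
    gain = w_perc if condition == "perceived" else w_mem
    tuning = np.cos(direction - preferred)
    return gain * tuning + rng.normal(0, noise, n_neurons)

# "Training" templates: mean population response per (direction, condition).
templates = {
    (d, c): np.mean([population_response(d, c) for _ in range(20)], axis=0)
    for d in directions
    for c in ("perceived", "memorized")
}

def decode(activity):
    """Nearest-template decoder: returns the (direction, condition) pair
    whose template is closest to the observed population activity."""
    return min(templates, key=lambda k: np.linalg.norm(activity - templates[k]))

# Decode a held-out trial: a memorized stimulus that moved at 90 degrees.
trial = population_response(directions[2], "memorized")
dir_hat, cond_hat = decode(trial)
print(dir_hat, cond_hat)
```

A nearest-template classifier stands in here for the deep-learning decoder used in the study; the point it shares with the real analysis is that direction and perceived-vs-memorized status can both be read out from the combined activity of a mixed population, without any single neuron carrying either signal alone.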
This research is still far removed from human relevance, but the cluster of neurons within the lateral prefrontal cortex seems to map onto the same region that's impaired in people who have schizophrenia, particularly those who experience hallucinations. One of the major problems with current medications for these symptoms is that they affect the entire brain, producing widespread side effects that may lead people to stop taking their medicine. The researchers behind this new macaque study speculate that a therapy designed to specifically target the small region of the lateral prefrontal cortex involved with these visual memories could yield a more localized treatment for schizophrenia with fewer side effects.
Abstract: The primate lateral prefrontal cortex (LPFC) encodes visual stimulus features while they are perceived and while they are maintained in working memory. However, it remains unclear whether perceived and memorized features are encoded by the same or different neurons and population activity patterns. Here we record LPFC neuronal activity while monkeys perceive the motion direction of a stimulus that remains visually available, or memorize the direction if the stimulus disappears. We find neurons with a wide variety of combinations of coding strength for perceived and memorized directions: some neurons encode both to similar degrees while others preferentially or exclusively encode either one. Reading out the combined activity of all neurons, a machine-learning algorithm reliably decodes the motion direction and determines whether it is perceived or memorized. Our results indicate that a functionally diverse population of LPFC neurons provides a substrate for discriminating between perceptual and mnemonic representations of visual features.