summary: A new study reveals the neural mechanisms underlying how we process and remember everyday events.
source: WUSTL
A new study by researchers at Washington University in St. Louis provides new insight into how the brain works to process and remember everyday events.
Zachariah Reagh, professor of psychological and brain sciences at Washington University in St. Louis, and co-author Charan Ranganath of the University of California, Davis, used functional MRI to monitor the brains of people watching short video clips of scenes that could come from everyday life, such as men and women working on laptops in a coffee shop or shopping at a grocery store.
“These were very ordinary scenes,” said Reagh. “No car chases or anything.”
Immediately afterward, the study participants described each scene in as much detail as possible. These ordinary snippets yielded striking results, with distinct parts of the brain working together to understand and remember a situation.
Networks in the anterior temporal lobe, a brain region long known to play an important role in memory, focused on the people in the scenes regardless of their surroundings. The posterior medial network, which includes the parietal lobes toward the back of the brain, was more attuned to the context. Each network then sends its information to the hippocampus, Reagh explains, which combines the signals to create a coherent scene.
Previously, researchers had used very simple objects and scenarios – such as a drawing of an apple on a beach – to study the different building blocks of memory, Reagh says. But life is not that simple. “I was wondering if anyone had ever done this kind of study with dynamic, real-world situations, and surprisingly, the answer was no.”
The new study shows that the brain creates mental representations of people that can be moved from place to place, much as an animator can copy and paste a character into different scenes. “It may not seem intuitive that your brain can copy and paste your family members from place to place, but it is very efficient,” he says.
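The “copy and paste” idea can be illustrated with a toy simulation. In the sketch below (all names, sizes, and weights are illustrative assumptions, not the authors' code or data), each simulated brain pattern is the sum of a person signal and a context signal. In a person-dominant region, patterns for the same person correlate strongly even when the context changes:

```python
# Toy simulation of a person-dominant region: the same person's pattern
# is "copy-pasted" across contexts, so same-person pairs correlate more
# strongly than different-person pairs. All values are simulated.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 300

# Hypothetical person and context signals (illustrative, not real data).
people = {p: rng.standard_normal(n_voxels) for p in ("alice", "bob")}
contexts = {c: rng.standard_normal(n_voxels) for c in ("cafe", "store")}

def pattern(person, context, person_weight=1.0, context_weight=0.3):
    """Simulated voxel pattern: person-dominant mixture plus noise."""
    return (person_weight * people[person]
            + context_weight * contexts[context]
            + 0.2 * rng.standard_normal(n_voxels))

def corr(a, b):
    """Pearson correlation between two voxel patterns."""
    return float(np.corrcoef(a, b)[0, 1])

# Same person across different contexts vs. different people.
same_person = corr(pattern("alice", "cafe"), pattern("alice", "store"))
diff_person = corr(pattern("alice", "cafe"), pattern("bob", "store"))
print(same_person > diff_person)  # person identity survives a context change
```

A context-dominant region would behave the opposite way, which is the intuition behind the anterior temporal versus posterior medial division described above.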
Some people could remember the coffee-shop and grocery-store scenes more completely and accurately than others. Reagh and Ranganath found that those with the clearest memories reinstated the same neural patterns when recalling a scene as when watching the clip.
“The more you can bring these patterns back online when describing an event, the better your overall memory will be,” he says.
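The reinstatement measure described here can be sketched as a simple correlation between a pattern recorded during viewing and one recorded during recall. The following is a minimal, simulated illustration of that idea (not the study's actual analysis pipeline, and all numbers are made up):

```python
# Toy sketch of encoding-recall pattern reinstatement: correlate a voxel
# pattern measured while watching a clip with the pattern measured while
# recalling it. Stronger correlation = stronger reinstatement.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200

# Simulated "encoding" pattern, plus two simulated "recall" patterns:
# one that reinstates the encoding pattern with noise, one unrelated.
encoding = rng.standard_normal(n_voxels)
recall_good = encoding + 0.5 * rng.standard_normal(n_voxels)  # reinstated
recall_poor = rng.standard_normal(n_voxels)                   # unrelated

def reinstatement(enc, rec):
    """Pearson correlation between encoding and recall patterns."""
    return float(np.corrcoef(enc, rec)[0, 1])

print(reinstatement(encoding, recall_good))  # high (strong reinstatement)
print(reinstatement(encoding, recall_poor))  # near zero (weak reinstatement)
```

In the study's framing, participants whose recall patterns looked more like this first case tended to have better memory for the clips.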
For now, it is not clear why some people appear more adept at reproducing the neural patterns needed to access a memory, Reagh said. But clearly, many things can get in the way. “A lot can go wrong when you try to retrieve a memory,” he said.
Even memories that seem clear and vivid may not truly reflect reality. “I tell my students that your memory is not a video camera. It doesn’t give you a perfect representation of what happened. Your brain is telling you a story,” he said.
Reagh is among the Washington University faculty members involved in the research group “Storytelling Lab: Bridging Science, Technology, and Creativity,” part of the Interdisciplinary Futures Incubator. Led by Jeff Zacks, chair of psychological and brain sciences, along with Ian Bogost and Colin Burnett, the Storytelling Lab explores storytelling through the lenses of psychology and neuroscience.
In future work, Reagh plans to study the brain activity and memory of people watching more complex stories.
“The Storytelling Lab fits perfectly with the scientific questions that I find most interesting,” says Reagh. “I want to understand how the brain creates and remembers narratives.”
About this memory research news
author: Chris Woolston
source: WUSTL
contact: Chris Woolston – WUSTL
picture: Image is in public domain
original research: open access.
“Flexible reuse of cortical and hippocampal representations during the encoding and recall of naturalistic events” by Zachariah M. Reagh et al. Nature Communications
summary
Flexible reuse of cortical and hippocampal representations during the encoding and recall of naturalistic events
Although each event in our lives is unique, events also share many similarities. However, little is known about whether or how the brain flexibly represents information about different event components during encoding and memory retrieval.
Here, we show that different cortical and hippocampal networks systematically represent specific components of events depicted in videos, both during online experience and during episodic memory retrieval.
Anterior temporal network regions represent information about people, generalizing across contexts, whereas posterior medial network regions represent information about contexts, generalizing across people.
The medial prefrontal cortex generalizes across videos depicting the same event schema, while the hippocampus maintains event-specific representations. Similar effects were seen during encoding and retrieval, suggesting the flexible reuse of event components across overlapping episodic memories.
Together, these representational profiles provide a computationally optimal strategy for scaffolding memory for different high-level event components, enabling the efficient reuse of event information in comprehension, recollection, and imagination.