Our brains reveal our choices before we’re even aware of them, study finds


Researchers were able to see evidence of a choice being made before the participant had consciously decided on it.

A new UNSW study suggests we have less control over our personal choices than we think, and that unconscious brain activity determines our choices well before we are aware of them.

Published in Scientific Reports today, an experiment carried out in the Future Minds Lab at UNSW School of Psychology showed that free choices about what to think can be predicted from patterns of brain activity 11 seconds before people consciously chose what to think about.

The experiment consisted of asking people to freely choose between two patterns of red and green stripes – one of them running horizontally, the other vertically – before consciously imagining them while being observed in a functional magnetic resonance imaging (fMRI) machine.

The participants were also asked to rate how strong they felt their visualisations of the patterns were after choosing them, again while researchers recorded their brain activity during the process.

Not only could the researchers predict which pattern the participants would choose, they could also predict how strongly the participants would rate their visualisations. With the assistance of machine learning, the researchers were successful at making above-chance predictions of the participants’ volitional choices an average of 11 seconds before the thoughts became conscious.
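The paper’s analysis code isn’t reproduced here, but the general shape of this kind of fMRI ‘decoding’ is easy to sketch: treat each trial’s pre-choice voxel pattern as a feature vector and ask a cross-validated classifier whether it predicts the eventual choice better than the 50 percent chance level. The snippet below is a minimal sketch using synthetic data and scikit-learn’s logistic regression; the data shapes and the classifier are assumptions for illustration, not the study’s pipeline.

```python
# Illustrative sketch only: decoding a binary choice (which grating a
# participant will imagine) from voxel patterns recorded seconds earlier.
# Synthetic data and the scikit-learn classifier are assumptions, not the
# published analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 100, 500                 # hypothetical experiment size
y = rng.integers(0, 2, size=n_trials)         # 0 = horizontal, 1 = vertical grating
X = rng.normal(size=(n_trials, n_voxels))     # voxel activity ~11 s before the choice
X[y == 1, :20] += 0.5                         # weak, distributed "signal" in a few voxels

# Cross-validated decoding accuracy; chance level is 50 percent for two options.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```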

The areas that revealed information about the future choices included executive areas of the brain – where our conscious decision-making is made – as well as visual and subcortical structures, suggesting an extended network of areas responsible for the birth of thoughts.

Lab director Professor Joel Pearson believes what could be happening in the brain is that we may have thoughts on ‘standby’ based on previous brain activity, which then influences the final decision without us being aware.

Participants were asked to choose one of these patterns and visualise it while an fMRI machine recorded their brain activity.

“We believe that when we are faced with the choice between two or more options of what to think about, non-conscious traces of the thoughts are there already, a bit like unconscious hallucinations,” Professor Pearson says.

“As the decision of what to think about is made, executive areas of the brain choose the thought-trace which is stronger. In other words, if any pre-existing brain activity matches one of your choices, then your brain will be more likely to pick that option as it gets boosted by the pre-existing brain activity.

“This would explain, for example, why thinking over and over about something leads to ever more thoughts about it, as it occurs in a positive feedback loop.”

Interestingly, the subjective strength of the future thoughts was also dependent on activity housed in the early visual cortex, an area in the brain that receives visual information from the outside world. The researchers say this suggests that the current state of activity in perceptual areas (which are believed to change randomly) has an influence on how strongly we think about things.

These results raise questions about our sense of volition for our own private and personal mental visual images. This study is the first to capture the origins and content of involuntary visual thoughts and how they might bias subsequent voluntary conscious imagery.

The insight gained from this experiment may also have implications for mental disorders involving intrusive thoughts that use mental imagery, such as PTSD, the authors say.

However, the researchers caution against assuming that all choices are by nature predetermined by pre-existing brain activity.

“Our results cannot guarantee that all choices are preceded by involuntary images, but it shows that this mechanism exists, and it potentially biases our everyday choices,” Professor Pearson says.

U.S. Department of Defense Study Successfully Zaps Brains to Improve Memory


Most of us accept eventual memory impairment as an unfortunate fact of life, but recent advances in brain stimulation, funded by the U.S. Department of Defense, could be changing all that. On Tuesday, scientists conducting research within that program reported in the journal Nature Communications that they’d found a way to electrically stimulate the brain to significantly improve memory recall.


The goal of the program is to ultimately treat people with neurological disorders like Parkinson’s disease and epilepsy, which affect many combat veterans living with the long-term effects of head trauma and post-traumatic stress disorder. The new paper, published Tuesday, is a major step in achieving those goals. The team of researchers, led by Youssef Ezzyat, Ph.D. of the University of Pennsylvania, found that directly stimulating a part of the brain called the lateral temporal cortex could help improve patients’ memory recall by as much as 15 percent.

By stimulating the lateral temporal cortex, scientists found they could improve patients’ memory recall by 15 percent.

In the study, the researchers recorded the brain activity of 25 volunteers who were participating in a clinical trial intended to treat drug-resistant epilepsy, a seizure disorder that can affect a person’s memory. The patients read a list of 12 words and were instructed to remember them. All the while, scientists monitored the patients’ brain activity via electrodes on the cortical surface of the brain as well as embedded in the brain.

Feeding this data to a machine-learning algorithm showed them when a patient’s brain was most likely not encoding the memories properly. Later, when patients were performing a memory task, the electrodes hooked up to their lateral temporal cortex, a part of the brain associated with memory and language processing, gave the patients a little zap of electricity to stimulate the region whenever the system detected activity associated with encoding deficits.
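Conceptually, this is a monitor, classify, act loop. The sketch below illustrates that loop under assumed names (read_neural_features and deliver_stimulation are hypothetical stand-ins) with a scikit-learn classifier; the real system decodes intracranial recordings in real time under strict clinical constraints, so treat this purely as an outline of the idea.

```python
# Simplified illustration of closed-loop stimulation: record activity, classify
# whether the brain is encoding well, and stimulate only when it is not.
# Function names and the classifier are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_encoding_classifier(features, recalled):
    """Fit a classifier that predicts whether a studied word will be recalled."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(features, recalled)
    return clf

def closed_loop_session(clf, read_neural_features, deliver_stimulation,
                        n_words=12, threshold=0.5):
    for _ in range(n_words):
        x = read_neural_features()                    # features during encoding
        p_recall = clf.predict_proba(x.reshape(1, -1))[0, 1]
        if p_recall < threshold:                      # biomarker of poor encoding
            deliver_stimulation()                     # brief pulse to lateral temporal cortex

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clf = train_encoding_classifier(rng.normal(size=(200, 8)),
                                    rng.integers(0, 2, size=200))
    closed_loop_session(clf,
                        read_neural_features=lambda: rng.normal(size=8),
                        deliver_stimulation=lambda: print("stimulate"))
```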

The study’s authors report that this monitoring and responding process, known as closed-loop stimulation, improved patients’ memory recall by 15 percent. Closed-loop stimulation differs from open-loop stimulation in that it only fires when a patient is showing the biomarkers that indicate encoding problems. The new study rests on previous findings by Ezzyat’s team, which demonstrated the potential of closed-loop brain stimulation for improving memory encoding; the latest research builds on that intervention by identifying an anatomical target for stimulation.

Since, as Inverse previously reported, the National Institute of Mental Health isn’t very interested in funding this kind of research, it looks like this project and others funded by the D.O.D. could be doctors’ best bet at perfecting treatments that involve direct brain stimulation.

Abstract: Memory failures are frustrating and often the result of ineffective encoding. One approach to improving memory outcomes is through direct modulation of brain activity with electrical stimulation. Previous efforts, however, have reported inconsistent effects when using open-loop stimulation and often target the hippocampus and medial temporal lobes. Here we use a closed-loop system to monitor and decode neural activity from direct brain recordings in humans. We apply targeted stimulation to lateral temporal cortex and report that this stimulation rescues periods of poor memory encoding. This system also improves later recall, revealing that the lateral temporal cortex is a reliable target for memory enhancement. Taken together, our results suggest that such systems may provide a therapeutic approach for treating memory dysfunction.

In Birds’ Songs, Brains and Genes, He Finds Clues to Speech


The neuroscientist Erich Jarvis discovered that songbirds’ vocal skills and humans’ spoken language are both rooted in neural pathways for controlling learned movements.

Songbirds are star subjects in the research of Erich Jarvis, a professor at the Rockefeller University who leads its Laboratory of Neurogenetics and Language.


 

When Erich Jarvis, a neuroscientist at the Rockefeller University in New York, won the Ernest Everett Just Award from the American Society for Cell Biology in 2015, he wrote an essay describing the path that led him there, “Surviving as an Underrepresented Minority Scientist in a Majority Environment.” “I believe the evidence will show that the science we conduct and discoveries we make are influenced by our cultural experience,” it begins.

Jarvis grew up in Harlem and started out as a dancer, studying ballet at the High School of Performing Arts and winning several scholarships to pursue that training further. But as an undergraduate at Hunter College, he decided to shift his focus to biology, drawing inspiration from his parents: his mother, who always encouraged him to work for the good of society, and his father, a musician with a passion for science. (While Jarvis was in graduate school, his father, who had struggled with mental illness and homelessness for years, was shot and killed by a teen gang.)

After completing his doctorate at Rockefeller, Jarvis went on to open his own lab at Duke University before returning to New York and Rockefeller in 2016. He has dedicated the past two decades to understanding the neural and genetic mechanisms that allow some birds to imitate novel sounds and produce complex, varied vocalizations (called vocal learning). He’s using those songbirds to illuminate how our capacity for language may have evolved, as well as to provide insights into human speech disorders.

His explorations have involved improving genome assemblies, hunting for parallels between brain structures in different animal groups, and genetically manipulating birds and mice to sing better. Jarvis spearheaded a total rewrite of avian brain nomenclature, which has allowed scientists to better unravel the relationships between the brains of birds and other vertebrates. He has led efforts to sequence dozens of bird genomes — something he’s now helping to do for thousands of vertebrate species as a co-chair of the Genome 10K Project — in order to construct accurate evolutionary histories. This work led to one of his major theories: that vocal learning arose independently in songbirds, humans and a handful of other animals when a far more ancient motor-learning pathway in the brain was duplicated.

Quanta Magazine recently spoke with Jarvis about the evolution of vocal learning and language, the influences his parents and his background in dance have had on his academic endeavors, and the need for greater diversity in science. A condensed and edited version of that conversation follows.

What is the difference between vocal learning and language?

Vocal learning is the ability to imitate and learn sounds you hear that you weren’t born producing. Spoken language involves a combination of traits, including vocal learning — which many of us consider the most unique and specialized component.

You can teach dogs to understand the meaning of words like “sit” or “run.” That occurs through auditory learning, the ability to form sound associations with things that you hear. But dogs can’t actually say the sound “sit.” That would be vocal learning.

Zebra finches and other songbirds are among the few creatures other than humans that demonstrate vocal learning — the ability to hear and repeat novel sounds that they were not born knowing how to produce. Jarvis has found the neural and genetic underpinnings of this ability, and how they relate to humans’ language talents.



Very few groups of species have vocal learning. Songbirds, parrots and hummingbirds are the only ones amongst birds. There are roughly 40 or so lineages of birds: Those three have it, and the others appear not to. Amongst mammals, besides humans, there are dolphins, whales, bats, elephants and seals.

How did vocal learning evolve, then?

All the species I just mentioned have close relatives that don’t imitate sounds. Like chimpanzees, in our case, or suboscine birds for songbirds. So the vocal-learning groups of species all likely evolved this ability independently. But when we look at the brain pathways they use for vocal learning, they’re similar. They’re embedded within the pathway that controls learning how to move for other muscle groups. So how could that happen if they evolved independently? We propose it’s due to a duplication of that motor-learning pathway during embryonic development. The vocal-learning pathways in humans and these bird groups came from a preexisting structure that has similar connectivity and similar functions — it’s just that instead of controlling muscles for the hands or eyes or feet, this pathway controls muscles that produce sound. Because it had similar ancestry, even though it evolved independently, it inherited similar traits from the surrounding motor areas.

Does that mean there are constraints on how vocal learning — and ultimately, language — can develop, then?

Exactly. A good analogy is the evolution of wings. Wings evolved at least three times in vertebrates: in bats; in birds; in pterosaurs, which are ancient flying reptiles. Each time, the wings evolved on the upper limbs of the body. One constraint is center of gravity. In other words, the wings evolved there because that’s where the least amount of energy is needed to fly. Another constraint is the preexisting substrate of the arms.

For the speech or spoken language pathway, the preexisting substrate is the motor-learning pathway, which was used to control other body parts, not the voice. In fact, we recently found that parrots have two song systems, just as humans have two vocal-learning pathways: In parrots, one pathway is within another, and in humans, the two are near each other. It looks like in the parrot brain there’s been a duplication outside of a duplication.

So why didn’t this happen in more animals?

There are many hypotheses out there. I think something is selecting against vocal learning: predators. We propose that sexual selection is selecting for vocal learning: More varied and diverse kinds of sounds attract the opposite sex. Sexual selection is pretty strong. But if that’s true, why isn’t it more common? We think that predators are selecting against it: Vertebrates’ auditory pathways tune in to changes in sound. And species with vocal learning have a higher ability to keep changing their sounds. So an animal that keeps changing his song is more likely to get a predator’s attention and more likely to get eaten.

But is it unexpected that these duplication events didn’t arise even in related species?

Right, why isn’t it more common? Or, to think about it a bit differently: Can we find some rudimentary circuits? We actually think we are finding some in other species. There’s one suboscine species that we’ve looked at, which looks like it has one of these brain regions partially formed. And in mice, we’re seeing a rudimentary circuit that was thought to exist only in humans among mammals.


Video: Neuroscientist Erich Jarvis discusses how the brain circuitry for vocal learning in songbirds and humans evolved from systems for controlling body movements and why so few species have this ability.

This has led us to what we call the “continuum hypothesis of vocal learning.” First, you have the brain stem areas that control innate sounds. Then you get a forebrain vocal-pathway duplication from the motor-learning pathway — but a rudimentary circuit, like what we see in mice. After that, this new duplicated pathway specializes and moves outside of the motor-learning pathway to become its own speechlike pathway, like what we see in songbirds. And finally, it gets duplicated again, to form two or more vocal-learning pathways near each other or surrounding each other, like what we see in parrots and humans. That’s one possibility of what gets us to the ability to produce language through the voice.

What are the implications of finding a rudimentary circuit in a mouse? Does that mean that mice could be on their way to attaining vocal-learning abilities?

Yes, I think it’s possible they could go along to a higher level in the continuum and evolve vocal learning — in the absence of predators. But not necessarily. We’re trying to test that now by genetically manipulating them to go in that direction. Male mice use specific sequences of sounds, mostly in the ultrasonic range, for courtship. So they’re already showing some interesting vocal communication behavior, which has complexity to it. But they can’t manipulate changes to those sounds as well as we can.

We found genetic differences in the speech areas of humans compared to nonhuman primates, and of songbirds compared to nonvocal-learning birds, that we don’t see in the mouse brain. What we’re trying to do is take those specific genes that have these differences in the human brain, put them in the mouse brain’s rudimentary circuit, and see if we can push it to become more like a human’s. We’d like to see if we can train them to change their sound, their pitch, the sequences of vocalizations they produce for food reward. Ideally, we could take one strain of mice that sings one kind of vocalization, and another strain that produces a different kind, and then genetically manipulate one of those groups to see if it could copy the other.

What controls a species’ motivation to imitate sounds?

We proposed a hypothesis, but it needs more fleshing out. Children don’t normally learn spoken language from somebody teaching them how to say something. Rather, they learn how to do it by listening to other people, copying those other people and being rewarded. There’s this social interaction involved. Your parents pick you up and give you a big hug because you said “Daddy” or completed a sentence.

The neural wiring and genetics of various species led Jarvis and his colleagues to propose a “continuum hypothesis” for the origin of vocal learning.


We think this positive feedback is involved in part of the mechanism of vocal learning.

If you raise vocal-learning birds that hear tape-recorded sounds of their own species, but not live animals, they won’t grow up imitating those sounds. They’ll produce something aberrant. But if you give them a living bird — of another species, even — they will imitate that other species. They’d rather imitate a live bird of another species than taped sounds of their own species. Social interaction, and the feeling of getting a reward from another, is determining what you imitate.

That’s why humans aren’t imitating all the songbirds out there. One of the Ph.D. students I’m co-advising, Constantina Theofanopoulou, and I recently published a paper arguing that oxytocin, the bonding hormone, could be controlling the social mechanism of vocal learning. When a child says “Daddy” and is rewarded with a pat on the back or a smile, that gives the child a rewarding feeling. That feeling may release oxytocin into the vocal-learning circuits, to strengthen the memory in the vocal-learning pathway of how to say that sound.

What implications does your research on birds have for human spoken language?

We’ve been studying a gene called FOXP2 that, when mutated in humans, causes a speech deficit. People with the mutation have good auditory learning; they can understand speech relatively well, both cognitively and auditorily. But they have a harder time producing the sounds. And when that gene is knocked out in the songbird brain, they also have difficulty imitating sounds. So we initially thought this was a convergent function of the gene.

But we recently found that when the exact human mutation of the gene is put into the mouse genome, the mice can still produce their vocalizations, they just can’t switch to more complex, innate sequences that females prefer. That makes us think this gene was already there and being used for sequencing of vocalizations, even before advanced vocal learning evolved. Humans and songbirds just depend more on that gene than mice do. It also means that we can use mice and songbirds, along the continuum, as models to study genes involved in spoken language disorders.

But before we get there, we first have to continue studying what the parallel brain regions in songbirds and humans are, what the homologous and convergent cell types are.

So the path you took to get to this point didn’t start with science, but with dance. How did that influence your scientific pursuits?

I started out as a dancer in high school. Most of my family, particularly on my mother’s side, were into the performing arts. I went to the High School of Performing Arts here in New York City, where I majored in ballet. I also eventually did some jazz and African dance. And then, when I was graduating high school, I took the advice my mother gave me all the time growing up: “Do something that has a positive impact on society.” I felt I could do that better as a scientist than as a dancer, and I did like science a lot. But I learned that being trained to become a dancer also trained me to become a scientist.

To be a scientist, you need to have a lot of discipline, which I learned from practicing dance so much; and you need to be creative, something I was prepared for by choreographing dance. You need to accept a lot of failure before you have success. Many experiments don’t work the first, second or ninth time around. It’s the same in dance. And neither is a 9-to-5 job. It’s something that you have to become passionate about.

Are you still dancing?

Yes. I thought I would stop one day, but it hasn’t happened [laughs]. After I got into college, I danced African dance for a number of years, including when I went down to Duke to become a professor there. Then, about six years ago, I switched to doing a lot of salsa dancing, and performed with the Cobo Brothers dance team until I came to New York.

You mentioned your mother’s influence on your decision to become a scientist. What about your father?

So they both actually went to high school in music and arts, to become singers. But my father had a passion for science, and when he went to college he majored in chemistry, until he dropped out after having us four kids, and kind of tuned out of society as well. My parents eventually got divorced. But although I didn’t grow up with him past the age of six, I would see him a lot. And he influenced my thinking: What was he trying to achieve as a scientist? He was trying to understand how the universe works, how civilization began. And in some ways, I felt like I was actually taking up the reins where he left off. His passion did that.

Opportunities for him, as an African-American, were more difficult than they were for me. He felt he was mistreated by some of his teachers, for example. He really studied hard, and skipped two grades in elementary and junior high school. A story he told me before he passed was that he was taken out of a mostly underrepresented-minority school in the Bronx and was put in a mostly white school, and he felt that the teachers resented him and made it harder for him on his exams and so on. That kind of experience has influenced me, how I think about things going forward as a person of color. I’ve felt some of that isolation: Even [as an undergraduate], when I went to international conferences, most of the people were white. And when I went to Duke, even though there’s an even higher population of people of African-American descent in the South, it was quite the opposite on campus. And I started to see what people are thinking. They wouldn’t realize they were saying racially charged things. Like when I was interviewing for graduate school, somebody told me, “Don’t go to that part of New Haven, because blacks and Puerto Ricans are living there, it’s a pretty dangerous neighborhood.” And I’m thinking, “Did I just hear that?” Or I’d be the first African-American to get some award, and someone would [imply that] it was given to me because of the color of my skin.

Like my mother says: “You can’t be as good, you have to be better.” It’s common to hear that as someone from an underrepresented-minority background.

And now that you’re in a teaching position, you’ve been working to develop a program for underrepresented minority students to come and do research at Rockefeller. How have your own experiences shaped your approach?

There are a lot of hardworking people out there with a lot of talent. But if they don’t have the third piece of that equation — opportunity — then their talent and hard work is not going to go far. Hunter College gave me the opportunity. We’ll be working on a program [at Rockefeller] to bring undergraduate students into a high-intensity research institution, into the laboratories of Nobel laureates, to provide them the opportunity as well.

We need to start “fixing the leaking pipeline,” supporting underrepresented minorities as postdocs and faculty to keep them in the sciences. A program was just started by the Howard Hughes Medical Institute to provide fellowships for eight years, called the Hanna H. Gray Fellows Program, named after one of their former board chairs. We have to fix how society thinks. We’re having workshops and classes to discuss things like unconscious bias, creating opportunities for the people affected by those biases to trust their own value. And I also think role models make a big difference.

Having people of diverse backgrounds helps science as well. When there are diverse backgrounds, you do have to manage cultural differences, but those cultural differences also bring different ways of thinking and lead to new ideas — good ideas — that you wouldn’t have thought of otherwise. Diversity generates a broader and more productive scientific operation.

The Computers Of The Future Will Think Like Brains


Virtually every device you use—from the one you’re using to read this to the pocket calculator gathering dust in the back of your desk—relies on the same basic technology: circuits containing many tiny transistors that communicate with each other using electrons. We’ve come a very long way since the room-sized computers of the 1950s, but as computing gets smaller, faster, and more complicated, we get closer to hitting a wall. There’s a physical limit to how powerful traditional computers can get. That’s why scientists are turning to completely new forms of technology for future computers.

The first in our three-part series on the future of computing involves one form you’re familiar with—it’s sitting right inside your skull.

If It Works Like A Brain And Thinks Like A Brain

 These days, we’re not satisfied letting our computers simply run programs and crunch numbers. For tasks like recognizing faces, identifying speech patterns, and reading handwriting, we need artificial intelligence: computers that can think. That’s why scientists figured out a way to build computers that work like brains, using neurons—artificial ones, anyway.

The big difference between an artificial neural network, as it’s called, and a conventional, or algorithmic, computer is the approach it uses to solve problems. An algorithmic computer solves problems based on an ordered set of instructions. The problem is, you have to know what the instructions are first so you can tell the computer what to do. The benefit of this approach is that the results are predictable, but there are definite drawbacks. An algorithmic computer can only do things one step at a time—even though with many components working simultaneously, that can happen surprisingly fast—and you can’t ask it a question you don’t know how to solve.

That’s where neural networks come in. They process information kind of like a brain: a large number of interconnected “neurons” all work at the same time to solve a problem. Instead of following a set of instructions, they do things by following examples. That means that a neural network literally learns how to solve problems based on limited information. Of course, when you don’t know how to solve a problem, you also don’t know what the solution will be. Like your brain, neural networks sometimes arrive at the wrong solutions. That’s the one drawback to neural computing: it’s unpredictable.
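As a toy illustration of the difference between following instructions and learning from examples, the snippet below solves the XOR problem both ways: once with a hand-written rule, and once by training a small network on the four examples. The XOR task and the use of scikit-learn’s MLPClassifier are assumptions made for illustration, not anything described in the article.

```python
# Toy contrast between the two approaches described above. The XOR task and
# scikit-learn's MLPClassifier are illustrative assumptions.
from sklearn.neural_network import MLPClassifier

# Algorithmic approach: we must already know the rule and write it down.
def xor_by_rule(a, b):
    return int(a != b)

# Neural-network approach: give the system examples and let it learn a rule.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]
net = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    solver="lbfgs", max_iter=2000, random_state=0)
net.fit(X, y)

print([xor_by_rule(a, b) for a, b in X])   # [0, 1, 1, 0]
print(list(net.predict(X)))                # typically [0, 1, 1, 0] once trained
```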

Perfect Harmony

 This isn’t to say that artificial neural networks are better than conventional computers. Each system has its own applications. Need some equations solved? Algorithmic computer to the rescue. Need to quickly and accurately detect lung cancer? Neural networks can do that. They can even work together: algorithmic computers are often used to “supervise” neural networks.

Watch And Learn: Our Favorite Content About Neural Networks

Computers That Think Like Humans

Inside A Neural Network

Get deep into the nitty gritty of how a neural network operates.

Researchers are figuring out how our brains cope with so much data – ScienceAlert


The human brain is a wonderful thing. Consider the way it can recognise faces and objects despite a multitude of variations: we can always identify an “A” as an “A” for example, no matter what colour, size, or shape it comes in. And now researchers have come up with an algorithm that could show just how clever the brain’s way of working is, and how we’re able to process so much data all at once.

A team from Georgia Tech has discovered that a human brain can categorise data using just 1 percent or less of the original information. “We hypothesised that random projection could be one way humans learn,” said one of the team, Rosa Arriaga. “The short story is, the prediction was right. Just 0.15 percent of the total data is enough for humans.”

As part of the experiment, test subjects were asked to view several original, abstract images, and were then challenged to identify the same images when shown a small portion of each one.

The researchers then came up with a computational algorithm based on the idea of random projection. The random projection technique compresses information in a certain way, sacrificing accuracy for speed of processing. Using the technique, the AI was able to complete the tests just as well as human participants.
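Random projection itself is simple to write down: multiply the data by a random matrix to squash it into far fewer dimensions, then work in the compressed space. The sketch below uses NumPy and scikit-learn on synthetic ‘images’ rather than the Georgia Tech materials, which are not reproduced here; it projects 10,000-dimensional vectors down to 15 dimensions (0.15 percent of the original) and checks that the two classes remain distinguishable.

```python
# Minimal sketch of random projection on synthetic data (not the study's code):
# project 10,000-dimensional vectors down to 15 dimensions (0.15 percent of the
# original) and check that a simple classifier still separates the classes.
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, n_dims = 200, 10_000

# Two synthetic "image" classes: noisy variations around two prototypes.
prototypes = rng.normal(size=(2, n_dims))
X = np.vstack([p + 0.5 * rng.normal(size=(n_per_class, n_dims)) for p in prototypes])
y = np.repeat([0, 1], n_per_class)

proj = GaussianRandomProjection(n_components=15, random_state=0)  # 15 / 10,000 = 0.15 percent
X_small = proj.fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_small, y, random_state=0)
accuracy = KNeighborsClassifier().fit(X_tr, y_tr).score(X_te, y_te)
print(f"accuracy after 0.15 percent random projection: {accuracy:.2f}")
```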

This shows that the human brain network and artificial neural networks are in fact very similar in their behaviour, the team says, adding that both human and machine found the same types of data difficult to process.

“We were surprised by how close the performance was between extremely simple neural networks and humans,” said one of the researchers, Santosh Vempala. “The design of neural networks was inspired by how we think humans learn, but it’s a weak inspiration. To find that it matches human performance is quite a surprise.”

While the study’s results aren’t enough to prove that the brain naturally uses a random projection as a way to process information, the findings are enough to indicate that it’s a “plausible explanation” for what’s happening inside our minds.

Learning based on random projection already plays a role in computers involved in the processing of large amounts of data, and the new research could lead to further developments in the same area.

“How do we make sense of so much data around us, of so many different types, so quickly and robustly?” says Vempala. “At a fundamental level, how do humans begin to do that? It’s a computational problem.”

Our brains can make decisions while we’re sleeping


Your brain doesn’t shut down when you go to sleep. In fact, a recent study has shown that it remains quietly active, and can process information to help you make decisions, just like when you’re awake.



A new study led by senior research scientist Sid Kouider and PhD student Thomas Andrillon at the Ecole Normale Supérieure de Paris in France has investigated how active our brains are when we’re asleep, and the results could have implications for the Holy Grail of humanity’s quest to become ever-smarter – learning in our sleep.

Previous studies have shown that rather than switching off from our environment when we sleep, our brains ‘keep one eye open’, so they can catch important information that’s relevant to us. This means we’re more likely to wake up when we hear someone say our names, or when our alarms go off in the morning, than to the less-pressing sounds of an alley cat scratching around the bins outside, cars driving past, or the periodic chime of a cuckoo clock.

Kouider and Andrillon wanted to take this finding a step further and found that complex stimuli from our environment can not only be processed by our brains when we sleep, but can actually be used to make decisions. It’s just like what’s going on in your brain when you’re driving your car home every day – you have to process so much information all at once and very quickly in order to safely operate your vehicle, but you’re so used to it, you barely even notice it happening. The same concept appears to apply to our decision-making processes when we’re asleep.

Of course, the parts in our brain associated with paying attention to and following instructions are shut down when we sleep, so we can’t start performing a new task, but what Kouider and Andrillon wanted to find out is if a task was implemented right before sleep, would the brain continue working on it even after the participant dozed off?

They explained their experiment at The Conversation:

“To do this, we carried out experiments in which we got participants to categorise spoken words that were separated into two categories: words that referred to animals or objects, for example “cat” or “hat” in a first experiment; then real words like “hammer” versus pseudo-words (words that can be pronounced but are found nowhere in the dictionary) like “fabu” in a second one.

Participants were asked to indicate the category of the word that they heard by pressing a left or right button. Once the task became more automatic, we asked them to continue to respond to the words but they were also allowed to fall asleep. Since they were lying down in a dark room, most of them fell asleep while words were being played.

At the same time we monitored their state of vigilance thanks to EEG electrodes placed on their head. Once they were asleep, and without disturbing the flow of words they were hearing, we gave our participants new items from the same categories. The idea here was to force them to extract the meaning of the word (in the first experiment) or to check whether a word was part of the lexicon (in the second experiment) in order to be able to respond.”

Once they dozed off, the participants stopped pressing the buttons. But when the researchers looked at their brain activity, they found that the participants were still planning to press a button, and had a preference for either the right or left side, depending on the words that were being played to them. This means that even when they were sleeping, the participants’ brains continued to prepare a response for when they were to resume the task the next day.
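One way to picture that analysis is as reading out which hand is being prepared from the left-right balance of activity over motor cortex. The snippet below is a loose sketch on synthetic signals, with hypothetical channel names and a simple amplitude-difference rule; it is not the authors’ EEG analysis.

```python
# Loose illustration of inferring a prepared left/right button press from
# lateralised motor-cortex activity. Synthetic data and the simple
# difference-based rule are assumptions, not the study's method.
import numpy as np

rng = np.random.default_rng(0)

def synthetic_trial(prepared_side, n_samples=500):
    """Simulate two motor-cortex channels; the hemisphere opposite the
    prepared hand shows a slightly stronger response."""
    c3 = rng.normal(size=n_samples)   # left motor cortex (controls right hand)
    c4 = rng.normal(size=n_samples)   # right motor cortex (controls left hand)
    if prepared_side == "right":
        c3 += 0.3
    else:
        c4 += 0.3
    return c3, c4

def decode_side(c3, c4):
    return "right" if c3.mean() > c4.mean() else "left"

trials = ["left", "right"] * 50
correct = sum(decode_side(*synthetic_trial(side)) == side for side in trials)
print(f"decoded {correct}/{len(trials)} trials correctly")
```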

When the participants woke up the next day, they didn’t remember anything about the words they responded to in their sleep, which means “not only did they process complex information while being completely asleep, but they did it unconsciously”, say Kouider and Andrillon at The Conversation.

So what does this mean for all of us who dream about learning new things even as we sleep? It’s known that sleep consolidates previously learned information, but introducing new information to us when we’re sleeping is a whole other story. And what sacrifices would the brain have to make in order to achieve this? Would our dreams start interfering with our learning? For a phenomenon that’s crucial to the existence of every animal on the planet, we’ve still got a lot to learn about the science of slumber.