Unborn Baby Partially Removed from Womb for Heart Surgery, Placed Back, Born 10 Weeks Later


In a rare, delicate, and ultimately lifesaving surgery, an unborn baby was partially removed from his mother’s womb at 26 weeks so that a tumor growing on his heart could be removed. He was then placed back in his mom’s womb for a further 10 weeks before being born at nearly full term.

Doctors at Cleveland Clinic, Ohio, said in a press release that the baby is only the second person in the world to undergo this unique surgery and survive. Today, he is thriving.

Baby Rylan Harrison Drinnon was diagnosed in the spring of 2021 with intrapericardial teratoma with fetal hydrops in utero, an extremely rare condition leading to heart failure if left untreated, according to the statement.

Dr. Hani Najm, who led the heart surgery team, inserts an IV line in the fetus’s right arm to deliver fluids and medications as needed.

“As far as we know, Cleveland Clinic is the second academic medical center in the world to have performed this fetal surgery successfully with continued pregnancy and delivery,” said Dr. Darrell Cass, director of Cleveland Clinic’s Fetal Surgery and Fetal Care Center.

“In this case, time was of the essence. Shortly after the patient arrived at Cleveland Clinic, imaging tests showed that the tumor kept growing and the fetus’s heart function was deteriorating.”

The malignant mass was compressing the left side of unborn baby Rylan’s heart, cutting off circulation and leading to an accumulation of fluid around his heart and other organs.

Parents Sam and Dave Drinnon of Pittsburgh were referred to Cleveland Clinic for its expertise, said Cass. A multidisciplinary team from Cleveland Clinic and Cleveland Clinic Children’s performed surgery in May 2021 to remove Rylan’s tumor.

After making a “Caesarean section-like incision” to expose the mother’s uterus, the team, led by Dr. Hani Najm, Cleveland Clinic’s chair of pediatric and congenital heart surgery, used ultrasound to locate the placenta and fetus. They opened the uterus and lifted out Rylan’s arms to expose his chest. Najm removed the tumor from the baby’s beating heart before placing him back in the uterus, in a surgery lasting 3 1/2 hours.

“As soon as the tumor was removed, the compression of the left atrium disappeared, and there was a nice blood flow that was almost back to normal,” Najm reported.

Baby Rylan Harrison Drinnon.

Both mom and baby recovered well, and Rylan was able to remain in the womb until near full term, according to the statement.

Maternal-fetal medicine specialist Dr. Amanda Kalan, who attended the surgery, oversaw Sam’s aftercare and the delivery of her healthy baby boy by C-section on July 13, 10 weeks after the surgery.

Cass expressed pride in his team for their massive success.

“This tumor was growing rapidly in the exact wrong spot,” Cass explained. “We needed to act quickly and decisively to rescue the fetus … as far as we know, Cleveland Clinic is the second academic medical center in the world to have performed this fetal surgery successfully, with continued pregnancy and delivery.”

Only one previous case, said Cass, has ever been documented in the world’s medical literature.

Najm said that such innovative fetal surgery “provides hope to other families who may receive a similar devastating diagnosis.”

Looking to the future, Rylan will likely need surgery to reposition his sternum, which did not heal properly in the womb. Doctors will monitor his heart health as he grows to ensure the tumor does not reappear.

The Drinnons are beyond grateful for the lifesaving intervention.

“Now they have this beautiful boy, Rylan, and they think he’s going to be special,” Cass told Cleveland.com. “He’s going to grow up to be a completely normal kid that just had a really unique odyssey to get to where he is now.”

COVID-19 Vaccines May Be Enhancing Disease: Malone


COVID-19 vaccines may be causing enhanced disease because they target an old version of the coronavirus, Dr. Robert Malone says.

“The data are showing that vaccination can actually increase the risk of being infected with the Omicron version of this virus,” Malone told The Epoch Times in a recent interview.

Malone was referring to how in some areas, including Scotland and New Zealand, patients hospitalized with COVID-19 are more likely to have received a COVID-19 vaccine than not.

A recent study, meanwhile, found that one dose of a vaccine boosted protection for people who had recovered from COVID-19, but two or three doses seemed to lower protection; the authors said they weren’t sure why this was the case. Another study found higher protection among naturally immune people who weren’t vaccinated than among those who were.

Vaccine-associated enhanced diseases (VAED) were identified (pdf) as an “important potential risk” of the COVID-19 vaccines by U.S. drug regulators, as was a similar event known as enhanced respiratory disease following COVID-19 vaccination. Some adverse events recorded following COVID-19 vaccination “could indicate” VAED (pdf), according to a Centers for Disease Control and Prevention (CDC) team.

VAED refers to disease “resulting from infection in individuals primed with non-protective immune responses against the respective wild-type viruses,” researchers said last year as they set a case definition for the term. “Given that these enhanced responses are triggered by failed attempts to control the infecting virus, VAED typically presents with symptoms related to the target organ of the infection pathogen,” they added.

“That’s what the data has been showing now for a few months,” Malone, who helped invent the messenger RNA technology on which two of the three COVID-19 vaccines cleared for use in the United States are built, told The Epoch Times.

In a Pfizer document (pdf) released this month, the vaccine manufacturer said 138 potential cases of VAED, involving 317 relevant events, were reported from December 2020 to February 2021. Of the 138 cases, 71 were medically significant, 16 required hospitalization, 13 were life-threatening, and there were 38 deaths.

The most frequently reported event out of the 317 potentially relevant events was drug ineffectiveness (135). Other events included COVID-19 pneumonia, diarrhea, respiratory failure, and seizure.

“VAED may present as severe or unusual clinical manifestations of COVID-19,” Pfizer concluded, adding that, “based on the current evidence, VAED/VAERD remains a theoretical risk for the vaccine” and that they will continue to monitor the syndrome.

Pfizer, Moderna, and Johnson & Johnson didn’t respond to requests for comment.

A CDC spokesperson said that the agency, along with the Food and Drug Administration (FDA), is monitoring vaccine safety through surveillance systems such as the Vaccine Adverse Event Reporting System and v-safe.

Monitoring to date “has not established a causal relationship between COVID-19 vaccination and vaccine-associated enhanced disease,” the spokesperson told The Epoch Times in an email.

The CDC says the vaccines are largely safe and effective but also encourages people who experience side effects after getting one of them to report the issues to one of the systems.

The FDA, meanwhile, has not at this time identified an association between enhanced respiratory disease and the three vaccines the agency has cleared, a spokesperson told The Epoch Times via email.

Peptides on Stardust May Have Provided a Shortcut to Life


The discovery that short peptides can form spontaneously on cosmic dust hints at more of a role for them in the earliest stages of life’s origin, on Earth or elsewhere.

An illustration of a polyglycine molecule among the constellations.
The spontaneous formation of peptide molecules on cosmic dust in interstellar clouds could have implications for theories about the origin of life. Kristina Armitage / Quanta Magazine

Yasemin Saplakoglu

Billions of years ago, some unknown location on the sterile, primordial Earth became a cauldron of complex organic molecules from which the first cells emerged. Origin-of-life researchers have proposed countless imaginative ideas about how that occurred and where the necessary raw ingredients came from. Some of the most difficult to account for are proteins, the critical backbones of cellular chemistry, because in nature today they are made exclusively by living cells. How did the first protein form without life to make it?

Scientists have mostly looked for clues on Earth. Yet a new discovery suggests that the answer could be found beyond the sky, inside dark interstellar clouds.

Last month in Nature Astronomy, a group of astrobiologists showed that peptides, the molecular subunits of proteins, can spontaneously form on the solid, frozen particles of cosmic dust drifting through the universe. Those peptides could in theory have traveled inside comets and meteorites to the young Earth — and to other worlds — to become some of the starting materials for life.

The simplicity and favorable thermodynamics of this new space-based mechanism for forming peptides make it a more promising alternative to the known purely chemical processes that could have occurred on a lifeless Earth, according to Serge Krasnokutski, the lead author on the new paper and a researcher at the Max Planck Institute for Astronomy and the Friedrich Schiller University in Germany. And that simplicity “suggests that proteins were among the first molecules involved in the evolutionary process leading to life,” he said.

Whether those peptides could have survived their arduous trek from space and contributed meaningfully to the origin of life is very much an open question. Paul Falkowski, a professor at the School of Environmental and Biological Sciences at Rutgers University, said that the chemistry demonstrated in the new paper is “very cool” but “doesn’t yet bridge the phenomenal gap between proto-prebiotic chemistry and the first evidence of life.” He added, “There’s a spark that’s still missing.”

Still, the finding by Krasnokutski and his colleagues shows that peptides might be a much more readily available resource throughout the universe than scientists believed, a possibility that could also have consequences for the prospects for life elsewhere.

Cosmic Dust in a Vacuum

Cells make the production of proteins look easy. They manufacture both peptides and proteins extravagantly, empowered by environments rich in useful molecules like amino acids and their own stockpiles of genetic instructions and catalytic enzymes (which are themselves typically proteins).

But before cells existed, there wasn’t an easy way to do it on Earth, Krasnokutski said. Without any of the enzymes that biochemistry provides, the production of peptides is an inefficient two-step process that involves first making amino acids and then removing water as the amino acids link up into chains in a process called polymerization. Both steps have a high energy barrier, so they occur only if large amounts of energy are available to help kick-start the reaction.
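
For a concrete picture of the water-removal step, here is the textbook condensation of two glycine molecules into the simplest dipeptide. It is a generic illustration of peptide-bond formation, not a reaction taken from the study discussed here:

$$
\mathrm{H_2N\text{-}CH_2\text{-}COOH} + \mathrm{H_2N\text{-}CH_2\text{-}COOH} \longrightarrow \mathrm{H_2N\text{-}CH_2\text{-}CO\text{-}NH\text{-}CH_2\text{-}COOH} + \mathrm{H_2O}
$$

Each new peptide bond releases one water molecule, which is part of why the linking-up step is unfavourable in water without an energy source or a catalyst.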

Because of these requirements, most theories about the origin of proteins have either centered on scenarios in extreme environments, such as near hydrothermal vents on the ocean floor, or assumed the presence of molecules like RNA with catalytic properties that could lower the energy barrier enough to push the reactions forward. (The most popular origin-of-life theory proposes that RNA preceded all other molecules, including proteins.) And even under those circumstances, Krasnokutski says that “special conditions” would be needed to concentrate the amino acids enough for polymerization. Though there have been many proposals, it isn’t clear how and where those conditions could have arisen on the primordial Earth.

But now researchers say they’ve found a shortcut to proteins — a simpler chemical pathway that reenergizes the theory that proteins were present very early in the genesis of life.

Last year in Low Temperature Physics, Krasnokutski predicted through a series of calculations that a more direct way to make peptides could exist under the conditions available in space, inside the extremely dense and frigid clouds of dust and gas that linger between the stars. These molecular clouds, the nurseries of new stars and solar systems, are packed with cosmic dust and chemicals, some of the most abundant of which are carbon monoxide, atomic carbon and ammonia.

In their new paper, Krasnokutski and his colleagues showed that these reactions in the gas clouds would likely lead to the condensation of carbon onto cosmic dust particles and the formation of small molecules called aminoketenes. These aminoketenes would spontaneously link up to form a very simple peptide called polyglycine. By skipping the formation of amino acids, reactions could proceed spontaneously, without needing energy from the environment.

To test their claim, the researchers experimentally simulated the conditions found in molecular clouds. Inside an ultrahigh vacuum chamber, they mimicked the icy surface of cosmic dust particles by depositing carbon monoxide and ammonia onto substrate plates chilled to minus 263 degrees Celsius. They then deposited carbon atoms on top of this ice layer to simulate their condensation inside molecular clouds. Chemical analyses confirmed that the vacuum simulation had indeed produced various forms of polyglycines, up to chains 10 or 11 subunits long.

The researchers hypothesized that billions of years ago, as cosmic dust stuck together and formed asteroids and comets, simple peptides on the dust could have hitchhiked to Earth in meteorites and other impactors. They might have done the same on countless other worlds, too.

The Gap From Peptides to Life

The delivery of peptides to Earth and other planets “certainly would provide a head start” to forming life, said Daniel Glavin, an astrobiologist at NASA’s Goddard Space Flight Center. But “I think there’s a large jump to go from interstellar ice dust chemistry to life on Earth.”

First the peptides would have to endure the perils of their journey through the universe, from radiation to water exposure inside asteroids, both of which can fragment the molecules. Then they’d have to survive the impact of hitting a planet. And even if they made it through all that, they would still have to go through a lot of chemical evolution to get large enough to fold into proteins that are useful for biological chemistry, Glavin said.

Is there evidence that this has happened? Astrobiologists have discovered many small molecules, including amino acids, inside meteorites, and one study from 2002 found that two meteorites held extremely small, simple peptides made from two amino acids. But researchers have yet to discover other convincing evidence for the presence of such peptides and proteins in meteorites or samples returned from asteroids or comets, Glavin said. It’s unclear if the nearly total absence of even relatively small peptides in space rocks means that they don’t exist or if we just haven’t detected them yet.

But Krasnokutski’s work could encourage more scientists to really start looking for these more complex molecules in extraterrestrial materials, Glavin said. For example, next year NASA’s OSIRIS-REx spacecraft is expected to bring back samples from the asteroid Bennu, and Glavin and his team plan to look for some of these types of molecules.

The researchers are now planning to test whether bigger peptides or different types of peptides can form in molecular clouds. Other chemicals and energetic photons in the interstellar medium might be able to trigger the formation of larger and more complex molecules, Krasnokutski said. Through their unique laboratory window into molecular clouds, they hope to witness peptides getting longer and longer, and one day folding, like natural origami, into beautiful proteins that burst with potential.

Will Transformers Take Over Artificial Intelligence?


A simple algorithm that revolutionized how neural networks approach language is now taking on vision as well. It may not stop there.

An illustration showing an orange and blue network of lines focus into a clear pyramid, emerging as a white light traveling into a clear eye.
Avalon Nuovo for Quanta Magazine

Imagine going to your local hardware store and seeing a new kind of hammer on the shelf. You’ve heard about this hammer: It pounds faster and more accurately than others, and in the last few years it’s rendered many other hammers obsolete, at least for most uses. And there’s more! With a few tweaks — an attachment here, a twist there — the tool changes into a saw that can cut at least as fast and as accurately as any other option out there. In fact, some experts at the frontiers of tool development say this hammer might just herald the convergence of all tools into a single device.

A similar story is playing out among the tools of artificial intelligence. That versatile new hammer is a kind of artificial neural network — a network of nodes that “learn” how to do some task by training on existing data — called a transformer. It was originally designed to handle language, but has recently begun impacting other AI domains.

The transformer first appeared in 2017 in a paper that cryptically declared that “Attention Is All You Need.” In other approaches to AI, the system would first focus on local patches of input data and then build up to the whole. In a language model, for example, nearby words would first get grouped together. The transformer, by contrast, runs processes so that every element in the input data connects, or pays attention, to every other element. Researchers refer to this as “self-attention.” This means that as soon as it starts training, the transformer can see traces of the entire data set.
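
To make that all-to-all "self-attention" concrete, here is a minimal sketch of the core computation in plain NumPy. It is illustrative only: the random matrices stand in for learned weights, and it omits the multi-head attention, positional encodings and other machinery of the full 2017 architecture.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) token embeddings; w_q, w_k, w_v: projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])         # every position scores every other position
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: attention weights per position
    return weights @ v                              # each output mixes information from all positions

# Toy usage: 4 tokens with 8-dimensional embeddings and random (untrained) weights.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)       # -> (4, 8)
```

The key point is in the `scores` line: every row of the output depends on every position in the input from the very first layer, rather than only on nearby neighbours.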

Before transformers came along, progress on AI language tasks largely lagged behind developments in other areas. “In this deep learning revolution that happened in the past 10 years or so, natural language processing was sort of a latecomer,” said the computer scientist Anna Rumshisky of the University of Massachusetts, Lowell. “So NLP was, in a sense, behind computer vision. Transformers changed that.”

Transformers quickly became the front-runner for applications like word recognition that focus on analyzing and predicting text. This led to a wave of tools, like OpenAI’s Generative Pre-trained Transformer 3 (GPT-3), which trains on hundreds of billions of words and generates coherent new text to an unsettling degree.

The success of transformers prompted the AI crowd to ask what else they could do. The answer is unfolding now, as researchers report that transformers are proving surprisingly versatile. In some vision tasks, like image classification, neural nets that use transformers have become faster and more accurate than those that don’t. Emerging work in other AI areas — like processing multiple kinds of input at once, or planning tasks — suggests transformers can handle even more.

“Transformers seem to really be quite transformational across many problems in machine learning, including computer vision,” said Vladimir Haltakov, who works on computer vision related to self-driving cars at BMW in Munich.

Just 10 years ago, disparate subfields of AI had little to say to each other. But the arrival of transformers suggests the possibility of a convergence. “I think the transformer is so popular because it implies the potential to become universal,” said the computer scientist Atlas Wang of the University of Texas, Austin. “We have good reason to want to try transformers for the entire spectrum” of AI tasks.

From Language to Vision

One of the most promising steps toward expanding the range of transformers began just months after the release of “Attention Is All You Need.” Alexey Dosovitskiy, a computer scientist then at Google Brain Berlin, was working on computer vision, the AI subfield that focuses on teaching computers how to process and classify images. Like almost everyone else in the field, he worked with convolutional neural networks (CNNs), which for years had propelled all major leaps forward in deep learning and especially in computer vision.

Photo of Alexey Dosovitskiy in a shirt and sweater against a white background
The computer scientist Alexey Dosovitskiy helped create a neural network called the Vision Transformer, which applied the power of a transformer to visual recognition tasks. Courtesy of Alexey Dosovitskiy

CNNs work by repeatedly applying filters to the pixels in an image to build up a recognition of features. It’s because of convolutions that photo apps can organize your library by faces or tell an avocado apart from a cloud. CNNs were considered indispensable to vision tasks.
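
As a rough sketch of what "applying a filter" means in practice, the toy code below slides a small hand-written edge detector across a synthetic image. Real CNNs learn many such filters and stack them in layers, but the sliding-window arithmetic is the same; the image and kernel here are invented purely for illustration.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide `kernel` over `image` and record its response at each position."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value only looks at a small local patch of pixels.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: dark on the left, bright on the right; the filter responds only at the boundary.
image = np.zeros((6, 6))
image[:, 3:] = 1.0
vertical_edge = np.array([[-1.0, 1.0],
                          [-1.0, 1.0]])
print(convolve2d(image, vertical_edge))
```

Unlike the self-attention sketch earlier, each output value here sees only a tiny neighbourhood of the image, which is why CNNs build up from local detail to a global picture.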

Dosovitskiy was working on one of the biggest challenges in the field, which was to scale up CNNs to train on ever-larger data sets representing images of ever-higher resolution without piling on the processing time. But then he watched transformers displace the previous go-to tools for nearly every AI task related to language. “We were clearly inspired by what was going on,” he said. “They were getting all these amazing results. We started wondering if we could do something similar in vision.” The idea made a certain kind of sense — after all, if transformers could handle big data sets of words, why not pictures?

The eventual result was a network dubbed the Vision Transformer, or ViT, which the researchers presented at a conference in May 2021. The architecture of the model was nearly identical to that of the first transformer proposed in 2017, with only minor changes allowing it to analyze images instead of words. “Language tends to be discrete,” said Rumshisky, “so a lot of adaptations have to discretize the image.”

The ViT team knew they couldn’t exactly mimic the language approach since self-attention on every pixel would be prohibitively expensive in computing time. Instead, they divided the larger image into square units, or tokens. The size is arbitrary, as the tokens could be made larger or smaller depending on the resolution of the original image (the default is 16 pixels on a side). But by processing pixels in groups, and applying self-attention to each, the ViT could quickly churn through enormous training data sets, spitting out increasingly accurate classifications.
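
Here is a minimal sketch of that patching step, assuming a standard 224-by-224-pixel RGB input and the 16-pixel default mentioned above; the shapes are illustrative and omit the linear projection and positional embeddings that follow in the real model.

```python
import numpy as np

def image_to_patches(image, patch_size=16):
    """Cut an (H, W, C) image into flattened square patches ('tokens')."""
    h, w, c = image.shape
    patches = (image
               .reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
               .transpose(0, 2, 1, 3, 4)                   # group pixels by patch row and column
               .reshape(-1, patch_size * patch_size * c))  # one flat vector per patch
    return patches

# Toy usage: a 224x224 RGB image becomes 196 tokens, each a vector of length 768.
image = np.zeros((224, 224, 3))
print(image_to_patches(image).shape)   # -> (196, 768)
```

Each flattened patch then plays the role a word embedding plays in a language model, and self-attention is applied across the patches rather than across every individual pixel.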

The transformer classified images with over 90% accuracy — a far better result than anything Dosovitskiy expected — propelling it quickly to the top of the pack at the ImageNet classification challenge, a seminal image recognition contest. ViT’s success suggested that maybe convolutions aren’t as fundamental to computer vision as researchers believed.

“I think it is quite likely that CNNs will be replaced by vision transformers or derivatives thereof in the midterm future,” said Neil Houlsby of Google Brain Zurich, who worked with Dosovitskiy to develop ViT. Those future models may be pure transformers, he said, or approaches that add self-attention to existing models.

Additional results bolster these predictions. Researchers routinely test their models for image classification on the ImageNet database, and at the start of 2022, an updated version of ViT was second only to a newer approach that combines CNNs with transformers. CNNs without transformers, the longtime champs, barely reached the top 10.

How Transformers Work

The ImageNet results demonstrated that transformers could compete with leading CNNs. But Maithra Raghu, a computer scientist at Google Brain’s Mountain View office in California, wanted to know if they “see” images the same way CNNs do. Neural nets are notorious for being indecipherable black boxes, but there are ways to peek inside — such as by examining the net’s input and output, layer by layer, to see how the training data flows through. Raghu’s group did essentially this, picking ViT apart.

Maithra Raghu, a computer scientist at Google Brain, analyzed the Vision Transformer to determine exactly how it “sees” images. Unlike convolutional neural networks, which first focus on small portions to find details like edges or colors, transformers can capture the whole image from the beginning. Arun Chaganty

Her group identified ways in which self-attention leads to a different means of perception within the algorithm. Ultimately, a transformer’s power comes from the way it processes the encoded data of an image. “In CNNs, you start off being very local and slowly get a global perspective,” said Raghu. A CNN recognizes an image pixel by pixel, identifying features like corners or lines by building its way up from the local to the global. But in transformers, with self-attention, even the very first layer of information processing makes connections between distant image locations (just as with language). If a CNN’s approach is like starting at a single pixel and zooming out, a transformer slowly brings the whole fuzzy image into focus.

This difference is simpler to understand in the realm of language, where transformers were first conceived. Consider these sentences: “The owl spied a squirrel. It tried to grab it with its talons but only got the end of its tail.” The structure of the second sentence is confusing: What do those “it”s refer to? A CNN that focuses only on the words immediately around the “it”s would struggle, but a transformer connecting every word to every other word could discern that the owl did the grabbing, and the squirrel lost part of its tail.

An infographic explaining the differences in how a convolutional neural network and a transformer process images.
Samuel Velasco/Quanta Magazine. Source: Dive into Deep Learning

Now that it was clear transformers processed images fundamentally differently from convolutional networks, researchers only grew more excited. The transformer’s versatility in converting data from a one-dimensional string, like a sentence, into a two-dimensional array, like an image, suggests that such a model could handle data of many other flavors. Wang, for example, thinks the transformer may be a big step toward achieving a kind of convergence of neural net architectures, resulting in a universal approach to computer vision — and perhaps to other AI tasks as well. “There are limitations to making it really happen, of course,” he said, “but if there is a model that can universalize, where you can put all kinds of data in one machine, then certainly that’s very fancy.”

Convergence Coming

Now researchers want to apply transformers to an even harder task: inventing new images. Language tools such as GPT-3 can generate new text based on their training data. In a paper presented last year, Wang combined two transformer models in an effort to do the same for images, a much harder problem. When the double transformer network trained on the faces of more than 200,000 celebrities, it synthesized new facial images at moderate resolution. The invented celebrities are impressively realistic and at least as convincing as those created by CNNs, according to the inception score, a standard way of evaluating images generated by a neural net.

Wang argues that the transformer’s success in generating images is even more surprising than ViT’s prowess in image classification. “A generative model needs to synthesize, needs to be able to add information to look plausible,” he said. And as with classification, the transformer approach is replacing convolutional networks.

Raghu and Wang see potential for new uses of transformers in multimodal processing — a model that can simultaneously handle multiple types of data, like raw images, video and language. “It was trickier to do before,” Raghu said, because of that siloed approach where each type of data had its own specialized model. But transformers suggest a way to combine multiple input sources. “There’s a whole realm of interesting applications, combining some of these different types of data and images.” For example, multimodal networks might power a system that reads a person’s lips in addition to listening to their voice. “You could have a rich representation of both language and image information,” Raghu said, “and in a much deeper way than was possible before.”

Collage of multiple faces generated by an artificial intelligence
These faces were created by a transformer-based network after training on a data set of more than 200,000 celebrity faces. Courtesy of Atlas Wang

Emerging work suggests a spectrum of new uses for transformers in other AI domains, including teaching robots to recognize human body movements, training machines to discern emotions in speech, and detecting stress levels in electrocardiograms. Another program with transformer components is AlphaFold, which made headlines last year for its ability to quickly predict protein structures — a task that used to require a decade of intensive analysis.

The Trade-Off

 Even if transformers can help unite and improve the tools of AI, emerging technologies often come at a steep cost, and this one is no different. A transformer requires a higher outlay of computational power in the pre-training phase before it can beat the accuracy of its conventional competitors.

That could be a problem. “People are always getting more and more interested in high-resolution images,” Wang said. That training expense could be a drawback to widespread implementation of transformers. However, Raghu sees the training hurdle as one that can be overcome simply enough with sophisticated filters and other tools.

Wang also points out that even though visual transformers have ignited new efforts to push AI forward — including his own — many of the new models still incorporate the best parts of convolutions. That means future models are more likely to use both than to abandon CNNs entirely, he says.

It also suggests the tantalizing prospect of some hybrid architecture that draws on the strengths of transformers in ways that today’s researchers can’t predict. “Perhaps we shouldn’t rush to the conclusion that the transformer will be the final model,” Wang said. But it’s increasingly likely that the transformer will at least be a part of whatever new super-tool comes to an AI shop near you.

Why placebo pills work even when you know they’re a placebo


In 2014, the American footballer Marshawn Lynch – a former NFL running back nicknamed ‘Beast Mode’ because he bulldozed and ran over would-be tacklers – signed an endorsement deal with Skittles. This was more than business. To Lynch, Skittles aren’t just Skittles. Since he was young, the button-shaped candies have been his secret weapon.

As a rising football star in high school, Lynch was often struck by anxiety in advance of his games. It was often so extreme it caused an intense upset stomach. Young Lynch tried several over-the-counter remedies, but nothing seemed to work. Then one day, his mother, Delisa Lynch, told him that Skittles would settle his stomach. Not only that, but she said the Skittles would also make him play better: ‘They’re going to make you run fast, and they’re going to make you play good.’ And, somehow, they did.

No offence to Skittles lovers, but there’s nothing special about them. They’re mostly sugar, corn syrup and artificial flavours. Yet, throughout his college football and illustrious NFL career, Lynch held on to the belief that Skittles helped his game, and he always ate them before taking the field. You might assume that the Skittles were, for him, just a silly pre-game ritual. But by eating the Skittles and believing that they helped improve his performance, Lynch was taking advantage of a very real phenomenon: the placebo effect.

The placebo effect occurs when someone experiences a benefit due primarily to the belief that something they are doing – taking a medication, engaging in a ritual, or getting treatment – will have a beneficial effect. Placebos are far more powerful than most people realise. They’ve been shown in research trials to help with anxiety, depression, pain, asthma, the motor symptoms of Parkinson’s disease, and recovery from osteoarthritis of the knee. It’s worth noting that these benefits aren’t just seen in terms of how people feel, although that alone is important, but also in terms of measurable physiological improvements.

The power of the placebo effect is such that new drugs are required to demonstrate that they have additional benefits, above and beyond a placebo, before they can go to market. Most drug and behavioural intervention trials fail this test – not because the drugs or interventions don’t work, but because the placebo effect is so strong.

Given that placebos are such a powerful treatment on their own, we might ask ourselves: why are they not being used as a treatment more widely?

One of the biggest barriers is an ethical dilemma. On the one hand, placebos are highly effective for certain symptoms and conditions, and can have a real therapeutic effect. On the other hand, to benefit from placebos, the predominant thinking has been that people need to be misled into believing they’re taking an active treatment. Since most medical authorities worldwide have agreed – for good reasons – that lying to patients isn’t a best practice, this reliance on deception has prevented the widespread use of placebos as treatments in and of themselves.

But what about the case of Marshawn Lynch? Of course, he knows that Skittles don’t really have magical powers. He also knows the actual ingredients of Skittles can’t make him run faster or play better. And yet, he continues taking them, believing in and apparently enjoying their beneficial effects.

Lynch’s experience reflects an emerging research trend to study the possible beneficial effects of placebos given without deception, also known as ‘open-label placebos’ or ‘non-deceptive placebos’. In a foundational study in 2010, researchers at Harvard Medical School randomised patients experiencing irritable bowel syndrome (IBS) symptoms into either an open-label placebo group or a no-treatment control group – and crucially, all the patients knew which group they were in. The researchers told patients in the open-label placebo group that the placebo effect is powerful, that the body can respond automatically to taking placebo pills (similar to the classic conditioning example of Pavlov’s dogs, who salivated at the sound of the dinner bell), that a positive attitude helps but is not required, and that it is vital to take the pills faithfully for the entire 21-day study period, regardless of their belief in the pills. By the end of the study, even though the placebo pills contained no active ingredients, and despite the patients knowing they’d been taking placebos, they reported fewer IBS symptoms and more improvement in overall quality of life than patients in the no-treatment control group.

This paradigm of giving non-deceptive placebo pills as treatment has been repeated, including a recent replication of the benefit for IBS, while other trials have shown benefits for patients with ADHD and hay fever. Unsurprisingly, further research suggests that open-label placebos can also work in non-clinical settings. Together with colleagues, one of us (Darwin) showed in 2020 that an open-label placebo nasal spray reduced the distress provoked by looking at emotionally upsetting images. Like Lynch’s Skittles, the open-label placebo we used helped our volunteers manage their feelings and anxiety, an effect that was even visible in their electrical brain activity.

So what’s really going on here? It’s not the sugar pill itself that leads to these changes in psychology and physiology, and it’s not magic either. Research in medicine and psychology on both traditional and open-label placebos suggests several mechanisms at play.

One is people’s expectations, or the positive belief that a treatment might have beneficial effects. In open-label placebo studies, including Darwin’s nasal spray study mentioned above, people are often told that a belief in the placebo isn’t necessary, but they are encouraged to keep an open mind. Some of the clinical studies have involved volunteers for whom many other treatments have failed, and so they have added reason to hope that this experimental, slightly unorthodox treatment might work for them. Emerging research suggests that this belief might be partially responsible for the benefits. For example, a study one of us (Kari) ran as part of her PhD showed that open-label placebos led to a reduction in allergic response from a histamine skin-prick test, but only for those volunteers who believed strongly in the beneficial power of placebos.

Another possible mechanism is conditioning, in which the body learns to associate beneficial effects with an action or ritual. Many of us have had repeated experiences of taking pills that help reduce our symptoms – ibuprofen for a headache, NyQuil for a cold, or Pepto Bismol for an upset stomach. Over time, the body may learn to associate taking a pill with symptom relief. So the very act of taking a pill itself can catalyse the body’s own capacity for healing.

This conditioning is sometimes done explicitly in research with open-label placebos. In one clinical study, researchers asked patients recovering from spine surgery to pair their active pain medication with open-label placebos and also to take the placebo pills on their own. The placebo pills began exerting their own pain relief. Compared with the control group who received treatment as usual, patients who also took the open-label placebo pills consumed approximately 30 per cent less daily morphine in the days after surgery.

There are also other, less well-studied mechanisms that may be at play in open-label placebo effects. For example, when someone starts taking a treatment – placebo or not – they often begin paying closer attention to their own minds and bodies. Most conditions and symptoms fluctuate over time. For example, when we are experiencing a headache, even if we don’t take any medication or other action, the severity of that headache will naturally decrease over time. People who take open-label placebo pills may hope for improvement, making them more attuned to times when their symptoms subside. Other research shows that medical rituals – whether that’s taking a pill, getting an injection, or merely having a cup of tea and taking a hot bath – can evoke both expectations for healing and a conditioned response. Thus, the act of taking pills faithfully can become a healing medical ritual in and of itself.

Now that we are seeing an accumulation of evidence that open-label placebos might be helpful, researchers and clinicians are starting to think about how to apply them in practice to benefit patients. For certain conditions, particularly those such as IBS that have already been studied, open-label placebos could be an effective treatment on their own. As the American footballer Marshawn Lynch has known for years, and new research is demonstrating, open-label placebos could also be used for reducing stress and anxiety to help people get in the zone before exams or games. And as research continues combining open-label placebos with existing medications, we may find them useful for tapering or decreasing doses of medications that have side-effects, such as pain medications or medications for disorders such as ADHD.

Current and future research is continuing to shed light on which conditions open-label placebos might be best-suited to. As the field grows, a debate must follow: will open-label placebos ever become part of mainstream medicine? Is it better to focus efforts on convincing doctors (and patients) that open-label placebos can be effective, or should we better understand the mechanisms of open-label placebo effects and try to harness those mechanisms in conjunction with active medications and treatments, such as by boosting patient expectations? Will open-label placebos ever be more than a semi-fringe last resort for conditions and patients for whom most other treatments have failed?

Of no small consideration is the fact that, with little money to be made from prescribing sugar pills, the influential pharmaceutical industry has no incentive to promote this kind of medication over patented, privatised medications and treatments. In many ways, research on open-label placebos is still in its infancy. The next 10 years may determine the ultimate impact of this research. As the field progresses, one of us (Darwin) plans to continue to investigate and optimise open-label placebo effects on stress, anxiety and depression in both clinical and non-clinical settings.

On an individual level, while taking open-label placebos – or Skittles – isn’t a substitute for seeking medical advice or treatment, if you’re trying to be like Marshawn Lynch, you could begin to think about how you might use open-label placebos in your own life. As a starting point, when you take a real medication, such as Tylenol (paracetamol), you could try to boost some of the relevant mechanisms – such as expectations and conditioning – for example, by reminding yourself of the benefits you expect. You could consider using some of your own open-label placebos for minor issues that don’t require medical intervention, such as aches and minor pains, stress and anxiety, difficulty sleeping, or mild upset stomach. You can drink tea (reminding yourself that lots of herbal teas do actually contain active healing ingredients), take a hot bath (which can also have medical benefits), or do other rituals that help you feel better while deliberately focusing on their healing benefits. Try finding your own version of an open-label placebo to help you in times of need. The science says it just might work.

Déjà vu is just one of many uncanny kinds of déjà experiences


I was 16, and it was the spring of 1956. I remember the new leaves were beginning to sprout on the elm trees near where we lived in Oklahoma. I recall how happy we were that the road outside our home had finally been paved – it was now a lot quieter when cars drove past (and less dusty, too). And I remember that the high-school ‘sock hop’ dances had begun, where DJs played Elvis Presley’s first hit ‘Heartbreak Hotel’. But what I remember most from that spring was my first unsettling encounter with the impossible.

It started with a game. My friends and I decided to play hide-and-seek on our bicycles. One of us rode off to hide and, after some time, the rest fanned out throughout the neighbourhood trying to find him. This was in a small town on the northwest edge of Oklahoma City, which consisted mainly of small suburban homes with fenced yards, car garages and sheds, as well as trees and bushes that provided ideal camouflage. It didn’t take long until we decided that searching was hopeless: there were just too many places where he could be hiding. We gave up. But while riding back, an image appeared in my mind’s eye: I saw our quarry laying down his bicycle in the front yard of my house. We couldn’t have seen this directly because the yard was blocked by other houses. And yet, as we rounded the corner, I saw exactly what I’d pictured: our friend laying down his bicycle in the grass. I had seen it before seeing it – and in exquisite detail!

This experience, and others like it, contradicted everything I thought I knew about how reality functions and launched me on a quest that continues to the present day. At first, in the late 1950s and early ’60s, no one around me had any idea what I was talking about when I tried to explain these experiences. It wasn’t until 1970, while at the library of the National Institutes of Health near Washington, DC, that I finally found a knowledgeable librarian who knew that such experiences are called déjà vu, a French phrase meaning ‘already seen’. I learned that many others had had similar impossible experiences, too. I was not alone. This whetted my appetite. But my discovery of déjà vu led only to more questions. The deeper I delved, the more complex it became. ‘Déjà vu’ was not one thing. In fact, it was better understood as a variety of experiences: ‘déjà experiences’.

For me, the defining features of a true déjà experience are sudden shock and bafflement, accompanied by the unsettling conviction that what one is experiencing is impossible – and yet it’s happening. This is different to feelings of vague familiarity with things that remind you of something from your past, or someone you know or have known. Such experiences are not uncommon. In 1992, the researcher John W Fox had a study published on ‘reported paranormal experiences’, including déjà vu. Out of a sample of 3,885 US adults, around 65 per cent said that déjà vu had happened to them, from once or twice to often. Roughly the same percentage has been found by surveys in other cultures as well.

Historically, ‘déjà vu’ has been used as an umbrella term to describe a range of possible déjà experiences. In his book The Deja Vu Experience (2004), Alan S Brown lists 32 phrases that have been used to describe déjà vu, 53 different definitions for it, and devotes several whole chapters to the hypotheses put forward to explain it. Physiologists, for instance, have opined that its origin lies in delayed communication between the cerebral hemispheres; psychologists figure these experiences must be due to memory glitches (that the person has seen something in his or her past, and a present situation recalls it); neurologists say they are often caused by temporal lobe epilepsy; and some parapsychologists say they may arise from precognitive dreams. Others prefer reincarnation as an explanation.

The earliest mention of a déjà experience that I’ve found appears in the novel Guy Mannering; or, The Astrologer (1815) by the Scottish historian Walter Scott. While wandering through a ruined castle, the protagonist muses about a ‘shadowy recollection’ of events that have not yet taken place, and unfamiliar places ‘not entirely strange to me’:

[W]hy is it that some scenes awaken thoughts which belong as it were to dreams of early and shadowy recollection, such as my old Brahmin Moonshie would have ascribed to a state of previous existence? Is it the visions of our sleep that float confusedly in our memory, and are recalled by the appearance of such real objects as in any respect correspond to the phantoms they presented to our imagination? How often do we find ourselves in society which we have never before met, and yet feel impressed with a mysterious and ill-defined consciousness that neither the scene, the speakers, nor the subject are entirely new; nay, feel as if we could anticipate that part of the conversation which has not yet taken place!

Some years later, Charles Dickens reflected on a similar sensation in David Copperfield (1850):

We have all some experience of a feeling, that comes over us occasionally, of what we are saying and doing having been said and done before, in a remote time – of our having been surrounded, dim ages ago, by the same faces, objects, and circumstances – of our knowing perfectly what will be said next, as if we suddenly remembered it!

Toward the turn of the century, many terms were used to designate these experiences. So how did one term come to serve as a proxy for this wider suite of uncanny sensations? It seems ‘déjà vu’ first entered the scientific literature in 1876. Émile Boirac, a professor of philosophy at a classical high school in the French city of Poitiers, had a letter published in the Revue Philosophique in which he described ‘le sentiment du déjà vu’. He shared his own experiences and classified them as one type of illusionary memory. However, his use of the terminology was forgotten or ignored. ‘Déjà vu’ didn’t appear again until 1893, when the French philosopher André Lalande and others were credited with using it.

The term was officially proposed in 1896 when the Société Medico-Psychologique met in Paris to designate the phenomenon. The French psychiatrist François-Léon Arnaud objected to the other suggested terms, such as ‘false recognition’, ‘false memory’, ‘paramnesia’ and ‘reminiscence’, because they were too broad. He felt that ‘already seen’ more neatly fitted the experience (and was also more neutral from a theoretical point of view). Not everyone agreed. As several authors insisted, déjà vécu (‘already lived’) would probably have been more accurate and a better choice because such experiences entail all the senses, not just vision. However, déjà vécu and other alternatives never gained wide acceptance and ‘already seen’ was taken up by several writers, including the pioneering psychologist Pierre Janet, who had been present at that 1896 meeting. The term quickly entered popular parlance and, as a result, a panoply of unsettling experiences collectively gained the name ‘déjà vu’.

The problem here is that real progress – a deeper understanding of the phenomenon – will be possible only once more accurate terminology is developed and used in a well-differentiated way. With better terms, we can be more precise about the experiences that are out there and the conditions that surround their occurrence.

In his 1981 doctoral thesis, the South African neuropsychiatrist Vernon Neppe, now director of the Pacific Neuropsychiatric Institute in Seattle, enumerated 21 variants of déjà vu. Among them were the aforementioned déjà vécu, as well as déjà visité (‘already visited’), déjà rêvé (‘already dreamt’), déjà entendu (‘already heard’), déjà senti (‘already smelt’), and déjà lu (‘already read’). It was Neppe, in fact, who first proposed that we place the various types under the heading ‘déjà experiences’.

Each form may well have its own cause and explanation. In my research, I have concentrated on the two forms of déjà experience that I believe are the most prevalent: déjà visité and déjà vécu. Déjà visité is when you find yourself in a place you know you’ve never been before, but you still somehow know your way around or it seems amazingly familiar. Déjà vécu is when a person relives a situation for the second time – the most common déjà experience people describe. They’re what the former Time magazine journalist James Geary in 1997 described as ‘Been There, Done That’. In terms of explanation, some instances of déjà visité could be understood through theories of reincarnation. However, these wouldn’t explain déjà vécu because the clothes people are wearing and many of the things they talk about would not have been present in a previous lifetime.

As part of the search for other explanations, several attempts have been made to reproduce déjà experiences and, in some of them, feelings of familiarity have been evoked but, as far as I know, none were accompanied by strong feelings of startle and bafflement. Some general characteristics of déjà experiences, though, are known and/or agreed upon. For most people, they occur sporadically and are unpredictable. Some people have them often and others have them rarely. They tend to be more frequent and intense when the person is young, say between 15 and 25, and taper off and lose intensity as they get older. They usually don’t last very long. Since, like dreams, déjà experiences cannot be reliably reproduced in the laboratory, all theories are necessarily ad hoc, and it may well be that many of them are true, depending on which form of déjà experience the person has had.

I myself explain my déjà vécu experiences as arising from precognitive dreams that were not remembered until they ‘came true’. I am far from saying that every instance of déjà vécu or déjà visité is caused by a preceding precognitive dream, but many are. When I have given lectures about it, I usually ask the audience how many are convinced their déjà experiences come from dreams they’ve had. About one-third have raised their hands, so I’m certainly not alone.

Some just find the experiences an intriguing quirk, something fun to think about and ponder. Then there are those who find them scary: they’re afraid everything is predetermined and that they have no free will. Some say they find it reassuring – it suggests that they’re on the right track in life. For others, the precognitive element opens them up to metaphysical and non-materialistic views of reality. I’ve been so intrigued by these encounters that I’ve wanted to learn as much as I can.

And yet, despite all the hypotheses that have been put forward, we’re still no closer to explaining how these arise than when they were first described in Scott’s novel. No closer to explaining my encounter that day in the spring of 1956; no closer to explaining the sudden shock, the bafflement, the unsettling conviction that I was experiencing something impossible – and yet, it was happening. I find that exciting. There’s still a lot to be done.

Modular cognition


Powerful tricks from computer science and cybernetics show how evolution ‘hacked’ its way to intelligence from the bottom up

Michael Levin is the Vannevar Bush Chair and Distinguished Professor of Biology at Tufts University in Massachusetts, where he directs the Allen Discovery Center and the Tufts Center for Regenerative and Developmental Biology.

Rafael Yuste is professor of biological sciences and neuroscience, co-director of the Kavli Institute for Brain Science, director of the NeuroTechnology Center and chair of the NeuroRights Foundation at Columbia University.

Intelligent decision-making doesn’t require a brain. You were capable of it before you even had one. Beginning life as a single fertilised egg, you divided and became a mass of genetically identical cells. They chattered among themselves to fashion a complex anatomical structure – your body. Even more remarkably, if you had split in two as an embryo, each half would have been able to replace what was missing, leaving you as one of two identical (monozygotic) twins. Likewise, if two mouse embryos are mushed together like a snowball, a single, normal mouse results. Just how do these embryos know what to do? We have no technology yet that has this degree of plasticity – recognising a deviation from the normal course of events and responding to achieve the same outcome overall.

This is intelligence in action: the ability to reach a particular goal or solve a problem by undertaking new steps in the face of changing circumstances. It’s evident not just in intelligent people and mammals and birds and cephalopods, but also cells and tissues, individual neurons and networks of neurons, viruses, ribosomes and RNA fragments, down to motor proteins and molecular networks. Across all these scales, living things solve problems and achieve goals by flexibly navigating different spaces – metabolic, physiological, genetic, cognitive, behavioural.

But how did intelligence emerge in biology? The question has preoccupied scientists since Charles Darwin, but it remains unanswered. The processes of intelligence are so intricate, so multilayered and baroque, no wonder some people might be tempted by stories about a top-down Creator. But we know evolution must have been able to come up with intelligence on its own, from the bottom up.

Darwin’s best shot at an explanation was that random mutations changed and rearranged genes, altered the structure and function of bodies, and so produced adaptations that allowed certain organisms to thrive and reproduce in their environment. (In technical terms, they are selected for by the environment.) In the end, somehow, intelligence was the result. But there’s plenty of natural and experimental evidence to suggest that evolution doesn’t just select hardwired solutions that are engineered for a specific setting. For example, lab studies have shown that perfectly normal frog skin cells, when liberated from the instructive influence of the rest of the embryo, can reboot their cooperative activity to produce a novel proto-organism, called a ‘xenobot’. Evolution, it seems, doesn’t come up with answers so much as generate flexible problem-solving agents that can rise to new challenges and figure things out on their own.

The urgency of understanding intelligence in biological terms has become more acute with the ‘omics’ revolution, where new techniques are amassing enormous amounts of fresh data on the genes, proteins and connections within each cell. Yet the deluge of information about cellular hardware isn’t yielding a better explanation of the intelligent flexibility we observe in living systems. Nor is it yielding sufficient practical insights, for example, in the realm of regenerative medicine. We think the real problem is not one of data, but of perspective. Intelligence is not something that happened at the tail end of evolution, but was discovered towards the beginning, long before brains came on the scene.

From the earliest metabolic cycles that kept microbes’ chemical parameters within the right ranges, biology has been capable of achieving aims. Yet generation after generation of biologists have been trained to avoid questions about the ultimate purpose of things. Biologists are told to focus on the ‘how’, not the ‘why’, or risk falling prey to theology. Students must reduce events to their simplest components and causes, and study these mechanisms in piecemeal fashion. Talk of ‘goals’, we are told, skirts perilously close to abandoning naturalism; the result is a kind of ‘teleophobia’, a fear of purpose, based on the idea that attributing too much intelligence to a system is the worst mistake you can make.

But the converse is just as bad: failing to recognise intelligence when it’s right under our noses, and could be useful. Not only is ‘why’ always present in biological systems – it is exactly what drives the ‘how’. Once we open ourselves up to that idea, we can identify two powerful tricks, inspired by computer science and cybernetics, that allowed evolution to ‘hack’ its way to intelligence from the bottom up. No skyhooks needed.

Embryos aren’t the only things capable of flexible self-repair. Many species can regenerate or replace lost body parts as adults. The Mexican salamander known as the axolotl can regrow lost limbs, eyes, jaws and ovaries, as well as the spinal cord and portions of the heart and brain. The body recognises deviations from its correct anatomy as errors, and the cells work rapidly to get back to normal. Similarly, when Picasso-style tadpoles are made in the lab, with eyes and other organs randomly placed in different starting positions, they undergo novel paths of movement that nevertheless end up creating largely normal frog faces.

How do the tadpole’s facial organs know when to stop moving? How does the salamander’s tissue determine that a limb of the right size and shape has been produced, and that the remodelling can stop? It’s clear that cell groups need to be able to ‘work until goal X is satisfied’, storing a memory of the goal (which might be an anatomical configuration much larger than any single cell), and course-correcting if they are perturbed along the way.

One explanation of embryos’ amazing feats comes from control theory, which shows us that dynamical systems (such as a thermostat) can pursue goals without magic, simply by using feedback to correct errors. In biology this process is known as ‘homeostasis’: it keeps parameters such as pH within specific limits. But the same dynamic operates on a much larger scale, as long as cells are physiologically plugged into networks. Networks of cells are able to measure chemical, electrical and biomechanical properties of tissues, and make decisions about things that single cells cannot appreciate.
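
To make the control-theory picture concrete, here is a minimal sketch in Python of the kind of error-correcting loop the thermostat analogy describes. The variable names and numbers are ours, chosen purely for illustration; they do not come from any biological model discussed in the essay.

```python
import random

def homeostat(setpoint, gain=0.5, steps=100):
    """A minimal feedback loop: measure the state, compare it with the
    setpoint, and apply a correction proportional to the error, despite
    random perturbations from the environment (illustrative only)."""
    state = 0.0                                  # e.g. pH, voltage, limb length
    for _ in range(steps):
        state += random.uniform(-0.5, 0.5)       # the world knocks the state around
        error = setpoint - state                 # deviation from the goal
        state += gain * error                    # feedback correction towards the goal
    return state

# The setpoint can be changed without 'rewiring' the loop, which is the point
# of the thermostat analogy: a module can be re-targeted simply by moving its goal.
print(homeostat(setpoint=7.4))   # settles near 7.4
print(homeostat(setpoint=7.0))   # same machinery, new goal
```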

Single cells can process local information about their environment and their own states to pursue tiny, cell-level goals (such as staying pointed in a specific direction). But networks of cells can integrate signals from across distances, store memories of patterns, and compute the outcomes of large-scale questions (such as ‘Is this finger the right length?’ or ‘Does this face look right?’) Cell networks carry out computations that can assess larger quantities (such as anatomical shape) and direct the underlying cell activity to bring the system closer to a specific desired goal (called a ‘setpoint’).

Achieving such intelligent, finely tuned calibration likely relies on modularity – the first step that we believe can explain the emergence of intelligent behaviour. Like a large organisation that deploys a number of specialised teams to make and sell a single product, modularity is about self-maintaining units that can cooperate or compete to achieve local outcomes, but end up collectively working towards some larger goal. Crucially, this structure avoids micromanagement – each level doesn’t need to know how the lower levels do their job, but can simply motivate them (with things like reward molecules and stress pathways) to get it done.

Modularity – the presence of competent subunits that solve problems in their own local problem space and can cooperate or compete to achieve bigger goals – is part of what enables the emergence of intelligence in biology. The way these modules’ agendas are nested within one another in biological networks gives them the flexibility to meet goals at each level, even when conditions change at lower levels.

When unicellular organisms joined up to make multicellular bodies, each module didn’t lose its individual competency. Rather, cells used specific proteins to merge into ever more complex networks that could implement larger objectives, possess longer memories and look further into the future. Networks of cells began to work as a society – measuring and pursuing goals defined at the level of the collective (such as ‘organ size’ and ‘organ shape’). Stress about large-scale states (such as ‘attention: finger too short’) triggered change, which was shared across tissues to implement coordinated action. This multiscale architecture has many advantages. For one, it’s easy for evolution to simply shift the modules around and let the cycle of error reduction take care of the rest – setting new conditions as the setpoint, or the source of stress (eg, ‘wrong length of limbs’ instead of ‘misfolded proteins’). It’s similar to how you can change the setpoint of your thermostat without having to rewire it, or even know how it works. Feedback loops within feedback loops, and a nested hierarchy of incentivised modules that can be reshuffled by evolution, offer immense problem-solving power.

One implication of this hierarchy of homeostatically stable, nested modules is that organisms became much more flexible while still maintaining a coherent ‘self’ in a hostile world. Evolution didn’t have to tweak everything at once in response to a new threat, because biological subunits were primed to find novel ways of compensating for changes and functioning within altered systems. For example, in planarian flatworms, which reliably regenerate every part of the body, using drugs to shift the bioelectrically stored pattern memory results in two-headed worms. Remarkably, fragments of these worms continue to regenerate two heads in perpetuity, without editing the genome. Moreover, flatworms can be induced, by brief modulation of the bioelectric circuit, to regrow heads with shape (and brain structure) appropriate to other known species of flatworms (at about 100 million years of evolutionary distance), despite their wild-type genome.

Modularity means that the stakes for testing out mutations are reasonably low: competent subunits can be depended upon to meet their goals under a wide range of conditions, so evolution rarely needs to ‘worry’ that a single mutation could ruin the show. For example, if a new mutation results in an eye being in the wrong place, a hardwired organism would find it very hard to survive. However, modular systems can compensate for the change while moving the eye back to where it’s supposed to be (or enabling it to work in its new location), thus having the opportunity to explore other, possibly useful, effects of the mutation. Tadpole eyes have been shown to do this, conferring vision even if asked to form on the tail, by finding connections to the spinal cord instead of the brain.

Modularity provides stability and robustness, and is the first part of the answer to how intelligence arose. When changes occur to one part of the body, its evolutionary history as a nested doll of competent, problem-solving cells means subunits can step up and modify their activity to keep the organism alive. This isn’t a separate capacity that evolved from scratch in complex organisms, but instead an inevitable consequence of the ancient ability of cells to look after themselves and the networks of which they form a part.

But just how are these modules controlled? The second step on the road to the emergence of intelligence lies in knowing how modules can be manipulated. Encoding information in networks requires the ability to catalyse complex outcomes with simple signals. This is known as pattern completion: the capacity of one particular element in the module to activate the entire module. That special element, which serves as a ‘trigger’, starts the activity, kicking the other members of the module into action and completing the pattern. In this way, instead of activating the entire module, evolution needs only to activate that trigger.

Pattern completion is an essential aspect of modularity that we’re just beginning to understand, thanks to work in developmental biology and neuroscience. For example, an entire eye can be created in the gut of a frog embryo by briefly altering the bioelectric state of some cells. These cells are triggered to complete the eye pattern by recruiting nearby neighbours (which were not themselves bioelectrically altered) to fill in the rest of the eye. Similar outcomes can be achieved by genetic or chemical ‘master regulators’, such as the Hox genes that specify the body plan of most bilaterally symmetrical animals. In fact, one could relabel these regulator genes as pattern-completion genes, since they enable the coordinated expression of a suite of other genes from a simple signal. The key is that modules, by continuing to work until certain conditions are met, can fill in a complex pattern when given only a small part of it. In doing so, they take a simple command – the activation of the trigger – and amplify it into an entire program.

Crucially, pattern completion doesn’t require defining all of the information needed to create an organ. Evolution does not have to rediscover how to specify all the cell types and get them arranged in the correct orientation – all it has to do is activate a simple trigger, and the modular organisation of development (where cells build to a specific pattern) does the rest. Pattern completion enables the emergence of developmental complexity and intelligence: simple triggers of complex cascades that make it possible for random changes to DNA to generate coherent, functional (and occasionally advantageous) bodies.

Modular pattern completion is also becoming evident in recent experiments in neuroscience. The pinnacle of intelligent behaviour in biology is the human brain. Nervous systems are built with large numbers of neurons, where every neuron is typically connected to very large numbers of other neurons. Over evolutionary time, there’s been a steady progression to larger and more connected brains, reaching astronomical numbers – with close to 100 billion neurons and hundreds of thousands of connections per neuron in humans. This move towards higher numbers of neurons and connections cannot be a coincidence: a system with many interacting units is exactly what it takes to become more competent and complex.

But what exactly do all these vast neural circuits do? While many neuroscientists would agree that the function of the nervous system is to sense the environment and generate behaviour, it’s less clear how that actually happens. The traditional view known as the ‘neuron doctrine’, proposed by Santiago Ramón y Cajal and Charles Scott Sherrington more than a century ago, is that each neuron has a specific function. This would make the brain analogous to an aeroplane, built with millions of components, each precisely designed for a particular task.

Within this framing, neuroscientists have teased apart the brain and studied it one neuron at a time, linking the activity of individual neurons to the behaviour of the animal or the mental state of a person. Yet if the true goals of biological systems derive from how their subunits or modules interact, then analysing the brain by looking at a single neuron is as futile as trying to understand a movie while fixating on an isolated pixel.

What type of properties might neural circuits generate? Because neurons can set each other off, neural circuits can generate internal states of activity that are independent of the outside world. A group of connected neurons could auto-excite each other and become active together for a period of time, even if nothing external is happening. This is how we might understand the existence of the concepts and abstractions that populate the human mind – as the endogenous activity of modules, made up of ensembles of neurons.

Using those intrinsic activity states as symbols, evolution could then build formal representations of reality. It could manipulate those states instead of manipulating the reality, just as we do by defining mathematical terms to explore relationships between objects. From this point of view – the gist of which was already proposed by Immanuel Kant in his Critique of Pure Reason (1781, 1787) – the evolution of the nervous system represents the appearance of a new formal world, a symbolic world, that greatly expands the possibilities of the physical world because it gives us a way to explore and manipulate it mentally.

Neuronal modules could also be organised in a hierarchy, where higher-level modules encode and symbolise increasingly more abstract entities. For example, lower-level groups of neurons in our spinal cord might activate muscle fibres, and be under the control of upper-level ensembles in the motor cortex that could encode the desired movement more abstractly (‘to change the position of the leg’). In turn, these motor cortex ensembles could be controlled by higher-order neurons again (‘to perform a pirouette’), which could be controlled by groups of neurons in the prefrontal cortex that represent the behavioural intention (‘to perform in a ballet’).

Using modules nested in a hierarchy provides a neat solution to a tough design challenge: instead of specifying and controlling every element, one at a time, nature uses neuronal ensembles as computational building blocks to perform different functions at different levels. This progression towards increasing abstraction could help explain how cognition and consciousness might arise, as emergent functional properties, from relatively simple neural hardware. This same powerful idea of hierarchical emergence is behind the layered neural network models in computer science, named ‘neural’ because they are inspired by neural circuits.
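
As a rough illustration of this hierarchy of nested goals, consider the sketch below. The function names and the ‘choreography’ table are hypothetical, invented here only to show how each level can hand an abstract goal to the level beneath it without micromanaging how that goal is met.

```python
# Hypothetical sketch of nested modules: each level translates a more abstract
# goal into setpoints for the level below, never specifying how that level
# achieves them. All names and values are illustrative, not from the essay.

def spinal_module(target_angle: float) -> str:
    # Lowest level: drive muscle fibres until the joint reaches its setpoint.
    return f"adjust muscle activity until joint angle = {target_angle} degrees"

def motor_cortex(movement: str) -> list:
    # Middle level: decompose a named movement into joint-level setpoints.
    plan = {"raise leg": [90.0], "pirouette": [90.0, 45.0]}
    return [spinal_module(angle) for angle in plan[movement]]

def prefrontal_cortex(intention: str) -> list:
    # Top level: map a behavioural intention onto a sequence of movements.
    choreography = {"perform in a ballet": ["raise leg", "pirouette"]}
    commands = []
    for movement in choreography[intention]:
        commands.extend(motor_cortex(movement))
    return commands

for command in prefrontal_cortex("perform in a ballet"):
    print(command)
```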

But back to Darwin’s problem: if evolution is blind and acting solely on the individual units, one mutation at a time, how can the overall architecture and function of an organism be modified for the common good? Besides generating modules, neural networks have another interesting property we’ve already discussed: pattern completion.

In recent experiments, mice were induced to have artificial perceptions or visual hallucinations by activating only two neurons in their visual cortex. How is this possible, given that the mouse brain has around 100 million neurons? The answer is that those neurons can trigger a neuronal ensemble, via pattern completion. Neurons’ connectivity seems to amplify activity, so, like an avalanche, a change in one neuron ends up triggering the entire module. This means that you can activate an entire module of neurons by turning on only one key member of the group.

Pattern completion could be at the core of how the brain works internally – recruiting module after module, at different levels of the hierarchy, depending on the task at hand. But why wouldn’t pattern completion end up tripping the entire brain into an epileptic seizure? By adding inhibitory connections to these neural circuits – small circuit breakers – one can restrict these avalanches to small groups of neurons, instead of catastrophically activating the entire brain. By harnessing pattern completion along with inhibitory circuits, the brain has the ability to select and manipulate modules at different levels as needed.

(I) Trigger stimuli allow for pattern completion, because networks can be organised such that they tend to settle in specific states (memories) from diverse positions (like the way a ball rolls down into a well from many different starting points). The topology of this landscape means that the system automatically returns to the same state when disturbed – symbolised by the ball falling to the bottom of a valley, when it is placed near the edge. This metaphor for pattern completion captures the idea of how the ‘landscape’ of the system ‘makes you do it’: it allows subunits to pursue local homeostatic goals (such as minimising a variable to move down a gradient), but enables the end result to be a higher-level pattern
(II) The ability for networks to reach the same outcome from a range of partial inputs – the generalisation of pattern completion – is also exploited in computer science. For example, computational neural networks can recover an entire remembered image based on only a partial example, from which some aspect has been removed
(III) The flatworm is a master of pattern completion, generating its entire anatomy from a small piece of its pattern. The bioelectric network of cells stores a pattern memory that controls individual cells in order to restore the whole
(IV) Pattern completion is also present in neural circuits, where a small group of connected neurons can store associative memory. Here, you see: a) independent neurons are present without synaptic connections; b) neurons 1, 3 and 5 are activated simultaneously by external inputs, which forms and strengthens the synaptic connections between them; c) when the input ceases, neuronal activity also stops (however, the synaptic connections between the three neurons remain; these neurons have formed a module, and their interconnectivity determines how the module activates); d) an input activates just one of the original three neurons, but the connections activate all three neurons and complete the entire pattern; e) even after the input current has ended, the neurons can remain persistently active, in effect, storing a memory of the input
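
The associative memory described in panel (IV) above can be imitated with a toy Hopfield-style network. The sketch below is our illustration of the general principle, not a model used by the authors: Hebbian weights between co-active units let a cue that switches on a single member of the stored module pull the whole pattern back.

```python
import numpy as np

# Toy Hopfield-style associative memory illustrating pattern completion
# (an illustration of the general principle, not a model from the essay).
# Units that were active together get strengthened connections, so later
# activating just one member of the module recalls the whole stored pattern.

stored = np.array([1, -1, 1, -1, 1, -1])           # a 'module': three of six units active (+1)

weights = np.outer(stored, stored).astype(float)   # Hebbian learning rule
np.fill_diagonal(weights, 0.0)                     # no self-connections

state = np.array([1, -1, -1, -1, -1, -1], dtype=float)   # partial cue: only one unit on

# Let the network settle: each unit flips to match the sign of its summed input.
for _ in range(5):
    for i in range(len(state)):
        state[i] = 1.0 if weights[i] @ state >= 0 else -1.0

print(state)   # recovers [ 1. -1.  1. -1.  1. -1.]: the full pattern is completed
```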

In this way, pattern completion enables connections between modules at the same and different levels of the hierarchy, knitting them together as a single system. A key neuron in a lower-level module can be activated by an upper-level one, and vice versa. It’s like changing the march of an army: you don’t need to convince every soldier, just the general, who makes the others fall into line. Consistent with the many parallels between neural and non-neural signalling, pattern completion shows us how a single event – say, a mutation – can change an army, or build an eye.

From microbe cells solving problems in metabolic space, to tissues solving problems in anatomical space, to groups of people navigating the world as we know it, life has ratcheted towards intelligent designs by exploiting the ability of modules to get things done, in their own way. Homeostatic loops provide flexible responses, working until setpoints are achieved, even when things change. Modularity means that evolution can readily explore what the collective considers to be a ‘correct’ condition, and what actions it takes to get there. Hierarchies of modules mean that simple signals can trigger complex actions that don’t need to be rediscovered or micromanaged, yet can adapt when only a small piece of the puzzle triggers them to do so.

We have sketched a set of approaches to biology that rely heavily on concepts from cybernetics, computer science, and engineering. But there’s still a lot of work to do in reconciling these approaches. Despite recent advances in molecular genetics, our understanding of the mapping between the genome on the one hand, and the (changeable) anatomy and physiology of the body on the other, is still at a very early stage. Much like computer science, which moved from rewiring hardware in the 1940s to a focus on algorithms and software that could control the device’s behaviour, biological sciences now need to change tack.

The impact of understanding nested intelligence across multiple scales cuts across numerous fields, from fundamental questions about our evolutionary origins to practical roadmaps for AI, regenerative medicine and biorobotics. Understanding the control systems implemented in living tissue could lead to major advances in biomedicine. If we truly grasp how to control the setpoints of bodies, we might be able to repair birth defects, induce regeneration of organs, and perhaps even defeat ageing (some cnidarians and planarian flatworms are essentially immortal, demonstrating that complex organisms without a lifespan limit are possible, using the same types of cells of which we are made). Perhaps cancer can also be addressed as a disease of modularity: the mechanisms by which body cells cooperate can occasionally break down, leading to a reversion of cells to their unicellular past – a more selfish mode in which they treat the rest of the body as an environment within which they reproduce maximally.

In the field of engineering, designers have traditionally built robots from dumb but reliable parts. By contrast, biology exploits the unreliability of components, making the most of the competency of each level (molecular, cellular, tissue, organ, organism, and colony) to look after itself. This enables an incredible range of adaptive plasticity. If we break the neuronal code in biology, we could begin to program behaviour into synthetic nervous systems, and build self-repairing, flexible robots. Recent work shows that agential cells with their own local agendas can already be guided to create entirely new, autonomous biorobots. And beyond robots’ bodies, these ideas also open up new approaches to machine learning and AI: raising prospects for architectures based on ancient and diverse problem-solving ensembles beyond brains, such as those of bacteria and metazoa.

This emerging confluence of developmental biology, neuroscience, biophysics, computer science and cognitive science could have profound and potentially transformative applications. Top-down strategies exploiting – in effect, collaborating with – biology’s native intelligence could enable transformative progress in areas oppressed by a narrow focus on molecular and genetic detail. With that in mind, we call on biologists to embrace the intentional stance: treating circuits, cells and cellular processes as competent problem-solving agents with agendas, and the capacity to detect and store information – no longer a metaphor, but a serious hypothesis made plausible by the emergence of intelligent behaviour in phylogenetic history. If we can come to recognise intelligence in its most unfamiliar guises, it might just revolutionise our understanding of the natural world and our very nature as cognitive beings.

Does Russia’s Invasion of Ukraine Constitute Biological Warfare?


Invading in the midst of a pandemic surely invites this discussion

Ukrainian troops help civilians cross the shelled bridge connecting the town of Irpin and Kiev in Ukraine.

War can be the cause of, and the perpetuation of, a public health emergency. The current Russian-Ukrainian conflict is no exception, and warrants a discussion of whether invading a country in the midst of a global pandemic constitutes an act of biological warfare. A look back in history can help us explore this question.

The First Act of Biological Warfare: The “Black Death”

In the year 1346, in the port city of Kaffa (now modern-day Theodosia) on the Crimean Peninsula of the Black Sea, the consequences of a different war were unfolding. In that time, Italian notary Gabriele de’ Mussi wrote:

“One infected man could carry the poison to others and infect people and places with the disease by look alone. No one knew, or could discover, a means of defense…the scale of the mortality and the form which it took persuaded those who lived…that the last judgement had come.”

We know now that de’ Mussi was in fact describing the horrors of Yersinia pestis, in what came to be called bubonic plague or the Black Death.

The city of Kaffa came under attack by a Mongol army controlled by Kipchak khan Janibeg, a descendant of Genghis Khan. Janibeg laid siege to the city to remove Genoese forces from an important defensive position in order to alter the European sphere of trade influence. But the Mongols miscalculated the level of resistance and the war dragged on for years, until, as de’ Mussi later wrote:

“…the whole army was affected by a disease which overran…and killed thousands every day…all medical advice and attention was useless.”

Janibeg eventually called off the siege, but not before ordering that the bodies of soldiers felled by the plague be launched via catapult into the city in the hopes of decimating the population. This has been seen as the first intentional act of biological warfare in recorded history, and it contributed to the explosion of arguably the most devastating pandemic in world history. It has been postulated that Italians fleeing the carnage on ships brought the plague to their home ports (among other routes of transmission). Within a year, the plague had gained a firm grip on the European continent. Within five years, by some estimates, as much as 40% of the global population had been killed, corresponding to roughly 200 million people.

Flash Forward to Modern Day: The COVID-19 Pandemic

Though over 600 years have passed since that pandemic (to use the modern term), we find ourselves in the midst of a new war during a global pandemic in the same region where the Black Death exploded. As expected, the risk of greater spread of COVID-19 and threats to an already strained healthcare system are significant.

Though data from Russia are scant, as of December 2021 the Russian Ministry of Defense reported COVID-19 vaccination rates in the military of upwards of 95%, with 25% having received boosters. Despite this, the Russian army is not invincible, especially in the face of Omicron, and cases have reportedly been spreading through units. Unconfirmed reports have estimated that the Russian military has been losing almost 300 troops daily to injury or death. With images of soldiers being left on the battlefield and of military units refusing to fight and surrendering, one has to wonder whether COVID-19 is a factor.

Potentially infected Russian troops create a health hazard as they increasingly mix with the Ukrainian population, engaging in warfare in close quarters and being taken as prisoners by the Ukrainian army (or vice versa). The Ukrainian population has a low COVID-19 vaccination rate: only 35% are fully vaccinated, and just over 36% have received at least one dose. With such low vaccination rates, COVID-19 can continue to spread. Like many other countries, Ukraine had a significant surge in cases in November and February during the Omicron spike, but that doesn’t preclude another surge or the emergence of new variants. And while the Ukrainian military reportedly has a 99% vaccination rate, its ranks have increasingly been infused with civilians defending their homeland, many of whom are unvaccinated. The close contact they will have with Russian prisoners will almost certainly increase the likelihood of continued COVID-19 transmission. Additionally, having people cramped indoors in bomb shelters and subway stations without access to health services poses a serious risk. As the remaining Ukrainian population huddles together for safety and over 1.7 million people (and counting) flee the country into neighboring European lands — some of which, like Poland, have waived their standard coronavirus quarantine and testing requirements for refugees — such movements may tragically create more opportunities for COVID-19 to spread, much like the Black Death.

Healthcare workers in Ukraine must not only contend with traditional battlefield casualties, but also the consequences of additional COVID-19 infections. This becomes exceptionally challenging given that the oxygen supply to Ukrainian hospitals is already at the point of exhaustion. Moreover, electrical shortages from the invasion pose immense risk to hospital systems and supplies including ventilators, dialysis machines, and lights. Other critical supplies are increasingly unavailable. Médecins Sans Frontières (Doctors Without Borders) suspended normal operations in the country the last week of February, and staffing shortages are already apparent. The world is receiving reports of hospitals and healthcare workers deliberately being targeted by the Russian military, using internationally banned weapons, such as cluster munitions. As of today, the WHO confirmed at least 14 attacks on Ukraine’s healthcare facilities — killing 9 people and injuring 16 — and classified two more attacks as possible.

We posit that launching this invasion in the midst of the COVID-19 pandemic constitutes a form of biological warfare, intentional or not. Biological warfare occurs when a state uses a disease-causing agent in waging war. While the 1972 Biological Weapons Convention specifically bans microbial or other biologic agents for use other than peaceful or protective purposes, as well as the production of weapons or equipment designed to deliver these agents during conflict, the Soviet Union, parent of the current Russian government, has a long history of treaty noncompliance. While exacerbating the spread of COVID-19 may not have been Russia’s primary goal when invading Ukraine, it is an obvious side effect that Russian leaders had to be aware of when they made the decision to invade. Accordingly, this may constitute biological warfare, especially when hospitals and healthcare workers are indiscriminately targeted, violating international rules of war.

As the war grinds on, the world sits on a precipice. So far, impasse has led to escalation. When the war stops, the full impact on the health of the population will ultimately be revealed.

Vaccine Researcher Who Developed Tinnitus After COVID Shot Calls for Further Study


Gregory Poland, MD, advocates both vaccination and better understanding of possible side effect

A photo of Gregory Poland, MD

Gregory Poland, MD, director of the Mayo Clinic’s Vaccine Research Group in Rochester, Minnesota, remains a steadfast vaccination advocate — even though he developed tinnitus soon after receiving his second dose of COVID vaccine.

A little more than a year ago, Poland was driving back from the hospital after receiving his second shot when he nearly veered out of his lane.

“It was like someone suddenly blew a dog whistle in my ear,” Poland told MedPage Today. “It has been pretty much unrelenting.”

Since then, Poland said he has been experiencing what he describes as life-altering tinnitus, or ringing in the ear. It occurs in both ears, but is worse in the left than in the right.

He remains steadfast that opting to receive his booster — after which his tinnitus briefly disappeared but then returned at a slightly higher pitch that made it just a bit less bothersome — was the right move. After all, it would be “way too ironic” for a prominent vaccinologist to die of COVID, he said. He also worried about the possibility of contracting COVID and spreading it to his patients.

Yet Poland realizes his life may never be the same, and that many others may be grappling with the same reality. He continues to receive emails from other individuals across the country and around the world who say they have also developed tinnitus after COVID vaccination.

Poland believes there may be tens of thousands of people affected in the U.S. and potentially millions worldwide. He feels strongly that more research should be done to determine what caused these symptoms and what can be done to help people desperate for relief.

“What has been heartbreaking about this, as a seasoned physician, are the emails I get from people that, this has affected their life so badly, they have told me they are going to take their own life,” Poland said.

Troubling Symptoms

Poland said of his own symptoms that he “can only begin to estimate the number of times I just want to scream because I can’t get rid of the noise or how many hours of sleep I’ve lost.” The noise he hears is “particularly loud at night when there are no masking sounds.”

On a recent evening, he had an especially difficult moment. Poland, a self-described lover of nature and the outdoors, realized that he may never be able to hear the silence of nature again, which brought tears to his eyes.

He said that he finds some comfort in his 14- to 16-hour work days that have helped him to not focus on the noise that won’t cease.

“It’s something that deserves attention,” Poland said, pointing to the effort that has, rightfully in his view, gone into defining the risk of myocarditis post-vaccination.

Thankfully, myocarditis often resolves within a few days of treatment, Poland noted. But with tinnitus, symptoms can persist.

The American Tinnitus Association describes the condition as audiological and neurological. Tinnitus can be acute or chronic, and many cases can be extreme and debilitating. Currently, there is no cure for most types of the condition, though there are treatment options to help patients live more comfortable and productive lives, according to the association.

Is There a Link?

Elliott Kozin, MD, a neurotologist at Massachusetts Eye and Ear in Boston, told MedPage Today in an email that there are “ongoing research efforts to understand if COVID-19 vaccines may be related to various auditory complaints, including hearing loss and tinnitus.”

Kozin said there are “no definitive studies on the subject.” Still, some research has shown evidence of neurological complications following COVID vaccination. For instance, the CDC has acknowledged rare reports of Guillain-Barré syndrome (GBS) following vaccination with the Johnson & Johnson vaccine, and the FDA has warned about the risk.

And in a recent Vaccine Adverse Event Reporting System (VAERS) analysis reported in the Annals of Neurology, tinnitus was among the most commonly reported adverse neurological events following vaccination. But its authors noted that rates of neurological adverse events were far higher following SARS-CoV-2 infection than after vaccination.

A MedPage Today search of the VAERS database yielded more than 13,000 results for tinnitus following COVID vaccination with mRNA vaccines. However, the database specifies that, for any reported event, a cause-and-effect relationship has not been established.

A spokesperson for CDC told MedPage Today in an email that the agency is “aware of reports of tinnitus occurring in temporal association with mRNA COVID-19 vaccination.”

“Tinnitus is a common condition, heterogenous in nature, and has many causes and risk factors,” the spokesperson added. “Hundreds of millions of people have received mRNA COVID-19 vaccination under the most intensive monitoring in U.S. history. Currently, the data from safety monitoring are not sufficient to conclude that a causal relationship exists between vaccination and tinnitus.”

Kozin said two lines of research are needed: prospective human studies and well-designed animal studies. “Without studying symptomatic and asymptomatic individuals, it is challenging to understand the overall risk,” Kozin said regarding the need for prospective studies. “Animal studies may allow us to better understand causation as one can readily control administration of vaccine versus a placebo, as well as study auditory changes that occur on behavioral, physiologic, and cellular levels.”

Poland also believes more research is needed, though he, too, cautioned against rushing to conclusions.

“Temporality is not causality,” Poland said. “Rather, it forms a hypothesis, and then what you do is carefully collect information to determine [whether] this potential syndrome or side effect [is] above and beyond the background rate before there was COVID or a COVID vaccine, and is the rate different in people who got the vaccine and people who didn’t.”

“My own best guess is that this may be an off-target inflammatory response, inflammation of the temporal lobe area of the brain where sounds are generated or made sense of,” Poland said.

What Can Be Done

Kozin said that, following the administration of any new medication, including the COVID-19 vaccine, “individuals should pay attention to symptoms of hearing loss, tinnitus, ear ‘fullness,’ and dizziness,” and seek “prompt evaluation by a primary care provider, otolaryngologist, and/or audiologist.”

“Sometimes hearing symptoms are subtle,” Kozin said. “For example, an individual may primarily experience tinnitus and not realize that it is also accompanied by hearing loss.”

“The first step is to visit a medical provider and obtain a formal hearing test,” he added. “In some circumstances, such as sudden hearing loss, steroids may be given if a diagnosis is made soon after the onset of symptoms.”

For his part, Poland believes that ongoing transparency is essential to continuing to build trust and confidence in vaccines.

He stressed that his story is not meant to frighten others or discourage them from getting vaccinated.

Nearly 1 million Americans have died of COVID, which can be prevented by a free vaccine and a 25-cent mask, he said. And there have also been reports of tinnitus following COVID itself.

Poland added that he would absolutely receive the COVID vaccine again because a wise person makes decisions on the balance of risks and benefits, not on fear.

Moving Forward

With the possibility that Americans could be advised to receive a fourth shot in the near future, or that COVID vaccines could become recommended on an annual basis or even more frequently, Poland said he is hopeful for more options.

Given his personal situation, he will look to protein subunit vaccines that are in development but not yet authorized by the FDA, such as those from Novavax, Medicago, and Sanofi.

A spokesperson for Pfizer, one of the makers of mRNA vaccines, said the following in a statement provided to MedPage Today: “We take adverse events, that are voluntarily reported by HCPs and individuals following vaccination with our COVID-19 vaccine, very seriously. Tinnitus cases have been reviewed and no causal association to the Covid-19 vaccine has been established.”

“To date, about 3 billion of our COVID-19 vaccines have been delivered globally,” the spokesperson added. “It is important to note that serious adverse events that are unrelated to the vaccine are unfortunately likely to occur at a similar rate as they would in the general population.”

Moderna, which also makes an mRNA vaccine, did not immediately respond to a request for comment.

Though tinnitus can sometimes resolve within several months or a year, that hasn’t yet happened for Poland. But he tries to keep things in perspective.

“Tinnitus is often associated with hearing loss, and I have my first grandchild and I want to hear him, all the things he thinks about as he grows up,” Poland said. “I’d encourage him to get the vaccine. But I don’t want this to happen to him.”

Antidepressants Often Ineffective During Pregnancy, in New Moms


Antidepressants don’t always help ease depression and anxiety in pregnant women and new moms, according to a new study.

“This is the first longitudinal data to show that many pregnant women report depression and anxiety symptoms during pregnancy and postpartum, despite their choice to continue treatment with antidepressants,” said senior author Dr. Katherine Wisner. She directs the Asher Center for the Study and Treatment of Depressive Disorders at Northwestern University Feinberg School of Medicine in Chicago.

The new research “lets us know these women need to be continually monitored during pregnancy and postpartum, so their clinicians can tailor their treatment to alleviate their symptoms,” Wisner said in a university news release.

For the study, 88 pregnant U.S. women completed assessments every four weeks from the time they joined the study until delivery, and at six and 14 weeks after giving birth.

During pregnancy, 18% of the women had minimal, 50% had mild and 32% had clinically relevant symptoms of depression, the study found.

Despite taking antidepressants called selective serotonin reuptake inhibitors (SSRIs), many women had lingering depression throughout their pregnancy and after giving birth.

Anxiety was also common in treated women, with symptoms worsening over time in some, according to findings published March 4 in the journal Psychiatric Research and Clinical Practice.

“Psychological and psychosocial factors change rapidly across childbearing,” said co-author Dr. Catherine Stika, a clinical professor of obstetrics and gynecology at Northwestern. “Repeated screenings will allow your clinician to adapt the type and/or intensity of intervention until your symptoms improve.”

The researchers also noted that depression in mothers affects their babies.

“This is key as children exposed to a depressed mother have an increased risk of childhood developmental disorders,” Wisner said.

The study also found that pregnant women taking antidepressants had other health issues such as excess weight, infertility, migraines, thyroid disorders and asthma. A history of eating disorders predicted higher levels of depression.

Depression and anxiety affect 20% of women during pregnancy and after birth. That translates to an estimated 500,000 U.S. women who have or will have mental illness during pregnancy, the researchers said.

More information

The March of Dimes has more about depression during pregnancy.

SOURCE: Northwestern University, news release, March 4, 2022