Novel Approach May Help Prevent Genetic Kidney Disease in Mice


About half a million people in the United States alone suffer from autosomal dominant polycystic kidney disease (ADPKD). There is no cure, and researchers are working to develop therapies for the disease. For several decades, researchers have known that mutations in the PKD1 gene, which encodes the polycystin-1 (PC1) protein, cause the disease in about 80% of cases. However, the gene is too large for its full-length product to be delivered through gene therapy strategies. Now, a research team led by Laura Onuchic, MD, postdoctoral researcher in the Yale department of cellular & molecular physiology, and Michael Caplan, MD, PhD, chair and C.N.H. Long professor of cellular & molecular physiology and professor of cell biology, has found that just a small piece of this protein might hold the key to preventing the disease.

The findings are published in Nature Communications in an article titled, “The C-terminal tail of polycystin-1 suppresses cystic disease in a mitochondrial enzyme-dependent fashion.”

“Our research shows that a tiny fragment of the PC1 protein—just 200 amino acids from the very tail end of that protein—is enough to suppress the disease in a mouse model,” said Caplan, who was principal investigator of the study. “Our work will provide new insights into the underlying disease mechanisms for polycystic kidney disease and reveal new avenues for developing therapies.”

A little over a year ago, a team led by Stefan Somlo, MD, C.N.H. Long professor of medicine (nephrology) and professor of genetics, found that if they removed the PC1 protein in mouse models, the kidneys became enlarged.

“They did a really beautiful experiment showing that in mouse models of polycystic kidney disease, where these animals get huge cysts in their kidneys, even when those cysts have already developed, turning the expression of the normal protein back on makes the cysts go away,” said Caplan.

“The problem with this as a therapeutic strategy is that this protein is 4300 amino acids long,” added Onuchic. “It’s too big for gene delivery.” The solution, Onuchic and Caplan say, may be to bring gene therapy for ADPKD down to a manageable scale.

In gene therapy, researchers try to deliver the sequence that encodes their gene of interest into the cells where they want it expressed. This usually involves viral vectors. “Viruses can be the Trojan horses that deliver your gene of interest into the cell you need to get it into, but those viruses only have a certain amount of room in their trunk,” said Caplan. Because the PC1 protein is massive, this poses a problem for treating polycystic kidney disease. “PC1 is way too big to fit in the Volkswagen Beetle that is most gene therapy vectors, but now just this 200 amino acid piece can fit in the glove compartment.”
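To make the size problem concrete, here is a back-of-the-envelope calculation. It is only a rough sketch: the ~4.7 kb packaging limit below is a commonly cited figure for adeno-associated virus (AAV) vectors, not a number from the study.

```python
# Back-of-the-envelope check of why full-length PC1 is too big for common
# gene therapy vectors while the C-terminal fragment is not.
# The ~4.7 kb AAV packaging limit is a commonly cited figure and is an
# assumption here, not a number taken from the study.

CODON_LENGTH_NT = 3  # nucleotides per amino acid

def coding_sequence_kb(num_amino_acids: int) -> float:
    """Approximate coding-sequence length in kilobases (ignores the promoter,
    stop codon, and other regulatory elements a real construct would need)."""
    return num_amino_acids * CODON_LENGTH_NT / 1000

AAV_CAPACITY_KB = 4.7  # typical adeno-associated virus payload limit

for name, length_aa in [("full-length PC1", 4300), ("C-terminal fragment", 200)]:
    cds_kb = coding_sequence_kb(length_aa)
    verdict = "fits" if cds_kb < AAV_CAPACITY_KB else "does not fit"
    print(f"{name}: ~{cds_kb:.1f} kb coding sequence -> {verdict} in a ~{AAV_CAPACITY_KB} kb vector")

# Prints roughly 12.9 kb (does not fit) vs. 0.6 kb (fits).
```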

In their new study, the team used a mouse model that they had genetically modified to allow them to turn off the genes associated with polycystic kidney disease. In other words, they genetically induced the disease in these models by creating mutations in the genomes of the mice. As a result, the models developed cysts. Then, the team turned on the expression of the 200 amino acid-long fragment of the protein. “Imagine flipping a light switch where one light goes off and one light goes on,” said Caplan. “We’re turning off the normal polycystic kidney disease gene and turning on the expression of just this tiny piece of the protein.”

The team found that this dramatically reduced the size of the cysts. “Even though we got rid of the full-length PC1 protein, which would normally cause significant cystic disease, just turning on this tiny piece was enough to suppress the disease,” he said.

The team plans to continue pursuing the use of gene therapy, initially in mouse models, for just the 200 amino acid piece, with hopes that their work will one day benefit humans. “From a therapeutic perspective, it’s really exciting that we’ll hopefully be able to at least slow down disease progression,” said Onuchic.

Our new brains: neurotechnology advances that could change everything


Brain science is improving rapidly — and so is visionary neurotech.


When considering the possible impact of future technologies on our lives, it’s hard to overstate the importance of neurotechnology. The brain is our ultimate filter on the world, modulating everything that gives meaning to our lives. As anyone who has endured severe depression or had a sublime psychedelic experience can attest, tiny changes in our brain chemistry can dramatically alter our lives.

And those effects may look rather crude compared to the likely impact of new neurotechnologies. From tools that enhance our self-understanding, to brain-computer interfaces that tackle disease and disability, all the way to sci-fi-sounding capabilities for whole brain emulations — neurotech is undergoing a blossoming spring, accelerated by AI advances.

At my non-profit, the Foresight Institute, we help ensure that the long-term applications of such technologies are beneficial. Here are a few developments that have the potential to impact our lives in the years to come, each presenting complex ethical implications.

Better Tools = Better Understanding

The human brain is one of the most complex things known to exist in the universe, and we are still in the very early stages of understanding it. Crucially, unlike many things scientists study, we can’t just “take it apart” — not only because it’s usually “in use” but also because much of the architecture that gives rise to its function is incredibly small, intricate, and delicate.

Fortunately, neuroscientists like Ed Boyden from MIT are working on better tools to help our understanding. Ed wondered: what if, rather than painstakingly magnifying tiny structures in the brain with a microscope, we could instead enlarge them to make them easier to study?

It turns out that, similar to how polymers make diapers swell, it is possible to enlarge relevant brain samples using hydrogels. This “expansion microscopy” is a brand new tool for blowing up and imaging samples of complex biological structures such as organs and tissues.

Equipped with such techniques, we now have a better shot at tracking down the complex molecular changes in the brain that lead to disorders like epilepsy and Alzheimer’s.

Brain Cell Replacement

Many diseases, including Alzheimer’s and other neurological disorders, are problems of old age. Jean Hebert from the Albert Einstein College of Medicine in New York has made it his mission to stop aging in the brain before it starts.

Hebert observed that over the years, advances in medical technology have allowed for the replacement of various failing organs in the body with organs cultivated from donors or grown in the lab. The human brain, however, remained irreplaceable.


Immunofluorescence image of graft in mouse brain tissue (scale bar, 1 mm).

Hebert is exploring the possibility of gradually replacing the neocortex — the brain area thought to be responsible for attention, thought, perception and episodic memory — while retaining human identity. Since the neocortex is plastic, meaning that brain functions can relocate within it, if tissue in an area responsible for a particular brain function is damaged, tissues in another area could, in principle, take over that function.

Early experiments have introduced donor tissue into mouse cortices, and the host cortex accommodated the graft successfully enough for it to respond to visual stimuli. It is very early days in this huge undertaking, but if it is successful, the approach would not only preserve brain function but also perhaps someday facilitate the acquisition of new skills and knowledge later in life.

Brain-Computer Interfaces (BCIs)

The plasticity of the brain has additional potential to give disabled people back some of their lost abilities. For instance, David Eagleman’s team at Stanford has created a Sound Awareness wristband that allows people to “hear” through their skin, by converting sound into tactile feedback, enabling deaf users to perceive spoken words.


Sound Awareness wristband.

New brain-computer interfaces (BCIs) are under development that seek to overcome the limitations that held back traditional devices. Neuralink, for instance, eliminates the cumbersome machinery and wiring that standard headsets require by using wireless brain implants. These are embedded into the skull and transmit data via Bluetooth. Neuroscientist Sumner Norman from Caltech is exploring the feasibility of functional ultrasound as a less invasive solution, and has returned encouraging results using ultrasound neuroimaging to predict movement.

The widespread adoption of future iterations of less invasive BCIs could address mental health conditions and brain disorders, and may empower us to control prosthetic limbs and other devices.

Brain-inspired AI

Anyone using ChatGPT and related apps will have experienced the rapid acceleration of AI technologies. These breakthroughs are helping us better understand the brain, even as the brain helps us develop better AI.

Chris Eliasmith is a cognitive scientist who has developed a large-scale neural model of the human brain, called the Semantic Pointer Architecture Unified Network (SPAUN), which can perform multiple cognitive tasks, such as recognizing numbers, copying patterns, and solving simple arithmetic problems. He thinks that building such large-scale brain models can not only help us better understand the brain and develop new treatments for neurological disorders, but it may also lead us to create more advanced artificial intelligence systems.

By mimicking the human brain’s structure and function, it may be possible to create more trustworthy AI systems.

As AI systems become more complex, the need for safety also increases. Physicist turned AI safety researcher Steven Byrnes is pioneering a new field of brain-inspired AI safety. Byrnes’ research focuses on developing neurally inspired algorithms that can learn from experience, adapt to changing environments, and make decisions in a way that is similar to human cognition. He believes that by mimicking the human brain’s structure and function, it may be possible to create more trustworthy AI systems.

Whole Brain Emulation (WBE) & Artificial General Intelligence (AGI)

Looking further ahead, Whole Brain Emulation (WBE) is a potential future technology that has captured the interest not only of science fiction authors but also of serious scientists. The original Whole Brain Emulation Roadmap, co-authored in 2008 by neuroscientist Anders Sandberg and colleagues, proposes several potential pathways, such as beginning with the emulation of individual neurons as a first step towards simulating neural networks and eventually entire brains.

Brain mapping could one day allow us to create models of the brain’s structure and function.

Back in 2008, given the various technological bottlenecks still to be overcome, discussions about WBE assumed it was possible, but not within this century. Since then, many of the technical capabilities required for making progress, such as brain scanning and mapping technologies, have been rapidly improving. E11, for instance, is a new moon-shot effort to make brain mapping easier and more accessible. Data obtained from brain mapping could one day allow us to create models of the brain’s structure and function. These could then be used for developing algorithms that simulate neural activity — a first step on the path to WBE.

At the same time, rapid AI advances have also led to shorter proposed timelines until we get to Artificial General Intelligence (AGI) — current community predictions on forecasting platform Metaculus project the development of AGI as early as 2032.

With the rise of AI safety concerns, some researchers believe that accelerating WBE development could reduce the risk of unaligned AGI by introducing human-aligned software intelligence first. The Foresight Institute’s WBE for AI safety workshop aims to explore the current state of WBE technology, possible strategies to accelerate development, and risks and ethical issues that may arise from such a strategy.

Differential neurotech

The rapid development of neurotechnology offers exciting possibilities for understanding the brain, treating neurological disorders, advancing intelligence, and, ultimately, creating new mind architectures.

As we explore this frontier, we need to address the ethical and societal implications in tandem if we want human minds, and other minds, to live fulfilling lives. It’s not too early to start that process now: these futures are wild, and they are coming fast.

A recent paper by Yu Takagi and Shinji Nishimoto showed that it’s possible to use generative AI to recreate, from human brain activity, images that match reasonably well what the subjects were looking at. Such technology could potentially help solve crimes, for instance by clarifying eyewitnesses’ often-clouded recollections of a suspect. But in the hands of the wrong people, profit-seeking corporations, and especially authoritarian governments, mind-reading technology could be devastating to human privacy.

Looking at the very long term, philosophers Nick Bostrom and Carl Schulman suggest that human minds merely occupy a tiny corner of a vast space of possible minds that could be created with AI. We may have to update our moral intuitions, which are based on assumptions about human nature, to account for new kinds of crimes, such as altering someone’s subjective experience without their consent.

The nascent field of “differential” neurotech development, aimed at preferentially advancing safety-enhancing technologies, may be a promising way to identify projects focused on beneficial neurotechnologies. Whether we will use neurotechnology to elevate our brains or to control them is ultimately up to us — or rather, to our current brains.

Benefits of insulin pens for the management of diabetes


Diabetes is an increasing public health problem, and insulin therapy is the cornerstone for the treatment of type 1 diabetes. In type 2 diabetes treatment, insulin therapy is used only when oral or other injectable agents fail to achieve glycaemic control.

Injecting insulin with a syringe or pen is the preferred insulin delivery method for most patients with diabetes, although inhaled insulin is also available. 

The following are the benefits of pen devices:

  • Insulin pens were developed to help address issues including fear of injections, poor dose accuracy, lengthy training time, lack of social acceptance, and difficulty of transport. As a result, they offer improved portability, dosing precision, mealtime flexibility, and convenience of delivery.
  • Compared to the vial and syringe, insulin pen devices confer increased treatment satisfaction and greater patient preference with better quality of life.
  • Pen devices are associated with improved costs of care, less reported injection pain, and improved patient self-management behaviours, including treatment adherence, as compared with the vial and syringe. Because of the greater ease of use of insulin pens, patients with visual impairment or reduced dexterity may especially benefit from using an insulin pen rather than a vial and syringe.

Sugar-powered implant produces insulin as needed


It could revolutionize diabetes management.


Swiss researchers have developed a sugar-powered implant that automatically produces insulin when blood glucose levels are high — potentially giving people with diabetes an easier, less-painful way to manage their condition.

Diabetes management: In people with type 1 diabetes (T1D), the body doesn’t produce enough (or any) insulin, the hormone that lets cells take up blood sugar and use it for energy. To prevent their blood glucose levels from climbing dangerously high, they need regular injections of synthetic insulin.

People who manage their diabetes manually must give themselves these painful injections multiple times a day. Those who use insulin pumps only have to deal with jabs when they change the catheter every 2 or 3 days, but they have to get used to having the pump on their person at all times.

In either scenario, people with T1D must check their blood sugar levels regularly, with finger pricks and/or a continuous glucose monitor — another device attached to the outside of their body.

When an electric current is applied to the cells, they produce and secrete insulin.

Sugar-powered implant: Researchers at ETH Zurich have now developed a fully implantable two-part diabetes management system, which automatically releases insulin when blood sugar levels are high and then stops releasing it when they return to normal.

The first part is a fuel cell, coated in alginate, an algae-based product approved for medical use. When implanted under the skin, the alginate soaks up fluids and allows glucose to enter the heart of the fuel cell, where copper-based nanoparticles split it into gluconic acid and a proton.

The proton triggers an electric circuit, powering the second part of the system: a capsule filled with artificial beta cells. The ETH Zurich team developed these cells in 2016, and when an electric current is applied to them, they produce and secrete insulin.

The testing: The system is designed so that the fuel cell only produces the electric current when blood glucose exceeds a certain level. Once the release of insulin brings blood sugar down enough, it stops producing the current.
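As a rough sketch of that closed-loop logic (current and insulin only while glucose is above a threshold), here is a toy simulation; all values are invented for illustration and none come from the ETH Zurich paper.

```python
# Toy simulation of the described closed-loop behavior: the fuel cell only
# powers the insulin-producing cells while glucose is above a threshold.
# All numbers are invented for illustration; none come from the ETH Zurich paper.

GLUCOSE_THRESHOLD = 10.0  # mmol/L; above this, the fuel cell produces current
INSULIN_EFFECT = 0.8      # glucose drop per time step while insulin is released
BASELINE_DRIFT = 0.3      # glucose rise per time step with no insulin (e.g. after a meal)

def step(glucose: float) -> tuple[float, bool]:
    """Advance one time step; return the new glucose level and whether insulin was released."""
    releasing = glucose > GLUCOSE_THRESHOLD  # current flows only above the threshold
    if releasing:
        glucose -= INSULIN_EFFECT            # stimulated beta cells secrete insulin
    else:
        glucose += BASELINE_DRIFT            # no current, no insulin
    return glucose, releasing

glucose = 14.0  # start in a hyperglycemic state
for t in range(15):
    glucose, releasing = step(glucose)
    print(f"t={t:2d}  glucose={glucose:4.1f} mmol/L  insulin {'on' if releasing else 'off'}")
```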

When the researchers tested the sugar-powered implant in mice with T1D, it worked as hoped, triggering the production of insulin as needed to keep blood glucose levels steady.

The researchers say the system can also produce enough electrical energy to communicate with a smartphone or other external device — this could give users and their doctors a way to monitor it using an app.

A schematic showing how the system works autonomously.

Looking ahead: The ETH Zurich team isn’t the first to develop a glucose-powered fuel cell — researchers have been exploring their use to power medical implants for decades — but it is the first to pair one with an implantable system that automatically generates insulin.

While their prototype showed promise in mouse tests, it’s still only a prototype, and it’s not clear how long a version designed for people would operate in a human body or how invasive the implantation and removal processes would be.

The researchers are now looking for partners to help them develop a version of the sugar-powered implant that could be approved for use in people with T1D — giving them a way to manage their diabetes that doesn’t involve constant monitoring and regular injections.

Harvard geneticists create an organism that is immune to all viruses


Using genetic engineering, researchers have made a virus-resistant E. coli.


Researchers at George Church’s Harvard lab have genetically engineered the bacterium E. coli to be totally immune to viruses.

In addition to blocking every virus the team has challenged it with thus far, their E. coli has also been designed so that its modified genes cannot escape into the wild, which does indeed sound like the plot of a lost Michael Crichton novel. (In fact, the parallels to Jurassic Park are there, but we’ll get to that.)

“We believe we have developed the first technology to design an organism that can’t be infected by any known virus,” genetics research fellow and study author Akos Nyerges said.

“We can’t say it’s fully virus-resistant, but so far, based on extensive laboratory experiments and computational analysis, we haven’t found a virus that can break it.”

“We believe we have developed the first technology to design an organism that can’t be infected by any known virus.” – Akos Nyerges

Production and protection: The main results of the study, published in Nature, could carry big implications for the future of bacteria-based production — for instance, using bacteria to make medicines.

Cells and bacteria can be used as little labs or factories, cranking out any number of small molecules and biological compounds. E. coli, with its well-understood genome and reputation as a workhorse, is used for the production of almost two dozen biopharmaceuticals, including insulin, and is also being used in making biofuels.

But while harnessing bacteria like E. coli can outsource complex chemistry to organisms for whom it is, ahem, second nature, it also leaves these processes vulnerable to viruses.

“Viral contamination in cell cultures remains a real risk with severe consequences: over the past four decades, dozens of viral contamination cases were documented in industry,” the authors wrote in their study.

Cutting codes: In 2022, a University of Cambridge team thought they had created a virus-resistant E. coli. But when Nyerges, research fellow Siân Owen, and graduate student Eleanor Rand challenged them with random viruses found around Harvard Medical School — including some from a rat nest and the nearby Muddy River — the bacteria proved far from invincible.

The Cambridge attempt hinged on designing the bacteria to make everything it needed using only 61 sets of genetic building blocks, called codons, as opposed to 64. Without those missing codons, the thinking went, the viruses wouldn’t be able to hijack the cells.

To make their virus-resistant E. coli, the team used a special kind of RNA.

This did not prove to be the case.

Rather than being hamstrung, the viruses merely brought in their own genetic pieces, doing an end-around the firewall and getting back to what they do best: infect, replicate, repeat. 

Rather than eliminate codons, the Harvard team decided to instead alter what the codons make.

Enter RNA: The new work centers around a specific type of RNA called transfer RNA (tRNA). 

The tRNA’s job is to recognize each codon and then add the correct amino acid to whatever protein is being created — kind of like putting a key component into a car on the factory line. The Cambridge team had deleted the codons TCG and TCA, along with the tRNA that recognizes them, from their bacteria. Both of those codons direct the tRNA to install serine, an amino acid, onto the protein being assembled.

The Harvard team went one step further, by adding in “trickster” tRNAs; when they see TCG or TCA, they install a different amino acid — called leucine — instead of serine.

“Leucine is about as different from serine as you can get, physically and chemically,” Nyerges said. 

When a virus busts through the door carrying TCG and TCA, the trickster tRNA slips it leucine instead of serine, creating non-functional virus proteins and blocking it from replicating. (The virus does bring its own tRNA to the party, but the Harvard team believes their cell’s tRNA outcompetes it.)
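To make the codon swap concrete, here is a toy translation example. The short “viral” sequence and the tiny codon table are hypothetical, included only to show how reading TCG and TCA as leucine instead of serine scrambles the protein a virus expects.

```python
# Illustrative only: translate a short, made-up "viral" coding sequence with
# (a) the standard genetic code and (b) a code in which TCG and TCA are
# reassigned from serine (S) to leucine (L), as in the engineered E. coli.
# The codon table is deliberately tiny -- just the codons this example uses.

STANDARD = {
    "ATG": "M", "TCG": "S", "TCA": "S", "GCT": "A",
    "AAA": "K", "GGC": "G", "TAA": "*",  # * marks a stop codon
}
# Engineered host: identical except TCG/TCA now install leucine.
REASSIGNED = {**STANDARD, "TCG": "L", "TCA": "L"}

def translate(dna: str, table: dict[str, str]) -> str:
    codons = [dna[i:i + 3] for i in range(0, len(dna), 3)]
    return "".join(table[c] for c in codons)

viral_gene = "ATGTCGGCTTCAAAAGGCTAA"  # hypothetical sequence, not a real virus

print(translate(viral_gene, STANDARD))    # MSASKG*  (the protein the virus "expects")
print(translate(viral_gene, REASSIGNED))  # MLALKG*  (serines swapped for leucines)
```

Because the earlier recoding work had already removed TCG and TCA from the bacterium’s own genes, only incoming viral genes get garbled this way.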

“It was very challenging and a big achievement to demonstrate that it’s possible to swap an organism’s genetic code, and that it only works if we do it this way,” Nyerges said.

The team thinks it would take a virus developing dozens of mutations — in specific places and at the same time — to hijack their E. coli.

The Harvard team added in “trickster” tRNAs that install a different amino acid. This creates non-functional virus proteins and blocks replication.

Genetic firewalls: Speaking of Michael Crichton, a hallmark of the author’s books is science slipping its bonds and wreaking havoc — think Jurassic Park. The researchers took this concern seriously — a bacterium that can resist all of its natural viral enemies could be a real problem in the wild.

To prevent their genetically engineered E. coli’s code from escaping, the researchers used two different safety mechanisms.

The first was to prevent horizontal gene transfer, a natural process that allows bacteria to swap genes with each other directly. To keep the engineered code from being co-opted by wild bacteria, the team changed all the leucine codons in their E. coli into TCG or TCA.

This isn’t a problem for the engineered cells’ trickster tRNA, which uses TCG and TCA to make leucine anyway. But in a non-engineered organism, TCG and TCA are for serine, not leucine. Using serine in place of leucine will lead to junk proteins, genetic code “gibberish,” as Nyerges put it. And if a trickster tRNA itself gets into a normal cell, its amino acid swapping will kill the new cell, hopefully stopping the leak.

For the team’s other safeguard, we go back to Jurassic Park. In the book and film, the animals are made dependent on an amino acid called lysine that the park gives them; without the lysine, they die. Theoretically, this means any dino that escaped would’ve been on borrowed time.

The team’s E. coli was also made to be dependent on an amino acid, one which doesn’t exist outside of the lab. No amino acid, no bacteria.

“We can’t say it’s fully virus-resistant, but so far, based on extensive laboratory experiments and computational analysis, we haven’t found a virus that can break it.” – Akos Nyerges

Next steps: The team next wants to use their codon engineering to create infection-resistant bacteria that can make important materials that would otherwise require complicated chemistry, without the constant risk of contamination by even a single virus.

The work may also prove foundational for genetic engineering going forward.

“Our results may provide the basis for a general strategy to make any organism safely resistant to all natural viruses and prevent genetic information flow into and out of genetically modified organisms,” the authors wrote.

How close are we to reversing paralysis?


People with “permanent” paralysis are regaining the ability to use their limbs.


If you want to do so much as lift a finger, an electrical signal needs to be able to travel from your brain down to the digit. A problem with the nervous system anywhere along this path — in your brain, spinal cord, arm, etc. — can cause paralysis in your hand, meaning you can no longer move it voluntarily.

Nearly 2% of people living in the US experience some level of paralysis in their arms or legs, but thanks to groundbreaking innovations in neuroscience, we’re seeing that forms of paralysis long assumed to be permanent can be reversed — and even more exciting breakthroughs are on the horizon.

Brain computer interfaces

How it works: Brain computer interfaces (BCIs) can bypass damage in the spinal cord, limbs, or nerves, using electronics to get commands from the brain to muscles.

The process starts by detecting the electrical signals in the brain when a person thinks about moving their paralyzed body part — this can be done using implanted electrodes or an electroencephalogram (EEG) cap.

The signals are then sent to a computer that turns them into commands for the muscles. Those signals are delivered to the muscles using electrodes positioned on the outside of the affected limb or under the skin, right on top of the muscle.
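In code, a heavily simplified version of that pipeline might look like the sketch below; the functions, signal shapes, and numbers are placeholders, not any real BCI system’s interface.

```python
# Heavily simplified sketch of the BCI pipeline described above: record neural
# signals, decode the intended movement, convert it into a stimulation command.
# Every function, signal shape, and number here is a placeholder for illustration.

import numpy as np

def record_neural_activity(n_channels: int = 64, n_samples: int = 256) -> np.ndarray:
    """Stand-in for an implanted electrode array or EEG cap (here: random noise)."""
    return np.random.randn(n_channels, n_samples)

def decode_intent(signals: np.ndarray, weights: np.ndarray) -> str:
    """Toy linear decoder: map per-channel signal power to 'grasp' or 'rest'."""
    features = (signals ** 2).mean(axis=1)  # crude power estimate per channel
    score = float(features @ weights)
    return "grasp" if score > 0 else "rest"

def stimulation_command(intent: str) -> float:
    """Turn the decoded intent into a stimulation amplitude (mA) for electrodes
    placed on the skin or over the muscle."""
    return 2.5 if intent == "grasp" else 0.0

weights = np.random.randn(64)  # a real decoder would be trained on the user's own recordings
signals = record_neural_activity()
intent = decode_intent(signals, weights)
print(intent, stimulation_command(intent), "mA")
```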

Where we are now: BCIs aren’t science fiction — we’ve seen many examples of these devices actually restoring arm and leg movement in people paralyzed by spinal cord injuries. Still, BCIs are a very experimental approach to reversing paralysis, limited mostly to small trials in labs.

The drawbacks: Implanted electrodes are more invasive than EEG caps, but they can record brain signals at much higher resolutions, which can make it easier for a BCI to accurately translate thoughts into commands.

But scar tissue tends to form around implants, degrading the signal strength over time — the longest we’ve seen an implanted electrode power a BCI is seven years.

Several promising implant advances could extend the durability of BCIs.

Looking ahead: A number of promising implant advances could extend the durability of BCIs and help get them out of labs and into patients.

Tech company Synchron is currently trialing a matchstick-sized implant called the “Stentrode” in people — it’s designed to record signals from inside a blood vessel in the brain, which Synchron says makes the implantation procedure much less risky and eliminates the issue of scar tissue.

A new implant developed at the University of Cambridge and tested in mice places a layer of stem cells between the electrodes and living tissue. These cells can be programmed to become brain cells, spinal cells, or muscle cells, depending on where the implant is being placed.

“By putting cells in between the electronics and the living body, the body doesn’t see the electrodes, it just sees the cells, so scar tissue isn’t generated,” said co-lead researcher Damiano Barone.

A Stentrode implant. Credit: Synchron

Spinal stimulators

How it works: Spinal cord injuries are a common cause of paralysis, but you don’t always need a BCI to completely reroute a signal around the damage.

Spinal stimulators deliver precise electrical stimulation to the nerves of the spinal cord — this general stimulation appears to boost any natural brain signals that do make it through the damage, helping restore movement in paralyzed limbs.

The electrodes connect to a battery pack, typically implanted in a patient’s abdomen or the top of their buttocks. The stimulation is usually controlled by an external device, such as a tablet, and it can be adjusted for different activities. 

Where we are now: Over the past decade, we’ve seen significant progress in reversing paralysis with spinal stimulation, with some patients regaining some movement in their lower extremities in 2014.

Now, at least a dozen people with spinal cord injuries that had left them paralyzed from the waist down can walk again thanks to a spinal cord stimulator developed in Switzerland — in some cases, they were able to take steps on a treadmill the day after implantation. 

It can also work for other causes of paralysis. Two patients with partial paralysis due to stroke were able to regain some voluntary movement thanks to spinal cord stimulators placed in their necks by researchers from Carnegie Mellon University and the University of Pittsburgh.

Some of their added mobility was even retained for weeks after the implants were removed when the study ended.

Spinal stimulators boost the brain signals that make it past the site of a spinal cord injury.

The drawbacks: The Swiss spinal cord stimulator uses an electrode array that’s 6 centimeters long. Unlike spinal stimulators designed to treat back pain, which can be put in place with a needle, patients must undergo invasive surgery to have this larger device implanted.

Once in place, researchers say the electrode should be able to deliver stimulation indefinitely, but patients will need to undergo surgery to have the battery pack that powers it — which was implanted in their abdomens — replaced every nine years.

The spinal stimulator used in the stroke patients can be implanted with minimally invasive surgery, but was removed less than a month after implantation, so we don’t know how well it would work long term.

Looking ahead: The use of spinal stimulators to reverse paralysis is still new territory, and larger trials are needed to prove their efficacy and safety — the Pittsburgh team’s ongoing trial is expected to enroll 13 more people, and the Swiss team is gearing up for trials involving 50 to 100 participants.

A researcher in the Pittsburgh trial explaining the implantation surgery to one of the first two participants. Credit: Tim Betler / UPMC / University of Pittsburgh Schools of the Health Sciences

Stem cells

How it works: We have about 200 different types of cells in our bodies, but before they differentiated into one type or another, they all started out as stem cells. When stem cells are injected at the site of a paralysis-causing injury, they can help regenerate damaged tissue.

Where we are now: In 2016, a small trial out of Stanford University found that injecting stem cells into the brains of stroke survivors could improve their motor function, giving some participants who had been in wheelchairs the ability to walk.

Stem cell injections into the spine have also successfully reversed paralysis in some people with spinal cord injuries, and adding stem cells to the patches used to treat the birth defect spina bifida appears to help prevent paralysis in children.

Stem cells can help regenerate damaged tissue at the site of a paralysis-causing injury.

The drawbacks: Stem cell treatments are still experimental, and while we’ve seen some trial participants gain significant mobility, others in the same trials see little to no benefit at all — the therapy simply isn’t reliable.

In some cases, patients even end up in a worse position after stem cell treatment — a type of stem cell transplant that had shown promise in animal models caused a man with paralysis additional pain and led to the development of a benign tumor in his spine years later.

“The worst-case scenario is not necessarily that [stem cell therapy] doesn’t work,” Nanette Hache, one of the man’s physicians and a professor of radiology at Memorial University of Newfoundland, told STAT in 2019. “There can be other complications, such as tumor formation.”

Looking ahead: With more research, we might be able to answer the many lingering questions preventing stem cells from being a reliable, safe way to reverse paralysis, such as what types of cells to use, in what dosages, and when and how to administer them.

Today’s solutions

BCIs, spinal stimulators, and stem cells are a few of the most promising avenues to reversing paralysis, but they aren’t the only ones — researchers are also seeing encouraging results from gene therapies, synthetic molecules, and nerve transfers.

While researchers continue to develop these therapies to reverse paralysis, we can lean on technologies such as exoskeletons, home robots, and mind-controlled prosthetics to improve the lives of people living with the condition.

If there is a maximum human lifespan, we’ve yet to reach it


The record for longest life is most likely to be broken by people in one particular demographic.


The current record for longest life is likely to be broken in the coming decades, according to a new study that found we’ve yet to reach the maximum human lifespan — if one even exists.

The question: The average human life expectancy has been rising in most countries for decades, thanks to better healthcare, hygiene, and diets, but the maximum human lifespan hasn’t changed since 1997, when France’s Jeanne Calment (born in 1875) died at the age of 122.

This has led some to speculate that perhaps there’s a set limit on how long people can live — the average lifespan might continue to trend toward this maximum age, but no matter how many advances we see as a society, people just aren’t going to survive beyond it.

“If there is a maximum limit to the human lifespan, we are not yet approaching it.” – David McCarthy and Po-Lin Wang

What’s new? While much of the research on the maximum human lifespan has focused on biology, a new study, published in PLOS One, approached the topic from the perspective of statistics — and reached a heartening conclusion for anyone hoping to live a long life.

“Our results confirm prior work suggesting that if there is a maximum limit to the human lifespan, we are not yet approaching it,” write authors David McCarthy and Po-Lin Wang from the University of Georgia and the University of South Florida, respectively.

The approach: For their study, the authors analyzed mortality records from the US and 18 other industrialized nations, looking at people with a shared birth year.

They noticed that, while the dominant pattern throughout history is the average age at death skewing higher, there are also periods when the maximum age appears to jump up, a phenomenon dubbed “mortality postponement.”

They noticed one example of this in women born between 1855 and 1875, and they see signs of it happening in groups born between 1900 and 1950, too. We just haven’t seen any of those people break the maximum human lifespan record yet because most of them are still too young.
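One way to picture mortality postponement is with a toy Gompertz hazard model in which the whole old-age mortality curve shifts a few years later for a more recent cohort. This is a generic illustration with invented parameters, not the model McCarthy and Wang fit.

```python
# Generic illustration of "mortality postponement": if the whole old-age hazard
# curve shifts a few years later for a more recent cohort, the chance of reaching
# extreme ages rises sharply. The Gompertz parameters and the 5-year shift are
# invented for illustration; this is NOT the model fitted in the PLOS One paper.

import math

def gompertz_survival(age: float, a: float = 7e-5, b: float = 0.085, shift: float = 0.0) -> float:
    """P(survive from birth to `age`) under the hazard h(x) = a * exp(b * (x - shift))."""
    cumulative_hazard = (a / b) * (math.exp(b * (age - shift)) - math.exp(b * (0 - shift)))
    return math.exp(-cumulative_hazard)

for shift in (0, 5):  # 0 = baseline cohort, 5 = a cohort whose mortality is "postponed" by 5 years
    p110 = gompertz_survival(110, shift=shift)
    p122 = gompertz_survival(122, shift=shift)
    print(f"shift={shift} years  P(reach 110)={p110:.2e}  P(reach 122)={p122:.2e}")
```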

“This depends on whether … there is a stable economic, political, and environmental environment that continues to support extreme longevity.” – David McCarthy

One group in particular seems to be at the forefront of the phenomenon, according to the researchers’ analysis.

“The model suggests that the oldest Japanese woman born in 1940 has a 50% chance of living past 130,” McCarthy told ZME Science.

“Of course, this depends on whether our model is an accurate description of how old-age mortality will change, and whether there is a stable economic, political, and environmental environment that continues to support extreme longevity,” he continued.

The big picture: While the idea of there being no maximum human lifespan is exciting — for one thing, it means immortality isn’t theoretically off the table — it also means a situation we’re already grappling with, the world’s aging population, could get much more pronounced in the future.

This just emphasizes the importance of figuring out now how we’re going to take care of this growing group of seniors in their twilight years, perhaps with wearable tech, advanced home robots, and autonomous food deliveries being part of the solution.

New lithium recycling method is cleaner and cheaper


It could help ensure we have all the batteries we need for the clean energy future.


An inexpensive, environmentally friendly technique for lithium recycling could help ensure we have enough of the valuable metal to power the clean energy future — if it works as well in the real world as it does in the lab.

The challenge: Transitioning to electric vehicles (EVs) is a key part of combating climate change, and because lithium-ion batteries can store a lot of energy for their size, they’re our best option (so far) for powering them.

The lithium needed to create those batteries is a finite resource, though, and mining it is environmentally destructive. Demand for lithium extends beyond EVs, too, as lithium-ion batteries are used in laptops, smartphones, and even TVs.

“This method … enables inexpensive, energy-efficient, and environmentally compatible recycling.” – Oleksandr Dolotko

Lithium recycling: While we can extract lithium and other useful metals from old batteries after they die, the standard recycling process is expensive, inefficient, and requires extreme heat or corrosive chemicals. It’s one reason why the vast majority of lithium batteries still don’t get recycled, and many end up in landfills.

Researchers at the Karlsruhe Institute of Technology (KIT) have developed a new lithium recycling technique that works at low temperatures, without harsh chemicals.

“The method can be applied … for a large range of commercially available lithium-ion batteries,” said first author Oleksandr Dolotko. “It enables inexpensive, energy-efficient, and environmentally compatible recycling.”

How it works: The KIT team’s lithium recycling method starts with adding some aluminum foil to a battery’s cathode and grinding it all up in a “ball mill” — a hollow, spinning cylinder containing balls that smash up whatever is in the container.

The mechanical force of the milling causes a chemical reaction between the aluminum and the cathode materials — though even the researchers aren’t exactly sure why.

“It is really hard to say how it happens,” Dolotko told Nature.

“One of the most challenging parts of the invention is finished – the technique works.” – Oleksandr Dolotko

After taking the ground-up mixture out of the mill, they add hot water (194°F, about 90°C). When the water is evaporated off, lithium carbonate — a useful material that can be used to make new batteries — is left.

Using this process, the researchers say they were able to recover 70% of the lithium in the cathode materials, and it worked with the cathode materials found in a range of different lithium-ion batteries.

For comparison, some battery recycling companies in the US claim to recover 95%-98% of the critical materials from lithium-ion batteries.

The cold water: In their study, published in Communications Chemistry, the researchers attempted to remove lithium from the materials typically found in the cathodes of lithium-ion batteries — they didn’t actually start with used up battery cathodes, which might contain impurities.

However, they don’t believe those impurities would have much of an effect on the lithium recycling process. 

“The discovered technology presented in this article can be applied to these materials without significant adjustments,” they wrote. “The reaction conditions and final recycling products are expected to be similar to the ones investigated in this work.”

The cathode is also just one part of a lithium-ion battery, and the authors concede that the recycling process might not work — or work as well — if we tried to just grind up entire batteries, meaning we’d still need the tedious step of breaking them down for recycling.

“These extra components, like a binder, graphitic anode, copper, or other additives or side products of the black mass preparation, might affect the mechanochemically-induced recycling process,” they write.

Looking ahead: The researchers are now focused on answering important questions that will determine whether their lithium recycling technique will be able to play a major role in shaping our clean energy future.

“Currently, we are taking part in two European consortia where this technology will be applied to industrially treated lithium-ion battery wastes, scaled up, and evaluated for its profitability and environmental impact,” said Dolotko.

“As inventors, we believe adopting this technology in the industry is real and achievable … One of the most challenging parts of the invention is finished – the technique works,” he continued. “Now it is time to bring it to another level, which is our next exciting step.”

Nasal COVID-19 Vaccine Shows Promise


COVID-19 vaccination continues to be a topic of intense research, even if it lacks the urgency experienced at the height of the pandemic. One area of focus is the development of mucosal vaccines that can be administered through the nose. These nasal vaccines are inexpensive to produce, easy to store and transport, and useful in places with limited access to trained medical staff.

Now, scientists have developed a live attenuated SARS-CoV-2 vaccine for the nose that shows promise by targeting the mucous membranes of the nose, mouth, throat, and lungs and confers better immunity than vaccines injected into muscle.

This work is published in Nature Microbiology in the paper, “Live-attenuated vaccine sCPD9 elicits superior mucosal and systemic immunity to SARS-CoV-2 variants in hamsters.”

In the fall of last year, two nasal COVID-19 vaccine formulations were approved for use in India and China, though approval has not yet been sought in Europe. Both contain modified adenoviruses that are self-attenuating.

The benefits of a nasal vaccine go far beyond being a needleless option. When a vaccine is injected, it confers immunity primarily in the blood and throughout the entire body. In this case, the immune system only detects and combats coronaviruses relatively late in an infection, as they enter the body via the mucous membranes of the upper respiratory tract. “It is here, therefore, that we need local immunity if we want to intercept a respiratory virus early on,” explained Jakob Trimpert, PhD, a research group leader at the Institute of Virology at Freie Universität Berlin.

Scientists tested the efficacy of the newly developed intranasal COVID-19 vaccine on hamster models. They compared “immune responses and preclinical efficacy of the mRNA vaccine BNT162b2, the adenovirus-vectored spike vaccine Ad2-spike and the live-attenuated virus vaccine candidate sCPD9 in Syrian hamsters.”

After double vaccination with the live attenuated vaccine (A), the nasal mucosa in the hamster model is well protected and shows hardly any changes from SARS-CoV-2 (B). The combination of live and mRNA vaccines (C) is also very effective, but the virus still finds small sites to attack (stained brown) in the nasal mucosa (D). In comparison, double intramuscular vaccines perform much worse in terms of protecting the nasal mucosa (E+F and G+H). They allow the virus to damage the upper tissue layers.
[Anne Voß, Institute of Veterinary Pathology, Freie Universität Berlin]

They found that after two doses of the vaccine, the virus could no longer replicate in the model organism. “We witnessed strong activation of the immunological memory, and the mucous membranes were very well protected by the high concentration of antibodies,” Trimpert explained. The vaccine could therefore also significantly reduce the transmissibility of the virus.

In addition, the scientists compared the efficacy of the live attenuated vaccine with that of vaccines injected into the muscle. To do so, they vaccinated the hamsters either twice with the live vaccine, once with the mRNA and once with the live vaccine, or twice with an mRNA or adenovirus-based vaccine. Then, after the hamsters were infected with SARS-CoV-2, they used tissue samples from the nasal mucosa and lungs to see how strongly the virus was still able to attack the mucosal cells. They also determined the extent of the inflammatory response using single-cell sequencing.

The live attenuated vaccine performed better than other vaccines. The best protection against SARS-CoV-2 was provided by double nasal vaccination, followed by the combination of a muscular injection of the mRNA vaccine and the subsequent nasal administration of the live attenuated vaccine.

The next step is safety testing: The researchers are collaborating with RocketVax, a Swiss start-up based in Basel, for a Phase I clinical trial in humans.

Have scientists found a “brake pedal” for aging?


A protein found in the brain may be able to slow the speed of aging.


With the passage of time, our body’s repair systems break down; nasty glitches accumulate in our DNA and proteins, metabolism stutters, and cells stop dividing.

We are all on a slippery slope to the grave, but research in worms, flies, mice, and monkeys shows that there is nothing inevitable about how fast we slide. Dietary and lifestyle changes – and, perhaps, anti-aging drugs – can slow aging and boost our span of healthy years.

A new discovery suggests that a protein in the brain may be a switch for controlling inflammation and, with it, a host of symptoms of aging. If scientists can figure out how to safely target it in humans, it could slow down the aging process.

A protein in the brain may be a switch for controlling inflammation and a host of symptoms of aging.

The inflamed brain: One promising technique to combat aging is reducing inflammation. Many diseases of old age are associated with chronic, low-level inflammation in the brain, organs, joints, and circulatory system — sometimes called “inflammageing.”

Inflammation in a part of the brain called the ventromedial hypothalamus, or VMH, seems to play a particularly important role in promoting aging throughout the body. That may be because the VMH has a wide range of functions, including control of appetite, body temperature, and glucose metabolism.

For the first time, research in mice has discovered that a protein in VMH cells acts like a brake pedal to reduce inflammation and slow the pace of aging. 

High levels of the protein, called Menin, protected the mice against thinning skin, declining bone mass, and failing memory, whereas low levels accelerated aging. This may be because Menin is a “scaffold protein,” which regulates the activity of multiple enzymes and genes involved in inflammation and metabolism.

“We speculate that the decline of Menin expression in the hypothalamus with age may be one of the driving factors of aging, and Menin may be the key protein connecting the genetic, inflammatory, and metabolic factors of aging,” explained lead researcher Lige Leng from the Institute of Neuroscience at Xiamen University in China.

Illustrated 3D structure of Menin.

Previous research by Leng and his colleagues had revealed that Menin in the brains of mice inhibited inflammation that was associated with depression-like behaviors in the animals.

Intriguingly, they found that Menin promoted the production of a neurotransmitter called D-serine, which in turn helped to slow cognitive decline. D-serine is an amino acid that can be taken as a dietary supplement and is also found naturally in soybeans, eggs, and fish.

“D-serine is a potentially promising therapeutic for cognitive decline,” Leng speculated.

The experiment: In the new study, the scientists established that the concentration of Menin in nerves within the VMH area of the brain also declined in lockstep with increasing age.

To explore further, they created “conditional knockout” mice, allowing them to switch off the gene that makes Menin in the VMH, while keeping it switched on everywhere else in the body.

When they turned off Menin production in the VMH of middle-aged mice, this led to multiple signs of premature aging. For example, compared with control animals, these mice had more inflammation, reduced bone mass, and thinner skin. They also performed worse on cognitive tests and had a shorter lifespan.

Conversely, when the scientists restored Menin production in the VMH of aged mice, this not only reduced inflammation but also improved their learning and memory, skin thickness, and bone mass. These animals also lived longer.

The improvements correlated with a boost in the concentration of D-serine in their hippocampus, a brain region that is crucial for learning and memory.

When the researchers gave aged mice supplemental D-serine for 3 weeks, the supplement appeared to reverse some of their cognitive decline, although other signs of aging were unaffected.

The idea that chronic, low-level inflammation in the hypothalamus drives aging is not new. In 2013, a different group of researchers revealed that they could slow aging in mice — and increase their lifespan — by inhibiting certain inflammatory immune molecules in the hypothalamus.

Following their discovery, Dongsheng Cai and his colleagues at the Albert Einstein College of Medicine in New York speculated that suppressing inflammation in the hypothalamus could optimize lifespan and combat age-related disease.

Cai told Freethink that the new study identifying Menin as a key player in this process was “interesting and novel.”

“Menin is known for being anti-inflammatory,” said Cai, who was not involved in the new research, “and this study found its physiological significance in hypothalamic control of aging.”

However, Cai said the role of “hypothalamic microinflammation” in aging was subtle, complex and dynamic, so it remained unclear how best to target it in humans.

“Whether Menin could represent an applicable target remains to be investigated,” he said.

It’s worth noting that aging involves a buildup of “senescent cells” – cells that have stopped dividing and reproducing – and at the same time a breakdown in the body’s ability to clear them away. Tellingly, senescent cells churn out molecules that promote chronic inflammation.

What we can do now: On the plus side, there is abundant evidence from studies in nematode worms, fruit flies, rodents, and monkeys that severe restriction of calorie intake – without skimping on essential nutrients – can combat age-related disease and increase lifespan in these animals by revitalizing the body’s repair systems.

Unfortunately, for humans, severe caloric restriction causes side effects, such as perpetual hunger, lack of energy, and reduced libido. However, a recent trial found that more moderate reductions in calorie intake can provide some improvements in signs of aging without as many of these downsides.

Intermittent fasting and time-restricted feeding also aim to reproduce the benefits of caloric restriction, in particular weight loss. But it remains to be seen whether such diets are safe and effective in the long term.

Drugs such as rapamycin, metformin, and resveratrol, which mimic some of the metabolic effects of calorie restriction, look like promising candidates for reducing age-related disease and extending lifespan. However, their long-term safety and efficacy for otherwise healthy people remains to be established.

Quenching inflammation seems to be the common denominator behind the efficacy of calorie restriction and anti-aging drugs.

For those with an aversion to strict diets and unproven anti-aging drugs, however, there are simpler ways to combat inflammation, such as exercise. Research suggests that eating less saturated fat and more polyunsaturated fat can also minimize chronic inflammation in the hypothalamus.