Stanford Researchers Unveil New Ultrafast Charging Aluminum-Ion Battery


Last week, Stanford University researchers unveiled a new aluminum-ion battery chemistry with the unique ability to charge or discharge in less than a minute.

The battery’s incredibly fast charging and discharging times are not its only breakthrough. It is also the first aluminum-based battery to achieve an operating voltage sufficient for common applications and to last longer than a few hundred charge-discharge cycles. In other words, it’s the first aluminum-ion battery to really work.

At the same time, the new battery is not without its limitations. There are a number of reasons why we probably won’t see it in our smartphones or electric vehicles anytime soon.

This post introduces the new aluminum-ion battery technology, then examines its key performance metrics and how they shape its potential applications.

What’s Inside the Aluminum-Ion Battery?

To store energy, a battery requires two materials with an electrochemical voltage difference between them and an electrolyte that impedes the flow of electrons but enables the flow of ions between the two materials.

The aluminum-ion battery introduced last week uses simple aluminum metal in its negative side (anode) and a specialized three-dimensional graphite foam in its positive side (cathode). The positive and negative sides of the battery are separated by a liquid electrolyte of 1-ethyl-3-methylimidazolium chloride and anhydrous aluminum chloride. This electrolyte was selected because it contains mobile AlCl4- ions, which are exchanged between the two sides of the battery as it charges and discharges.

To test the viability of their proposed battery cell, the Stanford researchers constructed an experimental cell, and then charged and discharged it at various current rates to determine: 1) how much energy the cell can store, 2) how quickly the cell can charge or discharge, and 3) how many times the cell can be repeatedly charged and discharged.

How much energy can it store?

The amount of energy a battery can store is determined by two factors: the inherent voltage difference between its positive and negative sides, and the amount of charge the battery materials can store in the form of ions and electrons.

The voltage difference between the two sides of the aluminum-ion battery is approximately 2-2.5 volts, depending on the battery’s state of charge. This is less than the typical voltage of a lithium-ion battery, which varies from approximately 3.5-4 volts. This means about twice as many aluminum-ion battery cells would have to be placed in series to match the voltage of a comparable lithium-ion battery pack.

The aluminum-ion battery can store about 70 ampere-hours of charge per kilogram of battery material. This is roughly half the charge capacity of a lithium-ion battery, which typically ranges from 120 to 160 ampere-hours per kilogram.

Put together, the aluminum-ion battery’s lower voltage and lower charge capacity give it about one quarter the energy density of a typical lithium-ion battery (about 40 watt-hours per kilogram versus about 160 watt-hours per kilogram for lithium-ion). Thus, powering your smartphone, laptop, or electric vehicle with an aluminum-ion battery would require a pack roughly four times the weight of a comparable lithium-ion battery.

How Much Electric Power Can It Produce?

Energy storage capacity is one important battery metric, but it isn’t the only one. Another crucial metric is a battery’s power capacity, or how quickly it can safely and reliably charge and discharge.

How quickly a battery can charge or discharge is determined by how quickly its materials can undergo an electrochemical reaction, and how quickly ions can diffuse inside the battery cell itself.

The Stanford researchers specifically designed their aluminum-ion battery to charge and discharge quickly. To speed up the motion of ions inside the positive side of the battery, they developed a unique three-dimensional graphite foam cathode with the internal gaps and surface area required to enable very fast ion movement.

Stanford’s aluminum-ion battery uses a unique three-dimensional graphite foam to speed up the movement of ions inside the battery, and unlock its unprecedented charging and discharging times. (Source: Lin et al., 2015)

This specialized cathode enables the aluminum-ion battery to charge and discharge at unprecedented rates. Researchers tested discharging and charging the battery at rates corresponding to a full charge or discharge in less than one minute. They found the battery could charge within a minute and then discharge over periods ranging from 48 seconds to 1.5 hours without suffering major capacity or efficiency losses.

The aluminum-ion battery’s fast charging and discharging times give it a decisive advantage over conventional lithium-ion batteries. On a mass basis, a hypothetical one-kilogram aluminum-ion battery could produce approximately 3,000 watts of power—enough to power about two to three typical residential homes, albeit for only a minute or less. On the other hand, a typical one-kilogram lithium-ion battery could only produce about 200-300 watts of power—about a tenth the power capacity of Stanford’s aluminum-ion battery.
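As a rough sanity check on the figures above (my own back-of-the-envelope arithmetic, not from the paper), average specific power is just energy density divided by discharge time:

```python
# Back-of-the-envelope check of the specific power figures quoted above.
# The inputs are the article's approximate numbers, not measured data.

def specific_power_w_per_kg(energy_density_wh_per_kg: float,
                            discharge_time_h: float) -> float:
    """Average power per kilogram when the full stored energy
    is delivered over the given discharge time."""
    return energy_density_wh_per_kg / discharge_time_h

# Aluminum-ion: ~40 Wh/kg emptied in about one minute (1/60 hour).
al_ion = specific_power_w_per_kg(40, 1 / 60)    # ~2,400 W/kg

# Lithium-ion: ~160 Wh/kg at an assumed safe ~45-minute discharge.
li_ion = specific_power_w_per_kg(160, 0.75)     # ~213 W/kg

print(round(al_ion), round(li_ion))  # 2400 213
```

The ~2,400 W/kg result is in the same ballpark as the article’s roughly 3,000 W figure; the lithium-ion discharge time is my assumption, chosen only to land in the quoted 200-300 W range.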

How Long Does It Last?

The aluminum-ion battery’s unique three-dimensional graphite foam cathode doesn’t just unlock the ability to charge and discharge quickly; it also enables the battery to charge and discharge thousands of times over without suffering significant material degradation and capacity loss.

The Stanford researchers tested how long their battery lasts under different conditions by charging it at a fast one-minute rate, and then discharging it at the same one-minute rate thousands of times over. Across over 7,500 of these fast charge-discharge cycles, the researchers observed essentially no fade in the battery’s capacity.

This stands in contrast with a lithium-ion battery, which can typically only deliver 1,000-3,000 charge-discharge cycles before its capacity fades significantly. Thus, there is potential for the aluminum-ion battery to last much longer than conventional lithium-ion batteries.

At the same time, the Stanford researchers have not shown how their battery stands up to the effects of time, so it is unclear whether the aluminum-ion battery can last long enough for electric grid applications. Because each charging or discharging process tested took only about a minute to complete, the 7,500 demonstrated charge-discharge cycles correspond to an operating period of only days to a few weeks. If other passive reactions cause the battery to fade over longer time periods, then the aluminum-ion battery might not last the years required by grid applications.
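The operating-period estimate is simple arithmetic; a minimal sketch, assuming one minute each for charge and discharge as in the test:

```python
# Wall-clock time represented by 7,500 back-to-back fast cycles.
cycles = 7500
minutes_per_cycle = 2            # ~1 min charge + ~1 min discharge
total_hours = cycles * minutes_per_cycle / 60
total_days = total_hours / 24
print(round(total_hours), round(total_days, 1))  # 250 10.4
```

So the continuous cycling demonstration spans only about ten days of operation, far short of the multi-year calendar lifetimes grid storage requires.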

What Might It Be Used For?

Based on the performance specifications identified above, Stanford’s aluminum-ion battery will be useful for applications that require very fast charging and discharging times and the capability to charge and discharge thousands of times without suffering capacity loss. The battery won’t be useful in applications that demand high energy density, because its energy density is only about a quarter that of existing lithium-ion batteries.

Thus, you shouldn’t expect to be using Stanford’s aluminum-ion battery in your smartphone, tablet, or electric vehicle anytime soon. While the battery might allow you to charge your smartphone or electric vehicle in under a minute, it would significantly increase the weight of your phone or vehicle.

However, there is a chance you will see the aluminum-ion battery deployed on the grid one day. One application that might be a perfect fit is providing balancing and reserve power to the electric grid, helping maintain the balance between total electricity supply and total electricity demand. This application requires high-power batteries that can charge and discharge many times without failing. If Stanford’s aluminum-ion battery can be constructed at a sufficiently low cost in the future, it might be used to provide this service on the grid.

Stanford engineer aims to connect the world with ant-sized radios


Costing just pennies to make, tiny radios-on-a-chip are designed to serve as controllers or sensors for the ‘Internet of Things.’

 

The tiny radio-on-a-chip gathers all the power it needs from the same electromagnetic waves that carry signals to its receiving antenna. (Photo courtesy of Amin Arbabian)

A Stanford engineering team has built a radio the size of an ant, a device so energy efficient that it gathers all the power it needs from the same electromagnetic waves that carry signals to its receiving antenna – no batteries required.

Designed to compute, execute and relay commands, this tiny wireless chip costs pennies to fabricate – making it cheap enough to become the missing link between the Internet as we know it and the linked-together smart gadgets envisioned in the “Internet of Things.”

“The next exponential growth in connectivity will be connecting objects together and giving us remote control through the web,” said Amin Arbabian, an assistant professor of electrical engineering who recently demonstrated this ant-sized radio chip at the VLSI Technology and Circuits Symposium in Hawaii.

Much of the infrastructure needed to enable us to control sensors and devices remotely already exists: We have the Internet to carry commands around the globe, and computers and smartphones to issue the commands. What’s missing is a wireless controller cheap enough that it can be installed on any gadget anywhere.

“How do you put a bi-directional wireless control system on every lightbulb?” Arbabian said. “By putting all the essential elements of a radio on a single chip that costs pennies to make.”

Cost is critical because, as Arbabian observed, “We’re ultimately talking about connecting trillions of devices.”

A three-year effort

Arbabian began the project in 2011 while he was completing a PhD program and working with Professor Ali Niknejad, director of the Wireless Research Center at the University of California, Berkeley. Arbabian’s principal collaborator was his wife, Maryam Tabesh, then also a student in Niknejad’s lab and now a Google engineer.

Arbabian joined the Stanford faculty in 2012 and brought a fourth person onto the team, Mustafa Rangwala, who was then a postgraduate student but is now with a startup company.

The work took time because Arbabian wanted to rethink radio technology from scratch.

“In the past when people thought about miniaturizing radios, they thought about it in terms of shrinking the size of the components,” he said. But Arbabian’s approach to dramatically reducing size and cost was different. Everything hinged on squeezing all the electronics found in, say, the typical Bluetooth device down into a single, ant-sized silicon chip.

This approach to miniaturization would have another benefit – dramatically reducing power consumption, because a single chip draws so much less power than conventional radios. In fact, if Arbabian’s radio chip needed a battery – which it does not – a single AAA battery would contain enough energy to run it for more than a century.

But to build this tiny device every function in the radio had to be reengineered.

The antenna

The antenna had to be small, one-tenth the size of a Wi-Fi antenna, and operate at the incredibly fast rate of 24 billion cycles per second (24 gigahertz). Standard transistors could not easily process signals that oscillate that fast, so the team had to improve basic circuit and electronic design.
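For a sense of scale (my own estimate, not a figure from the article): antenna length scales with wavelength, which is why a 24 GHz radio can get by with a millimeter-scale antenna. A minimal sketch, assuming a simple quarter-wave antenna:

```python
# Free-space wavelength at the chip's 24 GHz operating frequency,
# and the rough length of a quarter-wave antenna at that frequency.
c = 3.0e8                            # speed of light, m/s
f = 24e9                             # 24 billion cycles per second (24 GHz)
wavelength_mm = c / f * 1000         # ~12.5 mm
quarter_wave_mm = wavelength_mm / 4  # ~3 mm: ant-sized
print(round(wavelength_mm, 1), round(quarter_wave_mm, 1))  # 12.5 3.1
```

A few millimeters is indeed roughly ant-sized, which is consistent with the device described above.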

Many other such tweaks were needed but in the end Arbabian managed to put all the necessary components on one chip: a receiving antenna that also scavenges energy from incoming electromagnetic waves; a transmitting antenna to broadcast replies and relay signals over short distances; and a central processor to interpret and execute instructions. No external components or power are needed.

And this ant-sized radio can be made for pennies.

Based on his designs, the French semiconductor manufacturer STMicroelectronics fabricated 100 of these radios-on-a-chip. Arbabian has used these prototypes to prove that the devices work; they can receive signals, harvest energy from incoming radio signals and carry out commands and relay instructions.

Now Arbabian envisions networks of these radio chips deployed every meter or so throughout a house (they would have to be set close to one another because high-frequency signals don’t travel far).

He thinks this technology can provide the web of connectivity and control between the global Internet and smart household devices. “Cheap, tiny, self-powered radio controllers are an essential requirement for the Internet of Things,” said Arbabian, who has created a web page to share some ideas on what he calls battery-less radios.

 

Alzheimer’s patients to be treated with the blood of under-30s


Alzheimer’s patients in the US will be given transfusions of young people’s blood as part of a promising new treatment that’s nowhere near as crazy as it sounds.


Image: alexskopje/Shutterstock

This October, people with mild to moderate levels of Alzheimer’s disease will receive a transfusion of blood plasma from donors aged under 30.

The trial, run by researchers at the Stanford School of Medicine in the US, follows their revolutionary study involving lab mice, in which the blood plasma of young mice was injected into old mice, resulting in a marked improvement in their physical endurance and cognitive function.

Completed earlier this year, their research, combined with independent studies by a handful of research teams around the world, pinpointed a plasma-borne protein called growth differentiation factor 11 – or GDF11 – as a key factor in the young blood’s powers of rejuvenation.

“We saw these astounding effects,” lead researcher and professor of neurology at Stanford, Tony Wyss-Coray, told Helen Thomson at New Scientist. “The human blood had beneficial effects on every organ we’ve studied so far.”

Getting approval for their October trial has been fairly straightforward, he said, because blood transfusion therapy has such a long history of safe use in medical procedures, but the team will still keep a very careful eye on how the patients are progressing once they’ve received the young blood. “We will assess cognitive function immediately before and for several days after the transfusion, as well as tracking each person for a few months to see if any of their family or carers report any positive effects,” he told Thomson at New Scientist. “The effects might be transient, but even if it’s just for a day it is a proof of concept that is worth pursuing.”

Without wanting to get ahead of ourselves just yet: if the trial ends up being a raging success and the Stanford team can prove once and for all that young blood reverses the debilitating effects of Alzheimer’s and other age-related diseases, we’re going to need a whole lot more donors to meet demand around the world. Or, as Wyss-Coray told New Scientist, the hope is that continued research will identify the individual components in the plasma that contribute to the positive effects – such as GDF11 – and get these synthesised into new types of drugs.

“It would be great if we could identify several factors that we could boost in older people,” he said. “Then we might be able to make a drug that does the same thing. We also want to know what organ in the body produces these factors. If we knew that, maybe we could stimulate that tissue in older people.”

Stanford scientists use lasers and carbon nanotubes to look inside living brains.


A team of Stanford scientists has developed an entirely non-invasive technique that provides a view of blood flow in the brain. The tool could provide powerful insights into strokes and possibly Alzheimer’s disease.

This illustration shows how carbon nanotubes, once injected into the subject, can be fluoresced using near-infrared light in order to visualize the brain vasculature and track cerebral blood flow.

Some of the most damaging brain diseases can be traced to irregular blood delivery in the brain. Now, Stanford chemists have employed lasers and carbon nanotubes to capture an unprecedented look at blood flowing through a living brain.

The technique was developed for mice but could one day be applied to humans, potentially providing vital information in the study of stroke and migraines, and perhaps even Alzheimer’s and Parkinson’s diseases. The work is described in the journal Nature Photonics.

Current procedures for exploring the brain in living animals face significant tradeoffs. Surgically removing part of the skull offers a clear view of activity at the cellular level. But the trauma can alter the function or activity of the brain or even stimulate an immune response. Meanwhile, non-invasive techniques such as CT scans or MRI visualize function best at the whole-organ level; they cannot visualize individual vessels or groups of neurons.

The first step of the new technique, called near-infrared-IIa imaging, or NIR-IIa, calls for injecting water-soluble carbon nanotubes into a live mouse’s bloodstream. The researchers then shine a near-infrared laser over the rodent’s skull.

The light causes the specially designed nanotubes to fluoresce at wavelengths of 1,300-1,400 nanometers; this range represents a sweet spot for optimal penetration with very little light scattering. The fluorescing nanotubes can then be detected to visualize the blood vessels’ structure.

Amazingly, the technique allows scientists to view about three millimeters underneath the scalp and is fine enough to visualize blood coursing through single capillaries only a few microns across, said senior author Hongjie Dai, a professor of chemistry at Stanford. Furthermore, it does not appear to have any adverse effect on innate brain functions.

“The NIR-IIa light can pass through intact scalp skin and skull and penetrate millimeters into the brain, allowing us to see vasculature in an almost non-invasive way,” said first author Guosong Hong, who conducted the research as a graduate student in Dai’s lab and is now a postdoctoral fellow at Harvard. “All we have to remove is some hair.”

The technique could eventually be used in human clinical trials, Hong said, but will need to be tweaked. First, the light penetration depth needs to be increased to pass deep into the human brain. Second, injecting carbon nanotubes needs approval for clinical application; the scientists are currently investigating alternative fluorescent agents.

For now, though, the technique provides a new way to study human cerebrovascular diseases, such as stroke and migraines, in animal models. Other research has shown that Alzheimer’s and Parkinson’s diseases might elicit – or be caused in part by – changes in blood flow to certain parts of the brain, Hong said, and NIR-IIa imaging might offer a means of better understanding the role of healthy vasculature in those diseases.

“We could also label different neuron types in the brain with bio-markers and use this to monitor how each neuron performs,” Hong said. “Eventually, we might be able to use NIR-IIa to learn how each neuron functions inside of the brain.”

Stanford scientists observe brain activity in real time.


A Stanford Bio-X team of scientists has invented tools for watching nerves in mouse brains send signals in real time. The technique will make it easier to study brain function and help develop therapies for brain diseases.

Two Stanford scientists have worked together to create tools for observing, in real time, nerves signaling to one another in living animals. Observing the glowing trails of light spreading between connected nerves will help scientists understand how those individual signals add up to the complex collection of a person’s thoughts and memories.

“You want to know which neurons are firing, how they link together and how they represent information,” said Michael Lin, assistant professor of pediatrics and of bioengineering. “A good probe to do that has been on the wish list for decades.”

Lin and Mark Schnitzer, associate professor of biology and of applied physics, developed two different approaches to allow neuroscientists to read brain activity more quickly and sensitively. Their research papers on this topic will be published April 22 in Nature Neuroscience (Lin’s study) and Nature Communications (Schnitzer’s study).

Making thoughts light up

The inventions from the two research groups have a lot in common. Both involve proteins that light up as an electric current sweeps down the long tendrils that link nerves together. The scientists can insert these proteins into a specific group of brain cells that they want to study – say, cells in the part of the brain involved in memory, or cells that specifically inhibit other neurons from firing – and then watch those cells as they communicate in real time.

With these tools scientists can study how we learn, remember, navigate or carry out any other activity that requires networks of nerves working together. The tools can also help scientists understand what happens when those processes don’t work properly, as in Alzheimer’s or Parkinson’s diseases, or other disorders of the brain.

The proteins could also be inserted in neurons in a lab dish. Scientists developing drugs, for example, could expose human nerves in a dish to a drug and watch in real time to see if the drug changes the way the nerve fires. If those neurons in the dish represent a disease, like Parkinson’s disease, a scientist could look for drugs that cause those cells to fire more normally.

For more than a decade, neuroscientists have watched a proxy of nerves firing. Each time a nerve sends a signal, calcium floods into the cell and is then pumped back out in anticipation of the next signal. In fact, Schnitzer developed a miniature camera that he has been using to peer into the brains of mice to record these calcium waves. His lab has focused on studying the region of the brain involved in learning and memory.

But what Schnitzer sees through his tiny camera isn’t the actual nerve activity. He has been watching the shadows, and like any shadows they are a good proxy – but their shapes aren’t always realistic. Calcium stays in the neuron long after a signal has swept past, and may mask a second signal as it flashes by. Also, sometimes an electrical signal won’t trigger enough calcium to enter a cell for the protein to light up.

“Sensing calcium is insufficient for a full understanding of what’s happening,” Schnitzer said. “There are also many neuronal cell types that are not well studied with calcium probes.”

Frustrated with the state of effective tools for watching nerves fire, Lin and Schnitzer applied for and received a seed grant from Stanford Bio-X to develop one. These grants support high-risk projects that bring together engineering and biology know-how to solve problems in the field.

Separate approaches

Although the two labs had the same goal and ended up developing probes with similar qualities, they took very different approaches.

Lin’s lab focuses on engineering proteins that can be used as tools to study aspects of how the cell functions. He recently received a prestigious NIH Director’s Pioneer Award for work on one such protein, which can be switched on and off using light. Lin and a postdoctoral fellow in his lab, Francois St-Pierre, had an idea for generating a protein that would light up in response to a change in voltage, such as what happens when a nerve sends a signal.

Other scientists were working on the same problem, but they were not able to create a protein that responded quickly and strongly to a change in voltage. By looking at the structure of different voltage-sensing proteins, St-Pierre thought he could generate a better signal by putting the fluorescent element in the middle of a voltage-sensing protein. Despite some concerns that a big fluorescent element in the middle of the protein might disrupt its function, the combination worked. He and Lin named their probe ASAP – an acronym for a scientific description of the protein as well as a description of the protein’s speedy light. St-Pierre was first author on the Nature Neuroscience paper.

Like St-Pierre, postdoctoral scholar Yiyang Gong in Schnitzer’s lab recognized the need for a voltage sensing protein, but he took inspiration from a different approach. He had read about work by scientists attempting to detect voltage starting with bacterial proteins called rhodopsins – but without much success. Gong made significant modifications to that approach and, like St-Pierre, ended up with a protein that will embed in the nerve cell membrane and produce light when the nerve fires. Gong was first author on the Nature Communications paper.

“The two probes actually have similar performance, which is a coincidence because we arrived at them from very different directions,” Lin said.

Both groups show that their proteins work in neurons in a lab dish. Gong also inserted his protein in a group of neurons (called Purkinje neurons) in living mice and was able to record the protein’s flashing light as those nerves sent signals. He was able to see those nerves fire through a tiny glass window into the mouse brain, but the scientists say they could use a camera like the one Schnitzer developed to observe deeper parts of the brain.

The scientists say they view their probes as a starting point. They expect to continue refining the proteins to have properties that are optimized for different cell types or to produce different colors of light.

“I think there will be exciting applications enabled by what we have developed,” Schnitzer said.

While continuing to improve their voltage sensors, the team also got funding through a Bio-X Neuroventures grant, now associated with the Stanford Neurosciences Institute, to develop a novel way of imaging neural activity deep in the brain. That work will add one more tool for understanding how the complex array of brain connections makes us who we are.

Not Getting Sleepy? Why Hypnosis Doesn’t Work for All.


Not everyone is able to be hypnotized, and new research from the Stanford University School of Medicine shows how the brains of such people differ from those of people who can easily be hypnotized.

The study, published in the October issue of Archives of General Psychiatry, uses data from functional and structural magnetic resonance imaging to show that the areas of the brain associated with executive control and attention tend to have less activity in people who cannot be put into a hypnotic trance.

“There’s never been a brain signature of being hypnotized, and we’re on the verge of identifying one,” said David Spiegel, MD, the paper’s senior author and a professor of psychiatry and behavioral sciences. Such an advance would enable scientists to understand better the mechanisms underlying hypnosis and how it can be used more widely and effectively in clinical settings, added Spiegel, who also directs the Stanford Center for Integrative Medicine.

Spiegel estimates that one-quarter of the patients he sees cannot be hypnotized, though a person’s hypnotizability is not linked with any specific personality trait. “There’s got to be something going on in the brain,” he said.

Hypnosis is described as a trance-like state during which a person has a heightened focus and concentration. It has been shown to help with brain control over sensation and behavior, and has been used clinically to help patients manage pain, control stress and anxiety and combat phobias.

Hypnosis works by modulating activity in brain regions associated with focused attention, and this study offers compelling new details regarding neural capacity for hypnosis.

“Our results provide novel evidence that altered functional connectivity in [the dorsolateral prefrontal cortex] and [the dorsal anterior cingulate cortex] may underlie hypnotizability,” the researchers wrote in their paper.

For the study, Spiegel and his Stanford colleagues performed functional and structural MRI scans of the brains of 12 adults with high hypnotizability and 12 adults with low hypnotizability.

The researchers looked at the activity of three different networks in the brain: the default-mode network, used when one’s brain is idle; the executive-control network, which is involved in making decisions; and the salience network, which is involved in deciding something is more important than something else.

The findings, Spiegel said, were clear: Both groups had an active default-mode network, but highly hypnotizable participants showed greater co-activation between components of the executive-control network and the salience network. More specifically, in the brains of the highly hypnotizable group the left dorsolateral prefrontal cortex, an executive-control region of the brain, appeared to be activated in tandem with the dorsal anterior cingulate cortex, which is part of the salience network and plays a role in focusing of attention. By contrast, there was little functional connectivity between these two areas of the brain in those with low hypnotizability.

Spiegel said he was pleased that he and his team found something so clear. “The brain is complicated, people are complicated, and it was surprising we were able to get such a clear signature,” he explained.

Spiegel also said the work confirms that hypnotizability is less about personality variables and more about cognitive style. “Here we’re seeing a neural trait,” he said.

The authors’ next step is to further explore how these functional networks change during hypnosis. Spiegel and his team have recruited high- and low-hypnotizable patients for another study during which fMRI assessment will be done during hypnotic states. Funding for that work is being provided by the National Center for Complementary and Alternative Medicine.

Funding for this study came from the Nissan Research Center, the Randolph H. Chase, MD Fund II, the Jay and Rose Phillips Family Foundation and the National Institutes of Health.

The study’s first author is Fumiko Hoeft, MD, PhD, who was formerly an instructor at Stanford’s Center for Interdisciplinary Brain Sciences Research and is now an associate professor of psychiatry at UCSF. Other co-authors are John Gabrieli, PhD, a professor at MIT (then a professor of psychology at Stanford); Susan Whitfield-Gabrieli, a research scientist at MIT (then a science and engineering associate at Stanford); Brian Haas, PhD, an assistant professor at the University of Georgia (then a postdoctoral scholar in the Center for Interdisciplinary Brain Sciences Research at Stanford); Roland Bammer, PhD, associate professor of radiology; and Vinod Menon, PhD, professor of psychiatry and behavioral sciences.

Source: http://www.sciencedaily.com