Stanford Researchers Unveil New Ultrafast Charging Aluminum-Ion Battery


Last week, Stanford University researchers unveiled a new aluminum-ion battery chemistry with the unique ability to charge or discharge in less than a minute.

The battery’s incredibly fast charging and discharging times are not its only breakthrough. It is also the first aluminum-based battery to achieve an operating voltage sufficient for common applications and last longer than a few hundred charge-discharge cycles. In other words, it’s the first aluminum-ion battery to really work.

At the same time, the new battery is not without its limitations. There are a number of reasons why we probably won’t see it in our smart phones or electric vehicles anytime soon.

This post will introduce the new aluminum-ion battery technology, then examine its key performance metrics and how they affect its potential applications.

What’s Inside the Aluminum-Ion Battery?

To store energy, a battery requires two materials with an electrochemical voltage difference between them and an electrolyte that impedes the flow of electrons but enables the flow of ions between the two materials.

The aluminum-ion battery introduced last week uses simple aluminum metal in its negative side (anode) and a specialized three-dimensional graphite foam in its positive side (cathode). The positive and negative sides of the battery are separated by a liquid electrolyte of 1-ethyl-3-methylimidazolium chloride and anhydrous aluminum chloride. This electrolyte was selected because it contains mobile AlCl4- ions, which are exchanged between the two sides of the battery as it charges and discharges.

To test the viability of their proposed battery cell, the Stanford researchers constructed an experimental cell, and then charged and discharged it at various current rates to determine: 1) how much energy the cell can store, 2) how quickly the cell can charge or discharge, and 3) how many times the cell can be repeatedly charged and discharged.

How much energy can it store?

The amount of energy a battery can store is determined by two factors: the inherent voltage difference between its positive and negative sides, and the amount of charge the battery materials can store in the form of ions and electrons.

The voltage difference between the two sides of the aluminum-ion battery is approximately 2-2.5 volts, depending on the battery’s state of charge. This is less than the typical voltage of a lithium-ion battery, which varies from approximately 3.5-4 volts. This means about twice as many aluminum-ion battery cells would have to be placed in series to match the voltage of a comparable lithium-ion battery pack.

The aluminum-ion battery can store about 70 ampere-hours of charge per kilogram of battery material. This is approximately half the charge capacity of a lithium-ion battery, which ranges from 120 to 160 ampere-hours per kilogram.

Put together, the aluminum-ion battery’s lower voltage and lower charge capacity give it about one quarter the energy density of a typical lithium-ion battery (about 40 watt-hours per kilogram versus about 160 watt-hours per kilogram for lithium-ion). Thus, powering your smartphone, laptop, or electric vehicle with an aluminum-ion battery would require a battery about four times as heavy as a comparable lithium-ion battery.
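
As a rough sanity check on that weight penalty, here is a minimal sketch in Python using the approximate energy densities above (the 10 watt-hour target is an illustrative assumption for a smartphone-class battery, not a figure from the study):

```python
# Mass of battery material needed to store a given amount of energy,
# using the approximate energy densities quoted above.
AL_ION_WH_PER_KG = 40     # aluminum-ion, approximate
LI_ION_WH_PER_KG = 160    # lithium-ion, approximate

def pack_mass_kg(energy_wh, energy_density_wh_per_kg):
    return energy_wh / energy_density_wh_per_kg

target_wh = 10  # illustrative smartphone-class battery
print(pack_mass_kg(target_wh, AL_ION_WH_PER_KG))   # 0.25 kg with aluminum-ion cells
print(pack_mass_kg(target_wh, LI_ION_WH_PER_KG))   # ~0.06 kg with lithium-ion cells
```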

How Much Electric Power Can It Produce?

Energy storage capacity is one important battery metric, but it isn’t the only one. Another crucial metric is a battery’s power capacity, or how quickly it can safely and reliably charge and discharge.

How quickly a battery can charge or discharge is determined by how quickly its materials can undergo an electrochemical reaction, and how quickly ions can diffuse inside the battery cell itself.

The Stanford researchers specifically designed their aluminum-ion battery to charge and discharge quickly. To speed up the motion of ions inside the positive side of the battery, they developed a unique three-dimensional graphite foam cathode with the internal gaps and surface area required to enable very fast ion movement.

Stanford’s aluminum-ion battery uses a unique three-dimensional graphite foam to speed up the movement of ions inside the battery, and unlock its unprecedented charging and discharging times. (Source: Lin et al., 2015)

This specialized cathode enables the aluminum-ion battery to charge and discharge at unprecedented rates. Researchers tested discharging and charging the battery at rates corresponding to a full charge or discharge in less than one minute. They found the battery could charge within a minute and then discharge over periods ranging from 48 seconds to 1.5 hours without suffering major capacity or efficiency losses.

The aluminum-ion battery’s fast charging and discharging times give it a decisive advantage over conventional lithium-ion batteries. On a mass basis, a hypothetical one-kilogram aluminum-ion battery could produce approximately 3,000 watts of power—enough to power about two to three typical residential homes, albeit for only a minute or less. On the other hand, a typical one-kilogram lithium-ion battery could only produce about 200-300 watts of power—about a tenth the power capacity of Stanford’s aluminum-ion battery.
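
The 3,000-watt figure follows directly from dividing the stored energy by the discharge time. A minimal sketch, assuming the roughly 40 watt-hours per kilogram quoted above and the 48-second discharge the researchers tested:

```python
# Specific power from energy density and discharge time (approximate figures from above).
energy_density_wh_per_kg = 40    # aluminum-ion, approximate
discharge_time_s = 48            # fastest discharge tested

# Convert watt-hours to joules (1 Wh = 3600 J), then divide by the discharge time.
power_w_per_kg = energy_density_wh_per_kg * 3600 / discharge_time_s
print(power_w_per_kg)  # ~3000 watts per kilogram of battery material
```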

How Long Does It Last?

The aluminum-ion battery’s unique three-dimensional graphite foam cathode doesn’t just unlock the ability to charge and discharge quickly; it also enables the battery to charge and discharge thousands of times over without suffering significant material degradation and capacity loss.

The Stanford researchers tested how long their battery lasts under different conditions by charging it at a fast one-minute rate, and then discharging it at the same one-minute rate thousands of times over. Across over 7,500 of these fast charge-discharge cycles, the researchers observed essentially no fade in the battery’s capacity.

This stands in contrast with a lithium-ion battery, which can typically only deliver 1,000-3,000 charge-discharge cycles before its capacity fades significantly. Thus, there is potential for the aluminum-ion battery to last much longer than conventional lithium-ion batteries.

At the same time, the Stanford researchers have not shown how their battery stands up to the effects of time, so it is unclear if the aluminum-ion battery can last long enough to fulfill electric grid applications. Because each charging or discharging process tested took only one minute to complete, the 7,500 charge-discharge cycles demonstrated correspond to an operating period of only a few weeks. If there are other passive reactions that cause the battery to fade over longer time periods, then the aluminum-ion battery might not last the years required by grid applications.
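
That limited calendar time is easy to estimate. A minimal sketch, assuming each full cycle takes about one minute to charge and one minute to discharge, with no idle time in between:

```python
# Back-of-envelope estimate of the elapsed time covered by the cycling test.
cycles = 7500
minutes_per_cycle = 2   # ~1 minute charging + ~1 minute discharging, assuming no idle time

total_days = cycles * minutes_per_cycle / (60 * 24)
print(total_days)  # ~10 days of continuous cycling; with idle time, a few weeks at most
```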

What Might It Be Used For?

Based on the performance specifications identified above, Stanford’s aluminum-ion battery will be useful for applications that require very fast charging and discharging times and the capability to charge and discharge thousands of times without suffering capacity loss. The battery won’t be useful in applications that require high energy density, because its energy density is only about a quarter that of existing lithium-ion batteries.

Thus, you shouldn’t expect to be using Stanford’s aluminum-ion battery in your smartphone, tablet, or electric vehicle anytime soon. While the battery might allow you to charge your smartphone or electric vehicle in under a minute, it would significantly increase the weight of your phone or vehicle.

However, there is a chance you will see the aluminum-ion battery deployed on the grid one day. One application that might be a perfect fit for Stanford’s aluminum-ion battery is providing balancing and reserve power to the electric grid in order to maintain the balance between total electricity supply and total electric demand. This application requires high-power batteries with the capability to charge and discharge many times without failing. If Stanford’s aluminum-ion battery can be constructed at a sufficiently low cost in the future, it might be used to provide this service on the grid.

How Supercomputers Will Yield a Golden Age of Materials Science


In 1878 Thomas Edison set out to reinvent electric lighting. To develop small bulbs suitable for indoor use, he had to find a long-lasting, low-heat, low-power lighting element. Guided largely by intuition, he set about testing thousands of carbonaceous materials—boxwood, coconut shell, hairs cut from his laboratory assistant’s beard. After 14 months, he patented a bulb using a filament made of carbonized cotton thread. The press heralded it as the “Great Inventor’s Triumph in Electric Illumination.” Yet there were better filament materials. At the turn of the century, another American inventor perfected the tungsten filament, which we still use in incandescent lightbulbs today. Edison’s cotton thread became history.

Materials science, the process of engineering matter into new and useful forms, has come a long way since the days of Edison. Quantum mechanics has given scientists a deep understanding of the behavior of matter and, consequently, a greater ability to guide investigation with theory rather than guesswork. Materials development remains a painstakingly long and costly process, however. Companies invest billions designing novel materials, but successes are few and far between. Researchers think of new ideas based on intuition and experience; synthesizing and testing those ideas involve a tremendous amount of trial and error. It can take months to evaluate a single new material, and most often the outcome is negative. As our Massachusetts Institute of Technology colleague Thomas Eagar has found, it takes an average of 15 to 20 years for even a successful material to move from lab testing to commercial application. When Sony announced the commercialization of the lithium-ion battery in 1991, for example, it seemed like a sudden, huge advance—but in fact, it took hundreds or thousands of battery researchers nearly two decades of stumbling, halting progress to get to that point.

Yet materials science is on the verge of a revolution. We can now use a century of progress in physics and computing to move beyond the Edisonian process. The exponential growth of computer-processing power, combined with work done in the 1960s and 1970s by Walter Kohn and the late John Pople, who developed simplified but accurate solutions to the equations of quantum mechanics, has made it possible to design new materials from scratch using supercomputers and first-principle physics. The technique is called high-throughput computational materials design, and the idea is simple: use supercomputers to virtually study hundreds or thousands of chemical compounds at a time, quickly and efficiently looking for the best building blocks for a new material, be it a battery electrode, a metal alloy or a new type of semiconductor.

Most materials are made of many chemical compounds—battery electrodes, which are composites of several compounds, are good examples—but some are much simpler. Graphene, which has been widely hyped as the future of electronics, consists of a one-atom-thick sheet of carbon. Regardless of a material’s complexity, one thing is always true: its properties—density, hardness, shininess, electronic conductivity—are determined by the quantum characteristics of the atoms of which it is made. The first step in high-throughput materials design, then, is to virtually “grow” new materials by crunching thousands of quantum-mechanical calculations. A supercomputer arranges virtual atoms into hundreds or thousands of virtual crystal structures. Next, we calculate the properties of those virtual compounds. What do the crystal structures look like? How stiff are they? How do they absorb light? What happens when you deform them? Are they insulators or metals? We command the computer to screen for compounds with specific desirable properties, and before long, promising compounds rise to the top. At the end of the process, data generated during that investigation go back into a database that researchers can mine in the future.
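
In code, that generate, compute and filter loop looks roughly like the sketch below. It is a hypothetical illustration, not the Materials Project’s actual software: the candidate list and property values are made up, and in a real run the properties would come from expensive quantum-mechanical (density functional theory) calculations rather than a hard-coded table.

```python
# Hypothetical sketch of a high-throughput screening step: filter computed properties,
# then rank the survivors. Property values below are made up for illustration.
def screen_candidates(candidates, min_voltage, max_band_gap):
    promising = [
        c for c in candidates
        if c["is_stable"] and c["voltage"] >= min_voltage and c["band_gap"] <= max_band_gap
    ]
    # Rank so the most promising compounds rise to the top.
    return sorted(promising, key=lambda c: c["voltage"], reverse=True)

candidates = [
    {"formula": "compound-A", "is_stable": True,  "voltage": 3.9, "band_gap": 2.1},
    {"formula": "compound-B", "is_stable": False, "voltage": 4.5, "band_gap": 1.0},
    {"formula": "compound-C", "is_stable": True,  "voltage": 3.2, "band_gap": 2.8},
]
print(screen_candidates(candidates, min_voltage=3.0, max_band_gap=3.0))
```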

Since 2011 we have been leading a collaboration of researchers that aims to accelerate the computer-driven materials revolution. We call it the Materials Project. The goal is to build free, open-access databases containing the fundamental thermodynamic and electronic properties of all known inorganic compounds. To date, we have calculated the basic properties (the arrangement of the crystal structure, whether it is a conductor or an insulator, how it conducts light, and so on) of nearly all of the approximately 35,000 inorganic materials known to exist in nature. We have also calculated the properties of another few thousand that exist only in theory. So far some 5,000 scientists have registered for access to the database containing this information, and they have been using it to design new materials for solar cells, batteries, and other technologies.

We are not the only ones pursuing this approach. A consortium of researchers led by Stefano Curtarolo of Duke University has calculated tens of thousands of alloy systems; their research could yield lighter, stronger car frames, structural beams for skyscrapers, airplane skins, and so on. The Quantum Materials Informatics Project, which consists of researchers at Argonne National Laboratory, Stanford University and the Technical University of Denmark, has been using high-throughput computing to study catalytic processes on metal surfaces, which is particularly useful in energy research.

In the very near future, materials scientists will use high-throughput computing to design just about everything. We believe that this will lead to technologies that will reshape our world—breakthroughs that will transform computing, eliminate pollution, generate abundant clean energy and improve our lives in ways that are hard to imagine today.

The Materials Genome

The modern world is built on the success of materials science. The advent of transparent, conductive glass led to the touch screens on our smartphones. The reason those phones can beam information around the world at the speed of light is that materials scientists found a way to make glass free of impurity ions, enabling fiber-optic communications. And the reason those phones last a full day on a charge is because in the 1970s and 1980s, materials scientists developed novel lithium-storing oxide materials—the basis for the lithium-ion battery.

It was our work on batteries that brought us to high-throughput materials design in the first place. We had spent our careers doing computational materials design, but until a 2005 conversation with executives from Procter & Gamble (P&G), we did not think about what serious time on the world’s most powerful supercomputers could make possible. These P&G executives wanted to find a better cathode material for the alkaline batteries made by their Duracell division. They asked us a surprising question: Would it be possible to computationally screen all known compounds to look for something better? On reflection, we realized that the only real obstacles were computing time and money. They were happy to supply both. They committed $1 million to the project and gave our small team free rein over their supercomputing center.

We called our effort the Alkaline Project. We screened 130,000 real and hypothetical compounds and gave P&G a list of 200 that met the criteria the company asked for, all of which had the potential to be significantly better than its current chemistry. By then, we were convinced that high-throughput materials design was the future of our field. We added staff, raised resources and, in 2011, launched a collaboration between M.I.T. and Lawrence Berkeley National Laboratory, which we initially called the Materials Genome Project. Teams at the University of California, Berkeley, Duke University, the University of Wisconsin–Madison, the University of Kentucky, the Catholic University of Leuven in Belgium and other institutions have since joined in the effort, all of them contributing the data they generate to our free, open-access central data repository at Lawrence Berkeley.

Before long, we dropped “Genome” from the project name to distinguish it from an initiative that the White House Office of Science and Technology Policy was launching. And to be fair, the properties of chemical compounds are not really “genes”—they are not hereditary bits of information that provide a unique sequence of data. Still, a direct relation exists between the function or property of a material and its fundamental descriptors. Just as blue eyes can be correlated to a certain gene, the electronic conductivity of a material, for example, can be traced back to the properties and arrangements of the elements it is composed of.

These kinds of correlations are the basis of materials science. Here is a simple example: we know we can “tune” the color of minerals by introducing targeted defects into their crystal structure. Consider the ruby. Its red hue comes from an accidental 1 percent substitution of a chromium ion (Cr3+) for aluminum in the common mineral corundum (Al2O3). When the Cr3+ is forced into this environment, its electronic states become altered, which changes the way the material absorbs and emits light. Once we know the origin—the fundamental descriptor—of a property (in this case, the redness of a ruby), we can target it with synthetic methods. By tweaking those chemical defects, we can design new synthetic rubies with perfectly tuned colors.

The equations of quantum mechanics can tell us how to do that tweaking—what elements to use and how to arrange them. Yet the equations are so complex that they can really only be solved by computer. Say you want to screen a group of a few hundred compounds to see which ones have the properties you need. It takes an incredible amount of computing power to crunch those equations. Until recently, it simply was not possible, which is why so much of materials science has historically proceeded by trial and error. Now that we have the computing power, however, we can finally take advantage of the full predictive potential of quantum mechanics.

Suppose we are researching thermoelectric materials, which generate an electric current if they experience a large temperature gradient. (The reverse is also true: a thermoelectric material can sustain a temperature difference if you run a current through it; think instant cooling.) Society wastes an enormous amount of heat through combustion, industrial processing and refrigeration. If we had efficient, cheap and stable thermoelectric materials, we could capture this heat and reuse it as electricity. Thermoelectric devices could transform industrial waste heat into electricity to power factories. Heat from car exhaust pipes could power the electronics in the cockpit. Thermoelectrics could also provide on-demand solid-state cooling: little devices that we could weave into our clothing that, with a flip of a switch, would cool us down, no fans or compressors required.

One of the best thermoelectrics we know of today is lead telluride, which is far too toxic and expensive to use commercially. Suppose you are a researcher looking for a better thermoelectric material. Without high-throughput computing, this is how it would go: You would start by looking for known compounds that, like lead telluride, have a high Seebeck coefficient (a measure of the amount of electricity you get out for the temperature difference that goes in) but that, unlike lead telluride, are not made of rare, toxic or expensive elements. You would pore over tables and compare numbers. If you were lucky, you would come up with some candidate chemistries that, in theory, would seem like they could work. Then you would make those compounds in a lab. Physically synthesizing materials is an expensive, time-consuming and difficult job. Generally, you have no idea going in whether the new material will even be stable. If it is, you can measure its properties only after you have synthesized the compound and then repeated the process until you have a fairly pure sample. This can take months for each compound.
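
For context, the Seebeck coefficient translates a temperature difference directly into voltage. A rough, illustrative calculation (the coefficient and temperature values are assumptions, not data for any particular material):

```python
# Open-circuit voltage from the Seebeck effect: V = S * dT (illustrative values only).
seebeck_v_per_k = 200e-6   # 200 microvolts per kelvin, a ballpark value for a good thermoelectric
delta_t_k = 300            # e.g. hot exhaust versus ambient air

print(seebeck_v_per_k * delta_t_k)  # ~0.06 V per junction; practical devices stack many in series
```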

So far researchers have had no luck finding alternative thermoelectric materials. But they have not yet tried high-throughput computational materials design. That will soon change. Starting this year, we will begin working with researchers at the California Institute of Technology and five other institutions to perform high-throughput searches for new thermoelectric materials. We intend to keep at it until we find the chemical compounds that could make those energy-saving, miracle-cooling technologies a reality.

The Golden Age of Materials Design

Our ability to access, search, screen and compare materials data in an automated way is in its infancy. As this field grows, what could it yield? We will venture a few guesses.

Many promising clean-energy technologies are just waiting for advanced materials to become viable. Photocatalytic compounds such as titanium dioxide can be used to turn sunlight and water into oxygen and hydrogen, which can then be processed into liquid fuels. Other photocatalysts can do the same thing with carbon dioxide. The dream is an “artificial leaf” that can turn sunlight and air into methanol-like liquid fuels we could burn in cars and airplanes [see “Reinventing the Leaf,” by Antonio Regalado; Scientific American, October 2010]. Researchers at the Joint Center for Artificial Photosynthesis, a U.S. Department of Energy research center, are using high-throughput methods to look for materials that could make this technology feasible.

What about finding new metal alloys for use in those cars and airplanes? Reducing a vehicle’s weight by 10 percent can improve its fuel economy by 6 to 8 percent. U.S. industry already pours billions of dollars every year into research and development for metals and alloy manufacturing. Computer-guided materials design could multiply that investment. Significant advances in high-strength, lightweight and recyclable alloys would have a tremendous impact on the world economy through increased energy efficiency in transportation and construction.

Computing is another field in need of transformative materials. Recently we have seen many serious predictions that we are nearing the end of Moore’s law, which says that computing power doubles roughly every two years. We have long known that silicon is not the best semiconductor. It just happens to be abundant and well understood. What could work better? The key is to find materials that can quickly switch from conducting to insulating states. A team at U.C.L.A. has made extremely fast transistors from graphene. Meanwhile a group at Stanford has reported that it can flip the electrical on/off switch in magnetite in one trillionth of a second—thousands of times faster than transistors now in use. High-throughput materials design will enable us to sort through these possibilities.

This list is much longer. Researchers are using computational materials design to develop new superconductors, catalysts and scintillator materials. Those three things would transform information technology, carbon capture and sequestration, and the detection of nuclear materials.

Computer-driven materials design could also produce breakthroughs that are hard to imagine. Perhaps we could invent a new liquid fuel based on silicon instead of carbon, which would deliver more energy than gasoline while producing environmentally benign reaction products such as sand and water. People have talked about the idea for decades, but no one has figured out a workable formula. High-throughput materials design could at least tell us if such a thing is possible or if we should focus our efforts elsewhere.

All of this is why we believe we are entering a golden age of materials design. Massive computing power has given human beings greater power to turn raw matter into useful technologies than they have ever had. It is a good thing, too. To help us deal with the challenges of a warming, increasingly crowded planet, this golden age cannot start soon enough.

New device stores electricity on silicon chips.


Solar cells that produce electricity 24/7, not just when the sun is shining. Mobile phones with built-in power cells that recharge in seconds and work for weeks between charges.

These are just two of the possibilities raised by a novel supercapacitor design invented by materials scientists at Vanderbilt University that is described in a paper published in the Oct. 22 issue of the journal Scientific Reports.

It is the first supercapacitor that is made out of silicon so it can be built into a silicon chip along with the microelectronic circuitry that it powers. In fact, it should be possible to construct these power cells out of the excess silicon that exists in the current generation of solar cells, sensors, mobile phones and a variety of other electromechanical devices, providing a considerable cost savings.

“If you ask experts about making a supercapacitor out of silicon, they will tell you it is a crazy idea,” said Cary Pint, the assistant professor of mechanical engineering who headed the development. “But we’ve found an easy way to do it.”

Instead of storing energy in chemical reactions the way batteries do, “supercaps” store electricity by assembling ions on the surface of a porous material. As a result, they tend to charge and discharge in minutes, instead of hours, and operate for a few million cycles, instead of a few thousand cycles like batteries.

These properties have allowed commercial supercapacitors, which are made out of activated carbon, to capture a few niche markets, such as storing energy recovered by regenerative braking systems on buses and electric vehicles and providing the bursts of power required to adjust the blades of giant wind turbines to changing wind conditions. Supercapacitors still lag behind the electrical energy storage capability of lithium-ion batteries, so they are too bulky to power most consumer devices. However, they have been catching up rapidly.

Research to improve the energy density of supercapacitors has focused on carbon-based nanomaterials like graphene and nanotubes. Because these devices store electrical charge on the surface of their electrodes, the way to increase their energy density is to increase the electrodes’ surface area, which means making surfaces filled with nanoscale ridges and pores.
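
The reasoning follows from the standard capacitor energy relation. A minimal sketch, assuming a simple double-layer model in which capacitance scales with electrode surface area (all numbers are illustrative, not measurements from the Vanderbilt device):

```python
# Illustrative double-layer model: stored energy E = 1/2 * C * V^2, with capacitance C
# proportional to electrode surface area. Numbers are illustrative only.
def stored_energy_j(capacitance_f, voltage_v):
    return 0.5 * capacitance_f * voltage_v ** 2

capacitance_per_area = 0.1   # farads per square meter, an assumed double-layer value
voltage_v = 2.7              # typical commercial supercapacitor cell voltage

for area_m2 in (100, 1000):  # nanostructuring multiplies the accessible surface area
    energy = stored_energy_j(capacitance_per_area * area_m2, voltage_v)
    print(area_m2, "m^2 ->", round(energy, 1), "J")
```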

“The big challenge for this approach is assembling the materials,” said Pint. “Constructing high-performance, functional devices out of nanoscale building blocks with any level of control has proven to be quite challenging, and when it is achieved it is difficult to repeat.”

So Pint and his research team – graduate students Landon Oakes, Andrew Westover and post-doctoral fellow Shahana Chatterjee – decided to take a radically different approach: using porous silicon, a material with a controllable and well-defined nanostructure made by electrochemically etching the surface of a silicon wafer.

This allowed them to create surfaces with optimal nanostructures for supercapacitor electrodes, but it left them with a major problem. Silicon is generally considered unsuitable for use in supercapacitors because it reacts readily with some of the chemicals in the electrolytes that provide the ions that store the electrical charge.

With experience in growing carbon nanostructures, Pint’s group decided to try to coat the porous silicon with carbon. “We had no idea what would happen,” said Pint. “Typically, researchers grow graphene from silicon-carbide materials at temperatures in excess of 1400 degrees Celsius. But at lower temperatures – 600 to 700 degrees Celsius – we certainly didn’t expect graphene-like material growth.”

When the researchers pulled the porous silicon out of the furnace, they found that it had turned from orange to purple or black. When they inspected it under a powerful scanning electron microscope they found that it looked nearly identical to the original material but it was coated by a layer of graphene a few nanometers thick.

When the researchers tested the coated material they found that it had chemically stabilized the silicon surface. When they used it to make supercapacitors, they found that the graphene coating improved energy densities by over two orders of magnitude compared to supercapacitors made from uncoated porous silicon, and made them significantly better than commercial supercapacitors.

The graphene layer acts as an atomically thin protective coating. Pint and his group argue that this approach isn’t limited to graphene. “The ability to engineer surfaces with atomically thin layers of materials combined with the control achieved in designing porous materials opens opportunities for a number of different applications beyond energy storage,” he said.

“Despite the excellent device performance we achieved, our goal wasn’t to create devices with record performance,” said Pint. “It was to develop a road map for integrated energy storage. Silicon is an ideal material to focus on because it is the basis of so much of our modern technology and applications. In addition, most of the silicon in existing devices remains unused since it is very expensive and wasteful to produce thin wafers.”

Pint’s group is currently using this approach to develop supercapacitors that can be formed in the excess materials or on the unused back sides of solar cells and sensors. The supercapacitors would store the excess electricity that the solar cells generate at midday and release it when demand peaks in the afternoon.

“All the things that define us in a modern environment require electricity,” said Pint. “The more that we can integrate power storage into existing materials and devices, the more compact and efficient they will become.”

Nanoparticles from rice husks set for use in batteries.


Rice farmers may soon have a more lucrative use for a common low-value byproduct: rice husks, the hard, protective coverings around the edible grains.

The husks contain natural silicon nanoparticles that can easily be extracted and used in battery manufacture, a study shows.

The simple and low-cost process for recovering the nanoparticles and using them in lithium-ion batteries, which are commonly found in portable electronics, was published in Scientific Reports last month (29 May).

Silicon nanomaterials have various industrial applications but they are complicated, costly and energy-intensive to produce.

Speed read

  • Inedible rice husks contain silicon nanoparticles that can be extracted for use in batteries
  • Rice husks are usually low-value, but farmers could sell them to battery manufacturers
  • Researchers hope to link up with battery firms to push for rice husk use

 

Meanwhile, 120 million tonnes of rice husks are produced as byproducts of rice agriculture worldwide each year.

“The novelty of this paper is the high-yield and low-cost recovery of nano-structured silicon from an agricultural byproduct. And the morphology of the recovered silicon is ideal for direct application in high-energy, lithium-ion batteries,” Yi Cui, study coauthor and associate professor at Stanford University, United States, tells SciDev.Net.

“A lot of developing countries, such as China and India, produce a huge amount of rice husks each year. Currently, the rice husks only have some low added-value applications,” he says.

The new procedure, Cui says, could allow these countries to use the husks to build batteries, and his team is trying to establish links with battery companies to achieve this.

“China plays an important role in battery manufacturing, so the rice nano-silicon could be locally integrated into battery manufacturing,” he adds.

Jie Xiao, a senior scientist at the Pacific Northwest National Laboratory, United States, says the “approach is interesting and promising” but warns that “more research needs to take place before this method would be useful on a broad scale”.

Farmers will probably be unable to directly sell rice husks to battery companies since most of these firms do not make their own raw materials, she says. “However, companies that supply [battery] electrode materials, or chemical factories, could build [production] lines to process husks and harvest [their] silicon for battery use,” she adds.

Cary Hayner, chief technology officer of SiNode Systems — a materials venture based out of Northwestern University that is commercialising novel silicon-based battery anode technology — says the study demonstrates what could be a tremendous opportunity to make use of an abundant agricultural byproduct.

“Farmers would be best served by selling their rice husks to a company that will transform the husks into the useful silicon,” he says.

Source: http://www.scidev.net