New formula for fast, abundant hydrogen production may help power fuel cells.


Scientists in Lyon, a French city famed for its cuisine, have discovered a quick-cook recipe for copious volumes of hydrogen (H2).

The breakthrough suggests a better way of producing the gas that propels rockets and energizes battery-like fuel cells. In a few decades, it could even help the world meet key energy needs—without carbon emissions contributing to the greenhouse effect and climate change.

It also has profound implications for the abundance and distribution of life, helping to explain the astonishingly widespread microbial communities that dine on hydrogen deep beneath the continents and seafloor.

Describing how to greatly speed up nature’s process for producing hydrogen will be a highlight among many presentations by Deep Carbon Observatory (DCO) experts at the American Geophysical Union’s annual Fall Meeting in San Francisco, Dec. 9 to 13.

The DCO is a 10-year international science collaboration unraveling the mysteries of Earth’s inner workings—deep life, energy, chemistry, and fluid movements.

Muriel Andreani, Isabelle Daniel, and Marion Pollet-Villard of University Claude Bernard Lyon 1 discovered the quick recipe for producing hydrogen:

In a microscopic high-pressure cooker called a diamond anvil cell (within a tiny space about as wide as a pencil lead), combine ingredients: aluminum oxide, water, and the mineral olivine. Set at 200 to 300 degrees Celsius and 2 kilobars pressure—comparable to conditions found at twice the depth of the deepest ocean. Cook for 24 hours. And voilà.

Dr. Daniel, a DCO leader, explains that scientists have long known nature’s way of producing hydrogen. When water meets the ubiquitous mineral olivine under pressure, the rock reacts with oxygen (O) atoms from the H2O, transforming olivine into another mineral, serpentine—characterized by a scaly, green-brown surface appearance like snake skin. Olivine is a common yellow to yellow-green mineral made of magnesium, iron, silicon, and oxygen.

The process also leaves hydrogen (H2) molecules divorced from their marriage with oxygen atoms in water.
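In simplified textbook form, serpentinization of olivine’s two end-member compositions can be written as follows; these generic reactions are standard geochemistry, not the specific stoichiometry reported in the Lyon experiments:

```latex
% Mg end-member (forsterite): olivine + water -> serpentine + brucite
2\,\mathrm{Mg_2SiO_4} + 3\,\mathrm{H_2O} \;\rightarrow\; \mathrm{Mg_3Si_2O_5(OH)_4} + \mathrm{Mg(OH)_2}

% Fe end-member (fayalite): oxidation of iron releases hydrogen
3\,\mathrm{Fe_2SiO_4} + 2\,\mathrm{H_2O} \;\rightarrow\; 2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} + 2\,\mathrm{H_2}
```

It is the iron-bearing reaction that liberates H2, which is why earlier attempts to speed up the process concentrated on olivine’s iron content.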

The novelty in the discovery, quietly published in a summer edition of the journal American Mineralogist, is how profoundly aluminum accelerates the process.

Finding the reaction completed in the diamond-enclosed micro space overnight, instead of over months as expected, left the scientists amazed. The experiments produced H2 some 7 to 50 times faster than the natural “serpentinization” of olivine.

Over decades, many teams looking to achieve this same quick hydrogen result focused mainly on the role of iron within the olivine, Dr. Andreani says. Introducing aluminum into the hot, high-pressure mix produced the eureka moment.

Dr. Daniel notes that aluminum is Earth’s 5th most abundant element and is therefore usually present in the natural serpentinization process. The experiment, however, introduced a quantity of aluminum unrealistic in nature.

Jesse Ausubel, of The Rockefeller University and a founder of the DCO program, says current methods for commercial hydrogen production for fuel cells or to power rockets “usually involve the conversion of methane (CH4), a process that produces the greenhouse gas carbon dioxide (CO2) as a byproduct. Alternatively, we can split water molecules at temperatures of 850 degrees Celsius or more—and thus need lots of energy and extra careful engineering.”
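The conventional route Ausubel refers to, steam reforming of methane followed by the water-gas shift, has the standard textbook overall form (this generic equation is not taken from the article):

```latex
\mathrm{CH_4} + 2\,\mathrm{H_2O} \;\rightarrow\; \mathrm{CO_2} + 4\,\mathrm{H_2}
```

Each unit of hydrogen produced this way thus comes with a CO2 byproduct.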

“Aluminum’s ability to catalyze hydrogen production at a much lower temperature could make an enormous difference. The cost and risk of the process would drop a lot.”

“Scaling this up to meet global energy needs in a carbon-free way would probably require 50 years,” he adds. “But a growing market for hydrogen in fuel cells could help pull the process into the market.”

“We still need to solve problems for a hydrogen economy, such as storing the hydrogen efficiently as a gas in compact containers, or optimizing methods to turn it into a metal, as pioneered by Russell Hemley of the Carnegie Institution’s Geophysical Laboratory, another co-founder of the DCO.”

Deep energy, Dr. Hemley notes, is typically thought of in terms of geothermal energy available from heat deep within Earth, as well as subterranean fluids that can be burned for energy, such as methane and petroleum. What may strike some as new is that there is also chemical energy in the form of hydrogen produced by serpentinization.

At the time of the AGU Fall Meeting, Dr. Andreani will be taking a lead role with Javier Escartin of the Centre National de la Recherche Scientifique in a 40-member international scientific exploration of fault lines along the Mid-Atlantic Ridge. It is a place where the African and American continents continue to separate at an annual rate of about 20 mm (0.8 inches) and rock is forced up from the mantle only 4 to 6 km (2.5 to 3.7 miles) below the thin ocean floor crust. The study will advance several DCO goals, including the mapping of world regions where deep life-supporting H2 is released through serpentinization.

Aboard the French vessel Pourquoi Pas?, using a deep sea robot from the French Research Institute for Exploitation of the Sea (IFREMER), and a deep-sea vehicle from Germany’s Leibniz Institute of Marine Sciences (GEOMAR), the team includes researchers from France, Germany, USA, Wales, Spain, Norway and Greece.

Dr. Daniel notes that until now it has been a scientific mystery how the rock + water + pressure formula produces enough hydrogen to support the chemical-loving microbes and other forms of life abounding in the hostile environments of the deep.

With the results of the experiment in France, “for the first time we understand why and how we have H2 produced at such a fast rate. When you take into account aluminum, you are able to explain the amount of life flourishing on hydrogen,” says Dr. Daniel.

Indeed, DCO scientists hypothesize that hydrogen was what fed the earliest life on primordial planet Earth—first life’s first food.

And, she adds: “We believe the serpentinization process may be underway on many planetary bodies—notably Mars. The reaction may take one day or one million years but it will occur whenever and wherever there is some water present to react with olivine—one of the most abundant minerals in the solar system.”

Enigmatic evidence of a deep subterranean microbe network

Meanwhile, the genetic makeup of Earth’s deep microbial life is being revealed through DCO research underway by Matt Schrenk of Michigan State University, head of DCO’s “Rock-Hosted Communities” initiative, Tom McCollom of the University of Colorado, Boulder, Steve D’Hondt of the University of Rhode Island, and many other associates.

At AGU, they will report the results of deep sampling from opposite sides of the world, revealing enigmatic evidence of a deep subterranean microbe network.

Using DNA, researchers are finding hydrogen-metabolizing microbes in rock fractures deep beneath the North American and European continents that are highly similar to samples a Princeton University group obtained from deep rock fractures 4 to 5 km (2.5 to 3 miles) down a Johannesburg-area mine shaft. These DNA sequences are also highly similar to those of microbes in the rocky seabeds off the North American northwest and northeastern Japanese coasts.

“Two years ago we had a scant idea about what microbes are present in subsurface rocks or what they eat,” says Dr. Schrenk. “Since then a number of studies have vastly expanded that database. We’re getting this emerging picture not only of what sort of organisms are found in these systems but some consistency between sites globally—we’re seeing the same types of organisms everywhere we look.”

“It is easy to understand how birds or fish might be similar oceans apart, but it challenges the imagination to think of nearly identical microbes 16,000 km apart from each other in the cracks of hard rock at extreme depths, pressures, and temperatures,” he says.

A hydrogen bubble is quickly released as olivine meets water and aluminum oxide under extreme pressure and heat. Credit: Muriel Andreani, University of Lyon-1

“In some deep places, such as deep-sea hydrothermal vents, the environment is highly dynamic and promotes prolific biological communities,” says Dr. McCollom. “In others, such as the deep fractures, the systems are isolated with a low diversity of microbes capable of surviving such harsh conditions.”

“The collection and coupling of microbiological and geochemical data made possible through the Deep Carbon Observatory is helping us understand and describe these phenomena.”

How water behaves deep within Earth’s mantle

Among other major presentations, DCO investigators will introduce a model that offers new insights into water/rock interactions at extreme pressures 150 km (93 miles) or more below the surface, well into Earth’s upper mantle. Until now, most models have been limited to 15 km, one-tenth that depth.

“The DCO gives a happy twist to the phrase ‘We are in deep water’,” says researcher Dimitri Sverjensky of Johns Hopkins University, Baltimore MD.

Dr. Sverjensky’s work, accepted for publication by the Elsevier journal Geochimica et Cosmochimica Acta, is expected to revolutionize understanding of deep Earth water chemistry and its impacts on subsurface processes as diverse as diamond formation, hydrogen accumulation, the transport of diverse carbon-, nitrogen- and sulfur-bearing species in the mantle, serpentinization, mantle degassing, and the origin of Earth’s atmosphere.

In deep Earth, despite extreme high temperatures and pressures, water is a fluid that circulates and reacts chemically with the rocks through which it passes, changing the minerals in them and undergoing alteration itself—a key agent for transporting carbon and other chemical elements. Understanding what water is like and how it behaves in Earth’s deep interior is fundamental to understanding the deep carbon cycle, deep life, and deep energy.

This water-rock interaction produces valuable ore deposits, creates the chemicals on which deep life and deep energy depend, influences the generation of magma that erupts from volcanoes—even the occurrence of earthquakes. Humanity gets glimpses of this water in hot springs.

Says Dr. Sverjensky: “The new model may enable us to predict water-rock interaction well into Earth’s upper mantle and help visualize where on Earth H2 production might be underway.”

The DCO is now in the 5th year of a decade-long adventure to probe Earth’s deepest geo-secrets: How much carbon is stored inside Earth? What are the reservoirs of that carbon? How does carbon move among reservoirs? How much carbon released from Earth’s deep interior is primordial and how much is recycled from the surface? Are there deep abiotic sources of hydrocarbons? What is the nature and extent of deep microbial life? And did deep Earth chemistry play a role in life’s origins?

The $500 million global collaboration is led by Dr. Robert Hazen, Senior Staff Scientist at the Geophysical Laboratory, Carnegie Institution of Washington.

Says Dr. Hazen: “Bringing together experts in microbes, volcanoes, the micro-structure of rocks and minerals, fluid movements, and more is novel. Typically these experts don’t connect with each other. Integrating such diversity in a single scientific endeavor is producing insights unavailable until the DCO.”

Ninety percent or more of Earth’s carbon is thought to be locked away or in motion deep underground, he notes, a hidden dimension of the planet as poorly understood as it is profoundly important to life on the surface.

 

The galaxy’s ancient brown dwarf population revealed.


A team of astronomers led by Dr David Pinfield at the University of Hertfordshire has discovered two of the oldest brown dwarfs in the Galaxy. These ancient objects are moving at speeds of 100-200 kilometres per second, much faster than normal stars and other brown dwarfs, and are thought to have formed when the Galaxy was very young, more than 10 billion years ago. Intriguingly, the scientists believe they could be part of a vast and previously unseen population of objects. The researchers publish their results in the Oxford University Press journal Monthly Notices of the Royal Astronomical Society.

Brown dwarfs are star-like objects but are much less massive (with less than 7% of the Sun’s mass), and do not generate internal heat through nuclear fusion like stars. Because of this they simply cool and fade with time, and very old brown dwarfs become very cool indeed – the new discoveries have temperatures of 250-600 degrees Celsius, much cooler than stars (in comparison the Sun has a surface temperature of 5600 degrees Celsius).


Pinfield’s team identified the new objects in the survey made by the Wide-field Infrared Survey Explorer (WISE), a NASA observatory that scanned the mid-infrared sky from orbit in 2010 and 2011. The objects are named WISE 0013+0634 and WISE 0833+0052, and they lie in the Pisces and Hydra constellations respectively. Additional measurements confirming the nature of the objects came from large ground-based telescopes (Magellan, Gemini, VISTA and UKIRT). The infrared sky is full of faint red sources, including reddened stars, faint background galaxies (at large distances from our own Milky Way) and nebulous gas and dust. Identifying cool brown dwarfs amongst this messy mixture is akin to finding needles in a haystack. But Pinfield’s team developed a new method that takes advantage of the way in which WISE scans the sky multiple times. This allowed them to identify cool brown dwarfs that were fainter than other searches had revealed.
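The gain from repeated scanning follows from basic signal averaging: noise in independent exposures averages down as the square root of their number, so faint sources emerge from the stack. A minimal sketch of the principle (a generic illustration, not the team’s actual pipeline, which is considerably more sophisticated):

```python
import math

def coadd_snr(flux, sigma_per_frame, n_frames):
    """Signal-to-noise after averaging n independent frames:
    the noise shrinks as sqrt(n), so SNR grows as sqrt(n)."""
    return flux / (sigma_per_frame / math.sqrt(n_frames))

# A source lost in the noise of one pass (SNR ~ 1) becomes a solid
# detection after ~25 passes (SNR ~ 5).
single = coadd_snr(1.0, 1.0, 1)
stacked = coadd_snr(1.0, 1.0, 25)
print(single, stacked)
```

This is why a survey that revisits each patch of sky many times can reach fainter, cooler objects than a single-pass search.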

The team of scientists then studied the infrared light emitted from these objects, which is unusual compared to that of typical, slower-moving brown dwarfs. The spectral signatures of their light reflect their ancient atmospheres, which are almost entirely made up of hydrogen rather than having the more abundant heavier elements seen in younger stars. Pinfield comments on their venerable ages and high speeds: “Unlike in other walks of life, the Galaxy’s oldest members move much faster than its younger population”.

Stars near the Sun (in the so-called local volume) are made up of 3 overlapping populations – the thin disk, the thick disk and the halo. The thick disk is much older than the thin disk, and its stars move up and down at a higher velocity. Both these disk components sit within the halo that contains the remnants of the first stars that formed in the Galaxy.

Thin disk objects dominate the local volume, with thick disk and halo objects being much rarer. About 97% of local stars are thin disk members, while just 3% are from the thick disk or halo. Brown dwarf population numbers probably follow those of stars, which explains why these fast-moving thick-disk/halo objects are only now being discovered.

There are thought to be as many as 70 billion brown dwarfs in the Galaxy’s thin disk, and the thick disk and halo occupy much larger Galactic volumes. So even a small (3%) local population signifies a huge number of ancient brown dwarfs in the Galaxy. “These two brown dwarfs may be the tip of an iceberg and are an intriguing piece of astronomical archaeology”, said Pinfield. “We have only been able to find these objects by searching for the faintest and coolest things possible with WISE. And by finding more of them we will gain insight into the earliest epoch of the history of the Galaxy.”

Silicon Supercapacitor Powers Phones for Weeks on Single Charge.


Material scientists at Vanderbilt University have developed a supercapacitor made out of silicon. Previously thought to be kind of a crazy idea, the silicon capacitor can be built into a chip — which could give cellphones weeks of life from one charge, or solar cells that produce energy with or without the sun. Pretty sweet deal.

Published in Scientific Reports, the first-ever silicon supercap stores energy by gathering ions on the surface of the porous material. Unlike batteries, which work on chemical reactions, silicon supercaps can be charged in minutes and last way longer. Silicon had been considered unsuitable for supercaps because of the way it reacts with the electrolytes that supply the energy-storing ions.

“If you ask experts about making a supercapacitor out of silicon, they will tell you it is a crazy idea,” said assistant professor Cary Pint, who headed the development team at Vanderbilt. “But we’ve found an easy way to do it.”

Pint’s team coated the silicon in carbon — well, technically a few nanometers of graphene — and it stabilized the surface of the silicon, making it perfect for storing energy.

“All the things that define us in a modern environment require electricity,” said Pint. “The more that we can integrate power storage into existing materials and devices, the more compact and efficient they will become.”

Geekosystem is a Mashable publishing partner that aims to unite all the tribes of geekdom under one common banner. This article is reprinted with the publisher’s permission.

Uncovering the tricks of nature’s ice-seeding bacteria


Like the Marvel Comics superhero Iceman, some bacteria have harnessed frozen water as a weapon. Species such as Pseudomonas syringae have special proteins embedded in their outer membranes that help ice crystals form, and they use them to trigger frost formation at warmer than normal temperatures on plants, later invading through the damaged tissue. When the bacteria die, many of the proteins are wafted up into the atmosphere, where they can alter the weather by seeding clouds and precipitation.

Now scientists from Germany have observed for the first time the step-by-step, microscopic-level action of P. syringae’s ice-nucleating proteins locking water molecules in place to form ice. The team will present their findings at the AVS 60th International Symposium and Exhibition, held Oct. 27 – Nov. 1 in Long Beach, Calif.

“Ice nucleating proteins are the most effective ice nucleators known,” said Tobias Weidner, leader of the surface protein group at the Max Planck Institute for Polymer Research. The proteins jump-start the process of ice crystal formation so well that dried ice-nucleating bacteria are often used as additives in snowmakers.

Although scientists discovered ice-nucleating proteins decades ago, little is known about how they actually work. Weidner and his team tackled the mystery with a surface-sensitive spectroscopic technique, deciphering patterns in the interaction between light and matter to visualize the freezing process in layers of material only a few molecules thick.

The researchers prepared a sample of fragments of P. syringae bacteria that they spread over water to form a surface film. As the temperature was lowered from room temperature to near freezing, the scientists probed the interface between the bacterial proteins and the water with two laser beams. The beams combined within the sample and a single beam was emitted back, carrying with it information about how the protein and water molecules move and interact.

By analyzing the returning light beam’s frequency components, Weidner and his colleagues found a surprisingly dramatic result: as the temperature approached zero degrees Celsius, the water molecules at the ice-nucleating protein surface suddenly became more ordered and the molecular motions became sluggish. They also found that thermal energy was very efficiently removed from the surrounding water. The results indicate that ice nucleating proteins might have a specific mechanism for heat removal and ordering water that is activated at low temperatures, Weidner said.

“We were very surprised by these results,” Weidner added. “When we first saw the dramatic increase of water order with lower temperatures we believed it was an artifact.” The movements of the water molecules near the ice-nucleating protein were very different from the way water had interacted with the many other proteins, lipids, carbohydrates, and other biomolecules the team had studied.

Recent studies have shown that large numbers of bacterial ice-nucleating proteins become airborne over areas like the Amazon rainforest and can spread around the globe. The proteins are among the most effective promoters of ice particle formation in the atmosphere, and have the potential to significantly influence weather patterns. Learning how P. syringae triggers frost could help teach researchers how ice particle formation occurs in the upper atmosphere.

“Understanding at the microscopic level – down to the interaction of specific protein sites with water molecules – the mechanism of protein-induced atmospheric ice formation will help us understand biogenic impacts on atmospheric processes and the climate,” Weidner said. For a more detailed picture of protein-water interactions it will also be important to combine their spectroscopic results with computer models, he said.

New device stores electricity on silicon chips.


Solar cells that produce electricity 24/7, not just when the sun is shining. Mobile phones with built-in power cells that recharge in seconds and work for weeks between charges.

These are just two of the possibilities raised by a novel supercapacitor design invented by material scientists at Vanderbilt University that is described in a paper published in the Oct. 22 issue of the journal Scientific Reports.

It is the first supercapacitor made out of silicon, so it can be built into a silicon chip along with the microelectronic circuitry that it powers. In fact, it should be possible to construct these power cells out of the excess silicon that exists in the current generation of solar cells, sensors, mobile phones and a variety of other electromechanical devices, providing a considerable cost savings.

“If you ask experts about making a supercapacitor out of silicon, they will tell you it is a crazy idea,” said Cary Pint, the assistant professor of mechanical engineering who headed the development. “But we’ve found an easy way to do it.”

Instead of storing energy in chemical reactions the way batteries do, “supercaps” store electricity by assembling ions on the surface of a porous material. As a result, they tend to charge and discharge in minutes, instead of hours, and operate for a few million cycles, instead of a few thousand cycles like batteries.

These properties have allowed commercial supercapacitors, which are made out of activated carbon, to capture a few niche markets, such as storing energy captured by regenerative braking systems on buses and electric vehicles and providing the bursts of power required to adjust the blades of giant wind turbines to changing wind conditions. Supercapacitors still lag behind the electrical energy storage capability of lithium-ion batteries, so they are too bulky to power most consumer devices. However, they have been catching up rapidly.


Research to improve the energy density of supercapacitors has focused on carbon-based nanomaterials like graphene and nanotubes. Because these devices store electrical charge on the surface of their electrodes, the way to increase their energy density is to increase the electrodes’ surface area, which means making surfaces filled with nanoscale ridges and pores.
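The surface-area argument follows from the basic capacitor relations: stored energy is E = ½CV², and capacitance C scales with electrode area. A rough sketch; the specific capacitance and cell voltage below are typical order-of-magnitude assumptions for illustration, not values from the article:

```python
def stored_energy_joules(area_cm2, specific_cap_f_per_cm2=20e-6, voltage=2.7):
    """Energy E = 1/2 * C * V^2, with C proportional to electrode area.

    specific_cap_f_per_cm2: ~20 uF/cm^2 is a typical double-layer figure
    (an assumption for illustration); 2.7 V is a common organic-electrolyte
    cell limit (also an assumption).
    """
    capacitance = specific_cap_f_per_cm2 * area_cm2
    return 0.5 * capacitance * voltage**2

# Etching nanoscale pores multiplies the accessible area: a 1 cm^2
# footprint with a 1000x internal surface enhancement stores roughly
# 1000x the energy of a flat electrode of the same footprint.
flat = stored_energy_joules(1.0)
porous = stored_energy_joules(1000.0)
print(porous / flat)  # energy scales linearly with area
```

This linear scaling is why the race for denser supercapacitors is, at bottom, a race for rougher, more porous electrode surfaces.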

“The big challenge for this approach is assembling the materials,” said Pint. “Constructing high-performance, functional devices out of nanoscale building blocks with any level of control has proven to be quite challenging, and when it is achieved it is difficult to repeat.”

So Pint and his research team – graduate students Landon Oakes, Andrew Westover and post-doctoral fellow Shahana Chatterjee – decided to take a radically different approach: using porous silicon, a material with a controllable and well-defined nanostructure made by electrochemically etching the surface of a silicon wafer.

This allowed them to create surfaces with optimal nanostructures for supercapacitor electrodes, but it left them with a major problem. Silicon is generally considered unsuitable for use in supercapacitors because it reacts readily with some of the chemicals in the electrolytes that provide the ions that store the electrical charge.

With experience in growing carbon nanostructures, Pint’s group decided to try to coat the porous silicon with carbon. “We had no idea what would happen,” said Pint. “Typically, researchers grow graphene from silicon-carbide materials at temperatures in excess of 1400 degrees Celsius. But at lower temperatures – 600 to 700 degrees Celsius – we certainly didn’t expect graphene-like material growth.”

When the researchers pulled the porous silicon out of the furnace, they found that it had turned from orange to purple or black. When they inspected it under a powerful scanning electron microscope they found that it looked nearly identical to the original material but it was coated by a layer of graphene a few nanometers thick.

When the researchers tested the coated material they found that it had chemically stabilized the silicon surface. When they used it to make supercapacitors, they found that the graphene coating improved energy densities by over two orders of magnitude compared to those made from uncoated porous silicon, and made them significantly better than commercial supercapacitors.

The graphene layer acts as an atomically thin protective coating. Pint and his group argue that this approach isn’t limited to graphene. “The ability to engineer surfaces with atomically thin layers of materials combined with the control achieved in designing porous materials opens opportunities for a number of different applications beyond energy storage,” he said.

“Despite the excellent device performance we achieved, our goal wasn’t to create devices with record performance,” said Pint. “It was to develop a road map for integrated energy storage. Silicon is an ideal material to focus on because it is the basis of so much of our modern technology and applications. In addition, most of the silicon in existing devices remains unused since it is very expensive and wasteful to produce thin wafers.”

Pint’s group is currently using this approach to develop energy storage cells that can be formed in the excess materials or on the unused back sides of solar cells and sensors. The supercapacitors would store the excess electricity that the solar cells generate at midday and release it when demand peaks in the afternoon.


“All the things that define us in a modern environment require electricity,” said Pint. “The more that we can integrate power storage into existing materials and devices, the more compact and efficient they will become.”

Future sea-level rise from Greenland’s main outlet glaciers in a warming climate.



Over the past decade, ice loss from the Greenland Ice Sheet increased as a result of both increased surface melting and ice discharge to the ocean. The latter is controlled by the acceleration of ice flow and subsequent thinning of fast-flowing marine-terminating outlet glaciers. Quantifying the future dynamic contribution of such glaciers to sea-level rise (SLR) remains a major challenge because outlet glacier dynamics are poorly understood. Here we present a glacier flow model that includes a fully dynamic treatment of marine termini. We use this model to simulate behaviour of four major marine-terminating outlet glaciers, which collectively drain about 22 per cent of the Greenland Ice Sheet. Using atmospheric and oceanic forcing from a mid-range future warming scenario that predicts warming by 2.8 degrees Celsius by 2100, we project a contribution of 19 to 30 millimetres to SLR from these glaciers by 2200. This contribution is largely (80 per cent) dynamic in origin and is caused by several episodic retreats past overdeepenings in outlet glacier troughs. After initial increases, however, dynamic losses from these four outlets remain relatively constant and contribute to SLR individually at rates of about 0.01 to 0.06 millimetres per year. These rates correspond to ice fluxes that are less than twice those of the late 1990s, well below previous upper bounds. For a more extreme future warming scenario (warming by 4.5 degrees Celsius by 2100), the projected losses increase by more than 50 per cent, producing a cumulative SLR of 29 to 49 millimetres by 2200.
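The abstract’s figures can be loosely cross-checked with back-of-envelope arithmetic. The sketch below assumes, purely for illustration, that the quoted per-glacier rates apply across the full two-century window, which the abstract only loosely implies:

```python
# Four outlet glaciers, each contributing 0.01-0.06 mm/yr of SLR
# after their initial episodic retreats.
n_glaciers = 4
rate_low, rate_high = 0.01, 0.06   # mm per year per glacier
years = 200                        # roughly 2000-2200

slr_low = n_glaciers * rate_low * years    # ~8 mm
slr_high = n_glaciers * rate_high * years  # ~48 mm

# The projected 19-30 mm (mid-range scenario) falls inside this
# crude 8-48 mm envelope, consistent with rates varying per glacier
# and over time.
print(slr_low, slr_high)
```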

Source: http://www.scientificamerican.com

Has the Time Come to Try Geoengineering?


Earth’s average temperature has warmed by 0.8 degree Celsius over the last 100 years or so. The reason is increasing concentrations of greenhouse gases, particularly carbon dioxide, in the atmosphere. The concentration of CO2 has now reached 394 parts per million in the air we breathe—and would be even higher, roughly 450 ppm, if the oceans weren’t absorbing a good deal of the CO2 we create by burning fossil fuels, clearing forests and the like.

The basic physics have been understood for 150 years. Global warming has been observed for at least 30 years. International negotiations to restrain greenhouse gas emissions have been ongoing since 1992. And yet, other than during economic recessions, emissions have steadily marched up. If global warming is a problem—one likely already producing weird weather, rising seas and extinctions, among other effects that could be considered dangerous—we are not addressing it.

So is it time to consider something a little more radical? Specifically, the family of ideas for restraining climate change captured under the rubric of geoengineering? Or, as the U.K.’s Royal Society puts it: the deliberate, large-scale manipulation of the planetary environment. As the guest editors of a special issue of Philosophical Transactions of the Royal Society A note: “Geoengineering is no longer the realm of science fiction.”

The science fiction-y schemes vary from proposals to block out the sun via mimicking volcanic eruptions to massive machines the size of power-plant cooling towers to strip CO2 from the air at an accelerated rate. Or maybe you prefer creating CO2-storing peatlands by raising water tables, or engineering Sphagnum moss to better fend off microbial decomposition when dead. While we’re at it, the crops that cover 11 percent of Earth’s continental surface could be engineered to reflect more sunlight, or the ocean near Antarctica could be fertilized with iron to promote diatom blooms that ultimately bury carbon at sea.

In the end, there is a set amount of greenhouse gases that can be dumped into the atmosphere if we want to avoid catastrophic climate change. Scientists’ best guess is that we can emit 1,000 petagrams, or 1 trillion metric tons, of carbon if we want to stay below 2 degrees Celsius of warming (less than the amount of warming that characterized the shift from the ice-ridden Pleistocene to the milder epoch that birthed human civilization known as the Holocene). We have already emitted more than half of that and will emit the rest of that limit within a few decades if we continue to burn fossil fuels, clear forests and such at anything like present rates.
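The carbon-budget arithmetic above can be checked in a few lines. A minimal sketch, assuming (these values are not in the article) that "more than half" means roughly 550 petagrams already emitted and that current emissions run on the order of 10 petagrams of carbon per year:

```python
# Back-of-envelope check of the trillion-tonne carbon budget.
BUDGET_PGC = 1000   # total cumulative budget for ~2 C of warming (article)
EMITTED_PGC = 550   # "more than half" already emitted -- assumed value
ANNUAL_PGC = 10     # assumed present-day emission rate, PgC per year

remaining = BUDGET_PGC - EMITTED_PGC
years_left = remaining / ANNUAL_PGC
print(f"Remaining budget: {remaining} PgC, ~{years_left:.0f} years at current rates")
```

The result, a few decades of headroom at current rates, matches the article's claim that the rest of the limit would be emitted "within a few decades."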

As climate modeler Ken Caldeira of Stanford University discusses in the September issue of Scientific American in his article “The Great Climate Experiment,” we are now effectively setting the temperature of the planet for the next several millennia.

If the world collectively fails to restrain pollution, then we might need to deploy geoengineering techniques in a hurry to prevent catastrophic climate change. So doesn’t it make sense to investigate the promise and perils of the various techniques? This is not a new idea—geoengineering hit President Lyndon Johnson’s desk in the 1960s along with a report on climate change that suggested he might deal with the problem by spreading reflective particles on the oceans—just a relatively unexplored one.

All this points to a more fundamental philosophical question about geoengineering, which, as the name implies, is global in scope: Who controls the thermostat? While greenhouse gas emissions are unlikely to turn Earth into Venus, technical remedies could be quite sufficient to induce another Ice Age. In fact, weather control was first explored as a weapon during the Cold War. The barriers to entry are relatively low: an island nation, say, with a battery of big guns could start shooting sulphates into the air to block sunlight and cool the climate until somebody stopped them. Or sulphates could be used regionally to stave off, say, a heat wave. Scientists have already begun the task of assessing which delivery method (existing aircraft or, perhaps, tethered balloons) and which particles might serve best (it’s not sulphate; it’s diamonds or, even better, the minerals you find in your sunblock!). Bonus: these other particles might let the sky stay blue rather than the hazy white expected from stratospheric sulphates, though the impacts of such particles falling out of the sky and covering the planet are unknown.

Such schemes have an apocalyptic feel and bring up images of Dr. Strangelove or other mad scientists. As one respondent to a survey of public attitudes toward geoengineering in England, Scotland and Wales in 2010 put it: “I don’t think you should mess about with the climate… I think that’s very dodgy to be honest.” Of course, we already are messing about with the climate. And that means the question that can’t be dodged is: What are we going to do about it?

Source: Scientific American