No, You Shouldn’t Fear Nuclear Power


IN BRIEF
  • Michael Shellenberger, environmental policy expert and co-founder of the Breakthrough Institute, believes public fear of nuclear energy is actually hurting the environment.
  • Nuclear energy has low carbon emissions — about 12 grams of CO2 per kWh — lower than most power sources, renewables included.

There are many negative connotations associated with the phrase ‘nuclear energy.’ People fear it because of its potential for meltdowns, its waste products, and its association with weapons. However, Michael Shellenberger, environmental policy expert and co-founder of the Breakthrough Institute, believes that shouldn’t be the case.

Shellenberger sees nuclear energy as an underutilized (and safe) energy source. There’s no doubt that nuclear energy provides a lot of power. But nuclear energy is also clean: it emits about 12 grams of CO2 per kWh, according to data from the UN’s Intergovernmental Panel on Climate Change (IPCC) — lower than most power sources, renewables included.

Lately, nuclear technology has seen a boost from projects such as Bill Gates’ TerraPower. Engineers have also developed, and continue to refine, reactors that cannot melt down. Essentially, many of the fears people once held about nuclear energy are no longer issues today. Despite this, the negative associations persist.

In a TED Talk, Shellenberger describes how this fear is hurting the environment. He also explains why there’s no reason to fear an energy source that is potentially cheaper, more viable, and more efficient than the renewable alternatives.

Watch the video discussion: https://youtu.be/LZXUR4z2P9w

New Study Casts Doubt on the Future of Nuclear Power


While it’s been touted by some energy experts as a so-called “bridge” to help slash carbon emissions, a new study suggests that a commitment to nuclear power may in fact be a path towards climate failure.

For their study, researchers at the University of Sussex and the Vienna School of International Studies grouped European countries by levels of nuclear energy usage and plans, and compared their progress with part of the European Union’s 2020 Strategy.

That 10-year strategy, proposed in 2010, calls for reducing greenhouse gas emissions by at least 20 percent compared to 1990 levels and increasing the share of renewable energy in final energy consumption to 20 percent.

The researchers found that “progress in both carbon emissions reduction and in adoption of renewables appears to be inversely related to the strength of continuing nuclear commitments.”

For the study, the authors looked at three groupings. Group 1, which includes Denmark, Ireland, and Portugal, has no nuclear energy. Group 2, which counts Germany and Sweden among its members, has some continuing nuclear commitments but also plans to decommission existing nuclear plants. Group 3, meanwhile, includes countries like Hungary and the UK that plan to maintain current nuclear units or even expand nuclear capacity.

“With reference to reductions in carbon emissions and adoption of renewables, clear relationships emerge between patterns of achievement in these 2020 Strategy goals and the different groupings of nuclear use,” they wrote.

For non-nuclear Group 1 countries, emissions fell by an average of 6 percent, and renewable energy consumption rose by an average of 26 percent.

Group 2 had the highest average emissions reduction, 11 percent, and also boosted its renewable energy share to 19 percent.

Pro-nuclear Group 3, meanwhile, saw emissions rise by an average of 3 percent and had the smallest increase in renewable share: 16 percent.

“Looked at on its own, nuclear power is sometimes noisily propounded as an attractive response to climate change,” said Andy Stirling, professor of science and technology policy at the University of Sussex, in a media statement. “Yet if alternative options are rigorously compared, questions are raised about cost-effectiveness, timeliness, safety and security.”

“Looking in detail at historic trends and current patterns in Europe, this paper substantiates further doubts,” he continued. “By suppressing better ways to meet climate goals, evidence suggests entrenched commitments to nuclear power may actually be counterproductive.”

The new study focused on Europe. Benjamin Sovacool, professor of energy policy and director of the Sussex Energy Group at the University of Sussex, stated, “If nothing else, our paper casts doubt on the likelihood of a nuclear renaissance in the near-term, at least in Europe.”

Yet on the other side of the Atlantic, advocates of clean energy said the recent plan to close the last remaining nuclear power plant in California and replace it with renewable energy marked the “end of an atomic era” and could serve as “a clear blueprint for fighting climate change.”

Natural Resources Defense Council President Rhea Suh wrote of the proposal: “It proves we can cut our carbon footprint with energy efficiency and renewable power, even as our aging nuclear fleet nears retirement. And it strikes a blow against the central environmental challenge of our time, the climate change that threatens our very future.”

How safe is nuclear power? A statistical study suggests less than expected


After the Fukushima disaster, the authors analyzed all past core-melt accidents and estimated a failure rate of 1 per 3704 reactor years. This rate indicates that more than one such accident could occur somewhere in the world within the next decade. The authors also analyzed the role that learning from past accidents can play over time. This analysis showed few or no learning effects occurring, depending on the database used. Because the International Atomic Energy Agency (IAEA) has no publicly available list of nuclear accidents, the authors used data compiled by the Guardian newspaper and the energy researcher Benjamin Sovacool. The results suggest that there are likely to be more severe nuclear accidents than have been expected and support Charles Perrow’s “normal accidents” theory that nuclear power reactors cannot be operated without major accidents. However, a more detailed analysis of nuclear accident probabilities needs more transparency from the IAEA. Public support for nuclear power cannot currently be based on full knowledge simply because important information is not available.


In his essay, “A skeptic’s view of nuclear energy,” Princeton University nuclear expert Harold A. Feiveson writes that he is not anti-nuclear, and he lauds improvements in the operation and reliability of nuclear power plants in recent years as “striking.” However, he notes, “Even if the chance of a severe accident were, say, one in a million per reactor year, a future nuclear capacity of 1,000 reactors worldwide would be faced with a 1 percent chance of such an accident each 10-year period – low perhaps, but not negligible considering the consequences” (Feiveson 2009, 65).
The 2011 Fukushima disaster in Japan suggested once more that severe nuclear accidents could be even more frequent than safety studies had predicted and Feiveson had hoped. So we decided to estimate the probability of a severe accident – that is, a core-melt accident – by relating the number of past core-melt accidents to the total number of years reactors have been operating (i.e. “reactor years”).
This type of prediction often runs up against the argument that nuclear operators learn from the past. Therefore we also tried to account for any learning effects in our analysis. We restricted our analysis to accidents related to civil nuclear reactors used for power generation, as arguments about trade-offs for using nuclear technology differ depending on the application. And, because the International Atomic Energy Agency (IAEA) does not distribute comprehensive, long-term reports on nuclear incidents and accidents because of confidentiality agreements with the countries it works with, we have had to use alternative sources for information on nuclear accidents over time.
By our calculations, the overall probability of a core-melt accident in the next decade, in a world with 443 reactors, is almost 70%. (Because of statistical uncertainty, however, the probability could range from about 28% to roughly 95%.) The United States, with 104 reactors, has about a 50% probability of experiencing one core-melt accident within the next 25 years.1

Measuring core melts

In 1954 the Soviet Union connected the first nuclear power reactor to the grid; Calder Hall in England followed two years later. The number of reactors in the world then increased steadily until the mid-1980s. From then until 2011, the number grew only from about 420 to nearly 450. A more precise calculation uses IAEA data through 2005 (IAEA 2006, 46–51, 81), assuming the totals did not change significantly after 2005. Thus we estimate that there were 14,816 cumulative reactor years from 1954 until March 2011.
Since 1990, the IAEA has used a seven-level International Nuclear Event Scale (INES) to measure the severity of nuclear incidents and accidents (IAEA 2008). Two of the three reactor accidents at Fukushima rank at Level 7, as does Chernobyl. According to the IAEA treaty, “countries are strongly encouraged” to report any events “at Level 2 or above” or “events attracting international public interest” (IAEA 2009, 10). It is not possible to assign INES levels to all accidents prior to 1990.
In the literature there are slightly different definitions of a minor, major, or severe accident. We use as the indicator for a severe nuclear accident the melting of nuclear fuel within the reactor. These core-melt accidents are the ones we analyze further.
One further hurdle came from the IAEA itself. Despite its encouragement of countries to report nuclear accidents, the agency makes INES information public for only 1 year after publication. Thus while the number of reactors connected to the grid is well known (IAEA 2006, 81), information on accidents at nuclear sites is hard to get. We tried several times to acquire better data from the IAEA without success.2 As Rejane Spiegelberg Planer, a senior safety officer with the agency and an INES coordinator, informed one of the authors in an e-mail on 1 April 2011, “There is no publicly available list of events rated using INES.” We therefore collected our data from two publicly available lists of nuclear accidents, one published by the Guardian newspaper (Rogers 2011), the other in two papers by Benjamin K. Sovacool (2008, 2010) and in his book Contesting the Future of Nuclear Power (2011). The Guardian list contains 35 incidents and accidents, whereas Sovacool lists 99 major accidents.
Both the Guardian and Sovacool lists include the same eight core-melt accidents since 1952:

  1. Windscale, England, 1957: A fire ignites plutonium piles

  2. Simi Valley, California, 1959: A partial core melt takes place at the Santa Susana Field Laboratory’s Sodium Reactor Experiment

  3. Monroe, Michigan, 1966: The sodium cooling system of a demonstration breeder reactor causes partial core melt

  4. Dumfries, Scotland, 1967: Fuel rods catch fire and cause a partial core melt

  5. Lucens, Switzerland, 1969: The coolant system of an experimental reactor malfunctions

  6. Pennsylvania, 1979: Three Mile Island

  7. Soviet Union, 1986: Chernobyl

  8. Japan, 2011: Fukushima

We excluded from our analysis the Windscale military reactor accident in 1957 and three research reactor accidents (Simi Valley in 1959, Monroe in 1966, and Lucens in 1969). Finally, we counted the damage of three reactors in Fukushima as one accident because they were triggered by the same cause, a tsunami. This leaves four accidents with core melts in civil reactors for power generation.
Using simple statistics, the probability of a core-melt accident within 1 year of reactor operation is 4 in 14,816 reactor years, or 1 in 3704 reactor years. But this simplistic analysis is subject to a large degree of uncertainty. First, it assumes the absence of any learning effect, and that reactors in all countries have the same failure probability. Second, the estimated failure probability is subject to statistical error: One can conclude with only 95% confidence that the true failure probability for a core-melt accident is between 1 in 14,300 reactor years and 1 in 1450 reactor years. Thus the best estimate is 1 in 3704 reactor years.
Having established this, we can calculate the probability of at least one core melt for a given number of calendar years. Within the next 10 years, the probability of a core-melt accident in a world with 443 reactors is 69.8%. Because of the statistical uncertainty mentioned above, this value could range from 27.8% to 95.3%. The United States, with 104 reactors, can therefore expect one accident within the next 25 years with a probability of 50.4%.
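The calculation above can be reproduced in a few lines. This is only a minimal sketch of the simple model described in the text (not the authors' code): a constant failure rate of 4 core melts in 14,816 reactor years, with independent reactor years.

```python
def prob_at_least_one(reactors, years, rate_per_reactor_year=4 / 14816):
    """Probability of at least one core melt, assuming independent
    reactor years and a constant per-reactor-year failure rate
    (the paper's simple model, ignoring learning effects)."""
    reactor_years = reactors * years
    return 1 - (1 - rate_per_reactor_year) ** reactor_years

# World fleet: 443 reactors over the next 10 years -> ~69.8%
world = prob_at_least_one(443, 10)

# United States: 104 reactors over the next 25 years -> ~50.4%
us = prob_at_least_one(104, 25)
```

Substituting the 95% confidence bounds on the rate (1 in 14,300 and 1 in 1450 reactor years) for the default rate reproduces the quoted 27.8%–95.3% range for the world fleet.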

Did the reactor operators learn?

We also wanted to see whether accidents become less frequent with more operational experience. But simply analyzing the number of severe accidents against reactor years is not very illuminating because, luckily, these accidents are rather rare. So we examined the relationship between the cumulative number of all accidents, from severe to minor ones, and cumulative reactor years. The accident rate is then estimated as the ratio of cumulative number of accidents to cumulative reactor years. If the probability of an accident remained constant over time, then a graph of the above accident-rate estimates against reactor years would exhibit no trend, whereas a learning effect would result in a decreasing accident probability and the graph would exhibit a decreasing trend.
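As a sketch of that estimator, one can track the running ratio of cumulative accidents to cumulative reactor years and check whether it trends downward. The numbers below are purely illustrative (not the Guardian or Sovacool data); they mimic the pattern described next: a high early rate that then stabilizes near a constant.

```python
# Hypothetical inputs: cumulative reactor years and cumulative accident
# counts at a few points in time (illustrative numbers only).
cumulative_reactor_years = [100, 500, 2000, 8000, 14816]
cumulative_accidents = [1, 1, 2, 8, 15]

# Running accident-rate estimate: accidents per reactor year so far.
rates = [a / ry for a, ry in zip(cumulative_accidents, cumulative_reactor_years)]

# A rate that stabilizes around a constant (here roughly 1 in 1000
# reactor years) indicates no learning effect; a steadily decreasing
# rate would indicate learning.
for ry, r in zip(cumulative_reactor_years, rates):
    print(f"{ry:>6} reactor years: {r:.5f} accidents per reactor year")
```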
We began by plotting the data from the Guardian list, with a few exclusions.3 The graph shows a high accident rate at the beginning because of one accident in Russia in 1957. The accident rate then drops because the following years were accident-free. After around 500 reactor years, the plot appears to stabilize, varying around a constant value. This is confirmed by a detailed statistical analysis, which produces a probability for a (minor or major) accident in a nuclear power plant of about 1 in 1000 reactor years and shows no evidence of a learning effect.
An analysis of Sovacool’s more extensive data, however, promises more insight. Sovacool does not list his data according to INES levels and instead uses a different definition of a major accident: One that causes human deaths or more than $50,000 in damage, the same amount used by the US government to indicate a major accident (Sovacool 2010, 380).4 When plotted, Sovacool’s data shows an initial period with strong learning effects, followed by a remaining period with much weaker or even absent learning effect.
Using a generalized regression analysis, we further found some evidence of a fairly consistent rate of learning in the period from around 1962 to 2011, although the evidence to rule out “no learning effect” completely is weak. The data indicate a stronger learning effect in the first years of the nuclear age, but this effect is not significantly different from the later learning effect. If the initial and final learning rates did differ, then the estimated year when the learning rate changed would be 1961; but the data would also be consistent with a change year between 1957 and 1965.
Nevertheless, from 1962 to 2010 the probability of a minor or severe accident at a reactor decreased by a factor of 2.5 (from 10 accidents per 100 reactor years to 4 accidents per 100 reactor years), while the operational experience increased by a factor of 170.
Unfortunately, the most important ingredient for a reliable analysis of this kind would be comprehensive time-series data, which are filed at the IAEA but not available for the public. While we could only use Sovacool’s list with 94 events worldwide, Phillip Greenberg writes that “between 1990 and 1992 the US Nuclear Regulatory Commission received more than 6600 ‘Licensee Event Reports’ because US nuclear plants failed to operate as designed and 107 reports because of significant events (including safety system malfunctions and unplanned and immediate reactor shutdowns)” (Greenberg 1996, 130–131).
Furthermore, based on our regression analysis we calculated the expected numbers of accidents in each year and compared these with the actual numbers of accidents. The differences between these two sets of figures were consistent with what one would expect if all reactors had the same failure probability. If the reactors had different failure probabilities, then this would induce additional variation between the observed and expected numbers of accidents. Thus there is no indication that some reactors are less prone to failure than others.

Normal accidents and the need for more data

In his classic book Normal Accidents, Charles Perrow developed the theory that systems with tight coupling of, and complex interaction between, components and subsystems are inherently unsafe. He ranks nuclear power plants highest in both complexity and tightness of coupling, above even space missions and nuclear weapons systems (Perrow 1999, 327). And Scott Sagan adds: “…what I will call ‘normal accident theory,’ presents a much more pessimistic prediction: Serious accidents with complex high technologies are inevitable” (Sagan 1995, 13). Our statistical analysis supports this unsettling prediction.
In conclusion, the number of core-melt accidents that can be expected over time in nuclear power stations is larger than previously expected. To assess the risk of similar events occurring in the future, it is necessary to determine whether nuclear power operators learn from their experiences. Our work shows that it is possible to investigate such learning effects through statistical analysis. Until the IAEA makes the relevant data available, however, the full story of accident probability and learning effects will remain untold.
No potential conflict of interest was reported by the authors.

Notes

1. In the past, several studies have investigated the probability of a core melt using the probabilistic risk assessment (PRA) method. This determines probability prior to accidents by analyzing possible paths toward a severe accident, rather than using existing data to determine probability empirically. Two studies by the US Nuclear Regulatory Commission (1975, 1990) as well as a German government study (Hörtner 1980) examined seven different cases or reactors. Three calculations resulted in 1 accident in more than 200,000 reactor years, and a further three resulted in 1 accident in 11,000–25,000 reactor years. Only the result for the Zion reactor had an accident rate similar to ours, with 1 accident in 3000 years. After Chernobyl, Islam and Lindgren (1986, 691) published a short note in Nature in which, based on the known accidents (Three Mile Island and Chernobyl) and reactor years (approximately 4000) at the time, they concluded that “…the probability of having one accident every two decades is more than 95%.” Regarding PRA, they wrote: “Our view is that this method should be replaced by risk assessment using the observed data.” This sparked an intensive discussion of statistical issues in the following year (Edwards 1986; Schwartz 1986; Fröhner 1987; Chow and Oliver 1987; Edwards 1987); however, there was agreement on the substantive conclusions of Islam and Lindgren.

2. An October 5 2011 e-mail by an IAEA official to one of the authors read: “Please note that old NEWS reports are not made available by the IAEA Secretariat. This is so because the reports have been provided by participating INES countries under the condition that the reports be only publicly available on NEWS for a period of 12 months (formerly 6 months). This condition has been agreed among the participating countries to prevent inappropriate use of the information (such as trying to use the information as a basis for statistical analyses and comparisons of safety performance of participating countries…”.

3. We excluded three accidents, namely Ikitelli in 1999, Yanangio in 1999, and Fleurus in 2006, because they were related to medical use. We also excluded the 1952 research reactor accident in Chalk River, Ontario. That left 16 accidents of Level 2 or higher.

4. From Sovacool’s list of 99 nuclear accidents, we excluded five: Chalk River in 1952, Windscale in 1957, Simi Valley in 1959, Monroe in 1966, and Lucens in 1969.

Saving the Planet Requires Nuclear Power


Climate change activists can be deniers, too.

For years, climate-change activists have been ripping into skeptics for having closed minds. And for good reason. Generally speaking, the typical global-warming doubter takes the view that the convergence of evidence (http://www.vox.com/2015/12/11/9898098/climate-skeptics-consilience) from decades of peer-reviewed research by the world’s best scientific minds is a bunch of politically driven horse-puckey – but any random blog post or talk-radio factoid challenging that scientific consensus is so plainly true it’s not even worth checking. This is not what anyone would call an intellectually rigorous stance.

This does not mean that any given statement about climate change is irrefutable. Or that skeptics do not raise some good points. Or that scientists can never be wrong. It simply means that many climate-change skeptics don’t like to confront facts that contradict their cherished beliefs. (Who does?) So they seize on evidence that might confirm their beliefs instead.

But climate-change activists fall prey to this confirmation bias, too. And a lot of them seem to be suffering from it with regard to nuclear power.

If you truly believe global warming is the greatest threat facing human civilization, then you ought to consider nuclear power a godsend. Restricting carbon-dioxide emissions to levels that can keep global warming within two degrees Celsius is immensely easier with nuclear power, and perhaps impossible without it.

Some climate activists are quick to say so. The Intergovernmental Panel on Climate Change’s Fifth Assessment Report noted that global-warming mitigation scenarios anticipate at least a doubling, and perhaps a tripling, of nuclear power by 2050. The “stabilization of (greenhouse-gas concentrations) at a level consistent with (earlier agreements) requires a fundamental transformation of the energy supply system,” the IPCC says. It cites research suggesting the need for “the construction of 29 to 107 new nuclear plants per year,” depending on targets. The upper figure is “historically unprecedented.”

During the Paris climate talks last month, James Hansen—one of the godfathers of the climate-change movement—joined with three other prominent climate scientists to issue a statement explaining why “nuclear power paves the only viable path forward on climate change.” The “voluntary measures put on the table at Paris” are a “welcome step,” they wrote, but far from sufficient. “The climate issue is too important for us to delude ourselves with wishful thinking. Throwing tools such as nuclear out of the box constrains humanity’s options and makes climate mitigation more likely to fail.” In fact, they conclude, “nuclear will make the difference between the world missing crucial climate targets or achieving them.”

This provokes a fair amount of pushback. One piece in The Guardian went so far as to term such views “a new, strange form of denial.” The author of that piece, Naomi Oreskes, is a history professor at Harvard. (Apparently history profs now know more about the subject than the IPCC, too.) Oreskes never even attempts to refute the conclusion about the need for more nuclear power. She simply disparages it.

It’s easy to understand why. Many environmentalists consider nuclear power—pardon the technical jargon—really yucky. So they will argue that the world can ratchet back greenhouse-gas emissions without any nuclear power at all. There are entire campaigns built around selling the idea. You can visit their websites, which claim we can power the world with nothing but renewable-energy sources such as wind, water, and solar.

And on that point they are correct—in the same sense that it is correct to say you can run a mile in under four minutes. All you have to do is run four quarter-miles in under 60 seconds each, without stopping. Mission accomplished! And just as nothing in the laws of biology prevents you from running a four-minute mile, nothing in the laws of physics prevents powering the planet with renewables alone.

In the real world, there is a little more to it than that.

Acreage, for instance. Consider Dominion Virginia Power’s new gas-fired generation plant in Warren County, which can generate 1,329 megawatts of electricity—on a slab of land measuring only 39 acres. To generate that much electricity from sunlight, you would need 36,000 acres of solar panels. That’s 56 square miles. For comparison’s sake, the entire city of Richmond is 60 square miles.

Dominion’s North Anna nuclear-power station can produce up to 1,892 megawatts. To get that much energy from sunlight would require 65,000 acres of solar panels, or 101 square miles. That’s slightly bigger than the area of Charleston, South Carolina or Milwaukee, Wisconsin.

Dominion can generate slightly more than 24,000 megawatts of power all together. To get that from solar power alone would require more than 1,000 square miles of solar panels. That’s the equivalent of putting the District of Columbia (68.3 square miles) inside the commonwealth—15 times over. And while Dominion is Virginia’s biggest electricity supplier, it is not the only one.
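The land-use comparison reduces to unit conversion: 640 acres per square mile. A quick check of the figures above, taking the article's stated acreages as given (the scaling of the full fleet from North Anna's acres-per-megawatt ratio is my own illustrative extrapolation):

```python
ACRES_PER_SQ_MILE = 640

def acres_to_sq_miles(acres):
    return acres / ACRES_PER_SQ_MILE

# Warren County gas plant equivalent: 36,000 acres of solar panels
warren_solar = acres_to_sq_miles(36_000)  # ~56 sq mi (Richmond is ~60)

# North Anna nuclear station equivalent: 65,000 acres
north_anna_solar = acres_to_sq_miles(65_000)  # ~101 sq mi

# Dominion's full ~24,000 MW: scaling North Anna's ratio of acres per
# megawatt (65,000 / 1,892 ≈ 34) gives well over 1,000 square miles.
fleet_acres = (65_000 / 1_892) * 24_000
fleet_sq_miles = acres_to_sq_miles(fleet_acres)
```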

Granted, rooftop solar arrays and other forms of distributed generation would chip away at the need for dedicated real estate somewhat, but they can’t offer the economies of scale that industrial-scale solar plants can, and that would be necessary under any realistic transition scenario. And the issues with solar power don’t end there. In Virginia, solar facilities can provide reliable energy only about 25 percent of the time, because the sun isn’t shining strongly enough the rest of the time. For all practical purposes, this means that every megawatt of solar energy needs a megawatt of backup power from some other source, such as a natural gas-fired plant that can be switched on quickly when the clouds roll in. A quantum leap in battery technology might change that, of course. (Fingers crossed!) But hoping technological revolution will magically make a problem go away is not a sober strategy for dealing with climate change.

Wind energy can help, but wind confronts similar challenges. It’s an intermittent source of energy that takes up a gawdawful lot of space: anywhere from 30 to 141 acres per megawatt, according to the Union of Concerned Scientists, citing research by the National Renewable Energy Laboratory. And, as with solar generation, power companies can’t simply turn the wind on and off when demand for power spikes or dips. So wind also needs backup generation—which raises the question as to just how carbon-free wind energy backed up by, say, natural gas generation really is.

Keep in mind that the challenges outlined above pertain to current electricity demand. But only about 40 percent of U.S. greenhouse gas emissions come from electricity generation. Transportation accounts for another 34 percent. So switching from gasoline-fueled vehicles to electric vehicles will drive total electricity consumption much higher.

How much higher? Well, one gallon of gasoline equates to about 34 kilowatt-hours of energy. Average American gasoline consumption is roughly 392 gallons per person and 1,000 gallons per household.

This means switching one household’s gas-powered cars for electric ones would require about 34,000 more kilowatt-hours of electricity per year. Right now, the average residential utility customer uses less than 11,000 kilowatt-hours a year. In other words, dispensing with the internal-combustion engine would quadruple the typical household’s electricity consumption. If solar energy provided all that new juice, then Virginia alone might need something like 4,000 square miles of solar arrays—taking up an area almost three-fourths the size of Connecticut. (And that’s for Dominion customers only.)
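The household arithmetic behind that "quadruple" claim can be sketched in a few lines, using the article's round numbers and its one-for-one energy-equivalence assumption:

```python
KWH_PER_GALLON = 34  # rough energy equivalent of one gallon of gasoline
GALLONS_PER_HOUSEHOLD = 1_000  # annual gasoline use per household (article's figure)
BASELINE_KWH = 11_000  # current annual household electricity use (article's figure)

# Extra electricity needed if the household's driving goes electric,
# following the article's direct gallon-to-kWh substitution.
ev_kwh = KWH_PER_GALLON * GALLONS_PER_HOUSEHOLD  # 34,000 kWh/year

total_kwh = BASELINE_KWH + ev_kwh  # 45,000 kWh/year
multiplier = total_kwh / BASELINE_KWH  # ~4.1, i.e. roughly quadruple
```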

Note that so far, we haven’t even touched on issues such as environmental permitting—or cost. Environmentalists often point out that nuclear power plants are hugely expensive, and they are right about that. But the U.S. Energy Information Administration estimates that a quarter-century from now, the levelized cost of solar energy—that is, the total cost of all inputs over time—will remain roughly twice that of nuclear energy. Offshore wind will be another 50 percent more expensive on top of that.

For years, global-warming skeptics have accused global-warming activists of acting in bad faith. The activists don’t really want to save humanity from catastrophic climate change, the skeptics argue—they really just want to expand government’s reach and impose their own quasi-religious vision on the rest of society. Activists retort that this is nonsense: The fate of the planet hangs in the balance, and if mankind does not take radical steps soon, then we are all in extremely serious trouble. How those activists confront the need for more nuclear power could offer a good indication of who is telling the truth.

False promise of nuclear power


The need for costly upgrades post-Fukushima and for making the nuclear industry competitive, including by cutting back on generous government subsidies, underscores nuclear power’s dimming future.

New developments highlight the growing travails of the global nuclear-power industry. France — the “poster child” of atomic power — plans to cut its nuclear-generating capacity by a third by 2025 and focus instead on renewable sources, like its neighbours, Germany and Spain. As nuclear power becomes increasingly uneconomical at home because of skyrocketing costs, the U.S. and France are aggressively pushing exports, not just to India and China, but also to “nuclear newcomers,” such as the cash-laden oil sheikhdoms. Still, the bulk of the reactors under construction or planned worldwide are located in just four countries — China, Russia, South Korea and India.

Six decades after Lewis Strauss, chairman of the U.S. Atomic Energy Commission, claimed that nuclear energy would become “too cheap to meter,” nuclear power confronts an increasingly uncertain future, largely because of unfavourable economics. The International Energy Agency’s World Energy Outlook 2014, released last week, states: “Uncertainties continue to cloud the future for nuclear — government policy, public confidence, financing in liberalized markets, competitiveness versus other sources of generation, and the looming retirement of a large fleet of older plants.”

Heavily subsidy reliant

Nuclear power has the energy sector’s highest capital and water intensity and its longest plant-construction time frame, making it hardly attractive for private investors. Construction, including licensing approval, still averages almost a decade, as underscored by the reactors commissioned in the past decade. The key fact about nuclear power is that it is the world’s most subsidy-fattened energy industry, even as it generates the most dangerous wastes, whose safe disposal saddles future generations. Commercial reactors have been in operation for more than half a century, yet the industry still cannot stand on its own feet without major state support. Instead of the cost of nuclear power declining with the technology’s maturation — as is the case with other sources of energy — the costs have escalated multiple times.

In this light, nuclear power has inexorably been on a downward trajectory. The nuclear share of the world’s total electricity production reached its peak of 17 per cent in the late 1980s. Since then, it has been falling, and is currently estimated at about 13 per cent, even as new uranium discoveries have swelled global reserves. With proven reserves having grown by 12.5 per cent since just 2008, there is enough uranium to meet current demand for more than 100 years.

Yet, the worldwide aggregate installed capacity of just three renewables — wind power, solar power and biomass — has surpassed installed nuclear-generating capacity. In India and China, wind power output alone exceeds nuclear-generated electricity.

Fukushima’s impact

Before the 2011 Fukushima disaster, the global nuclear power industry, a powerful cartel of fewer than a dozen major state-owned or state-guided firms, had been trumpeting a global “nuclear renaissance.” This spiel was largely anchored in hope. The triple meltdown at Fukushima, however, has not only reopened old safety concerns but also thrown that renaissance into reverse. The dual imperative of costly post-Fukushima upgrades and of making the industry competitive, including by cutting back on munificent government subsidies, underscores nuclear power’s dimming future.

It is against this background that India’s itch to import high-priced reactors must be examined. To be sure, India should ramp up electricity production from all energy sources, and there is definitely a place for safe nuclear power in its energy mix. Indeed, the country’s domestic nuclear-power industry has done a fairly good job, both in delivering electricity at a price that is the envy of western firms and, as its newest indigenous reactors show, in beating the mean global plant-construction time frame.

India should actually be encouraging its industry to export its tested and reliable midsize reactor model, which is better suited to developing countries, given their grid limitations. Instead, Prime Minister Manmohan Singh’s government, after making India the world’s largest importer of conventional arms since 2006, set out to make it the world’s largest importer of nuclear power reactors as well: a double whammy for Indian taxpayers, already heavily burdened by the fact that India is the only major economy in Asia that is import-dependent rather than export-driven.

Critiquing India’s programme

To compound matters, the Singh government opted for major reactor imports without a competitive bidding process. It reserved a nuclear park each for four foreign firms (Areva of France, Westinghouse and GE of the U.S., and Atomstroyexport of Russia) to build multiple reactors at a single site. It then set out to acquire land from farmers and other residents, employing coercion in some cases.

Having undercut its leverage by dedicating a park to each foreign vendor, it entered into price negotiations. Because the imported reactors are to be operated by the Indian state, the foreign vendors have been freed from producing electricity at marketable rates. In other words, Indian taxpayers are to subsidise the high-priced electricity generated.

Westinghouse, GE and Areva also wish to shift the primary liability for any accident to the Indian taxpayer so that they have no downside risk but only profits to reap. If a Fukushima-type catastrophe were to strike India, it would seriously damage the Indian economy. A recent Osaka City University study has put Japan’s Fukushima-disaster bill at a whopping $105 billion.

To Dr. Singh’s discomfiture, three factors put a brake on his reactor-import plans: the exorbitant price of French- and U.S.-origin reactors, the accident-liability issue, and grass-roots opposition to the planned multi-reactor complexes. After Fukushima, the grass-roots attitude in India is that nuclear power is acceptable as long as the plant is located in someone else’s backyard, not one’s own. This attitude took a peculiar form at Kudankulam, in Tamil Nadu, where a protest movement suddenly flared just as the Russian-origin, twin-unit nuclear power plant was nearing completion.

India’s new nuclear plants, as in most other countries, are located in coastal regions so that these water-guzzling facilities can draw largely on seawater for their operations and not strain freshwater resources. But coastal areas are often not only heavily populated but also prime real estate. The risks that seaside reactors face from global-warming-induced natural disasters became evident more than six years before Fukushima, when the 2004 Indian Ocean tsunami inundated parts of the Madras Atomic Power Station. The reactor core could nevertheless be kept in a safe shutdown mode because its electrical systems had been installed on higher ground than the plant level.

One-sided

Dr. Singh invested so much political capital in the Indo-U.S. civil nuclear agreement that much of his first term was spent negotiating and consummating the deal. He never explained why he overruled the nuclear establishment and shut down the CIRUS research reactor, the source of much of India’s cumulative historic production of weapons-grade plutonium since the 1960s. In fact, CIRUS had been refurbished at a cost of millions of dollars and had been back in operation for barely two years when Dr. Singh succumbed to U.S. pressure and agreed to close it down.

Nevertheless, the nuclear accord has turned out to be a dud deal for India on energy but a roaring success for the U.S. in opening the door to major weapon sales — a development that has quietly made America the largest arms supplier to India. For the U.S., the deal from the beginning was more geostrategic in nature (designed to co-opt India as a quasi-ally) than centred on just energy.

Even if no differences had arisen over the accident-liability issue, the deal would still not have delivered a single operational nuclear power plant for more than a decade, for two reasons: the inflated price of western-origin commercial reactors and grass-roots opposition. Areva, Westinghouse and GE signed Memorandums of Understanding with the state-run Nuclear Power Corporation of India Limited (NPCIL) in 2009, but construction has yet to begin at any site.

India has offered Areva, with which negotiations are at an advanced stage, a power price of Rs.6.50 per kilowatt hour, twice the average price of electricity from indigenous reactors. But the state-owned French firm is still holding out for a higher price. If Kudankulam is any clue, work at the massive nuclear complexes at Jaitapur in Maharashtra (earmarked for Areva), Mithi Virdi in Gujarat (Westinghouse) and Kovvada in Andhra Pradesh (GE) is likely to run into grass-roots resistance. Indeed, if India wishes to boost its nuclear-generating capacity without paying through the nose, the better choice, given its new access to the world uranium market, would be an accelerated indigenous programme.

Globally, nuclear power is set to face increasing challenges due to its inability to compete with other energy sources in pricing. Another factor is how to manage the rising volumes of spent nuclear fuel in the absence of permanent disposal facilities. More fundamentally, without a breakthrough in fusion energy or greater commercial advances in the area that the U.S. has strived to block — breeder (and thorium) reactors — nuclear power is in no position to lead the world out of the fossil fuel age.

 

It’s show time for nuclear.


A growing number of environmentalists who once opposed nuclear power are now backing it as a source of energy that can significantly reduce the world’s reliance on CO2-emitting fossil fuels. That’s the point of the convincing feature-length documentary film Pandora’s Promise, released this week and now playing at cinemas in the U.S.

As I wrote in December, Pandora debuted at Robert Redford’s Sundance Film Festival. I was fortunate to see a screening recently in Chicago. I’ll deliver my thumbs-up review in a subsequent blog.

But today, I wanted to alert you to another upcoming cinematic homage to nuclear.

Get ready for the release later this year of “The Good Reactor” (at least that’s its title for now), from a pair of young Irish filmmakers, Frankie Fenton and Des Kelleher.

The film focuses on reactors powered by liquid thorium fuel. So-called “thorium molten salt reactors,” also known as “liquid fluoride thorium reactors,” augur great improvements over conventional solid-fuel uranium reactors in safety and efficiency.

As proponents like to say, molten salt reactors can’t melt down because, as liquid-fuel reactors, they are essentially “pre-melted.” If things go wrong, the fuel drains harmlessly into a holding tank. They leave less waste than today’s reactors do, and any would-be bomb maker would find it much more difficult to fashion a weapon from them. Oh – you can also mix spent fuel into the thorium, addressing the question of what to do with existing plutonium waste.

And get this: molten salt reactors run at much higher temperatures than today’s reactors, so heavy industry can use them as a clean source of industrial heat, replacing the fossil fuel furnaces they use today.

The Good Reactor will cover all this and more in a non-technical way intended to educate a general audience. It even presents level-headed debate for and against nuclear, although its pro-thorium nuclear message will be clear.

“It’s about new technology, and people’s attitudes – we want to give a proper voice to the nuclear discussion,” said Fenton, whom I spoke with via Skype this morning, and who believes that nuclear, along with renewables, will help mitigate the global warming consequences of CO2-spewing fossil fuels. In a promotional video, Kelleher says the film is about the “power of human creativity to solve enormous problems like climate change or an energy crisis.”

Fenton and Kelleher are hoping to wrap their final cut by the end of the summer, and then hit the film festivals and seek distribution through broadcasters, cinemas or the Internet.

Like many an aspiring filmmaker, the pair could use a little more moola to apply the finishing touches. They’re trying to raise a modest £40,000 ($63,000) in a Kickstarter campaign that ends soon, at 6 p.m. New York time on Friday, June 21.

 

Source: SmartPlanet