Today’s AI threat: More like nuclear winter than nuclear war


Last May, hundreds of leading figures in AI research and development signed a one-sentence statement declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” While ongoing advances in AI clearly demand urgent policy responses, recent attempts to equate AI with the sudden and extreme immediate effects of launching nuclear weapons rest on a misleadingly simple analogy—one that dates back to the early days of the Cold War and ignores important later developments in how nuclear threats are understood. Instead of an all-or-nothing thermonuclear war analogy, a more productive way to approach AI is as a disruption to global systems that more closely resembles the uncertain and complex cascades of a nuclear winter.

Over the last year and a half, headline-grabbing news has fed into the hype around the awe-inspiring potential capabilities of AI. However, while public commentators brace for the rise of the machine overlords, artificial un-intelligence is already kicking off chains of widespread societal disruption. AI-powered disinformation sows distrust, social media algorithms increase polarization, and mass-produced synthetic media degrade democratic engagement while undermining our shared sense of reality.

Uncritically equating acute nuclear attack effects and AI threats risks reproducing the same kind of all-or-nothing thinking that drove some of the most dangerous dynamics of the nuclear arms race. Drawing these analogies also unduly distracts from the dramatic damage that even a comparatively “small” nuclear war or “dumb” AI can cause to today’s interconnected social, ecological, and political systems. Rather than fear a future AI apocalypse, policymakers should recognize that the world is already living through something like the early days of an AI nuclear winter and develop effective regulatory frameworks that factor in how AI is already disrupting political, social, and ecological systems in unpredictable ways. Overemphasizing the speculative dangers of superintelligence (systems that exceed human intelligence) jeopardizes urgently needed efforts to regulate AI with a view to the systemic impacts of its actual and emerging capacities.

Nuclear risk revisited. In 1961, John F. Kennedy warned that “every man, woman and child lives under a nuclear sword of Damocles” based on the contemporary concern that global fallout from thermonuclear war could poison every living being. The Cuban Missile Crisis of October 1962 came within a hair’s breadth of bringing the sword down, elevating nuclear fear to an unprecedented pitch. That very same month, computer pioneer Irving J. Good said “the survival of man depends on the early construction of an ultraintelligent machine.” Such a machine would surpass human intelligence, and Good proposed that human beings stood poised on the cusp of unleashing a self-reinforcing artificial intelligence explosion that could transform human existence just as totally as thermonuclear war. “Ultraintelligence,” he noted, would possess the transcendent power to either solve all human problems or destroy all human life, becoming “the last invention that man need ever make.”

Over the years, this simple and compelling vision of a sudden and transformative AI apocalypse has persisted almost unchanged. Computer scientist Vernor Vinge rechristened Good’s “intelligence explosion” the singularity in the 1990s, further warning that if it cannot be averted or contained, AI could cause “the physical extinction of the human race.” Good’s misgivings finally went mainstream a half-century later with the publication of philosopher Nick Bostrom’s book Superintelligence, which warned of an impending runaway AI that could see “humanity deposed from its position as apex cogitator over the course of an hour or two”—a transformation so sudden and total that its only “precedent outside myth and religion” would be global thermonuclear war.

At the same time, while visions of extinction by AI explosion remained remarkably fixed, understandings of nuclear danger underwent a sea change. After realizing that the radiological risks feared in the 1960s had been slightly overstated, scientists began studying the global environmental effects of nuclear weapons in the 1970s. By the early 1980s, they started to realize that the global climatic impacts of nuclear war could be nearly as devastating as the radiological harm and would require far fewer weapons to trigger. The firestorms of burning cities would fill the atmosphere with soot and particles that would block sunlight, causing surface temperatures to plummet and setting off a self-reinforcing cascade of collapses across interconnected ecological, agricultural, industrial, and social systems. Subsequent studies have confirmed that the resulting “nuclear winter” would likely kill the majority of those alive today, while even a limited exchange of several hundred warheads between India and Pakistan could still kill as many as two billion by starvation in the gloom of a milder “nuclear autumn.”

Over the decades, advances in planetwide data collection and computer modeling transformed understandings of nuclear danger, replacing mistaken certainties about universal death by fallout with a growing awareness of the uncertain consequences that would follow from cascades of environmental and social breakdown. Similarly, the last several years have seen rapidly improving AI capabilities spread to transform whole networks of human relations—with already destabilizing political and ecological consequences. Deepfakes intended to influence voters erode trust, and digital assistants and chatbots affect humans’ capacity for cooperative behavior and empathy, while producing immense carbon footprints. Just as it would take only a tiny fraction of today’s nuclear arsenals to initiate a chain of global-scale catastrophic events, humans do not need to wait for a moment when “machines begin to set their own objectives” to experience the global, interconnected, and potentially catastrophic harms AI could cause.

Today’s AI products contribute to, and accelerate, global warming and resource scarcity, from the mining of minerals for computation hardware to the consumption of massive amounts of electricity and water. Notably, the environmental burden of AI gets short shrift from those worried about the technology’s existential threat, as the “Statement on AI Risk” lists AI alongside nuclear war and pandemics but does not include climate change as an existential issue. Beyond environmental harms, existing AI systems can be used for nefarious purposes, such as developing new toxins. OpenAI’s large language model interface ChatGPT has been successfully prompted to share bomb-making instructions and tricked into outlining the steps to engineer the next pandemic. Although these examples still require more human input than many realize, an AI system is reportedly generating targets in Gaza, and the race is on to deploy lethal autonomous weapons systems that could reset the balance of power in volatile regions across the globe. These examples show that it does not take an intelligence explosion to cause immense harm. The ability to leverage automation and machine efficiency to global catastrophic effect is already here.

Arms race to the bottom. More insidiously, the analogy between nuclear weapons and the infinite risk-reward calculus of an “artificial intelligence explosion” reproduces the dynamics of the arms race. There are just enough similarities between the rush for nuclear and AI superiority to encourage repeating the same mistakes, with the phrase “AI arms race” becoming a common refrain. One of the clearest similarities between these cases might be that, much as the nuclear arms race with the Soviet Union was driven by spurious bomber and missile “gaps,” some of today’s most heated arms-race rhetoric hinges on overhyping China’s prowess.

A closer inspection shows that nuclear and AI arms races differ fundamentally. While building nuclear arsenals requires accessing a finite supply of enriched fissile material, AI models consist of binary code that can be infinitely copied, rapidly deployed, and flexibly adopted. This radically transforms the scale of the proliferation hazard of AI, particularly because—in contrast to the strict governmental oversight of nuclear weapons—AI development is highly commercialized and privatized. The difference in proliferation between nuclear technology and AI matters for approaches to their governance. The former can generate both explosions and electric power, but its weaponization can be measured and monitored. Current benchmarks for AI development, by contrast, are too far removed from real-world applications’ effects to usefully assess potential harm. In contrast to nuclear technology, AI is not merely of a dual-use nature. Instead, the remarkable range of activities it can transform makes it a general-purpose, enabling technology like electricity.

Where the vast build-up of the nuclear arms race signaled each adversary’s resolve to potentially destroy the world but otherwise left it intact, the headlong race towards an artificial intelligence explosion promises to radically transform the world regardless of whether its ultimate destination is ever reached (or even proves reachable).

Neither all nor nothing. Disarmingly simple analogies between AI and immediate nuclear risks make not only for powerful rhetoric but also for good marketing. Whether or not developers genuinely believe that their products pose an existential threat, framing the near-term future of AI in such terms has granted executives of OpenAI, Anthropic, Microsoft, and Google access to high-level policy discussions at the White House, the US Senate, and the notoriously secretive Bilderberg conference. The result has been a flurry of promises by the tech firms to police themselves as they rush to release ever-more capable AI products. By encouraging the public to fixate on how these applications might end the world, AI CEOs divert attention from the urgent need to regulate the ways in which they are already actively unraveling the social, economic, and ecological support systems of billions in their drive to outrun their rivals and maximize market share.

While tech companies are stakeholders, they should not be the loudest—let alone the only—voices in discussions on AI governance. Policymakers must not be distracted by the specter of superintelligence; they must take action that goes beyond gathering voluntary commitments from AI developers. Existing guidance and directives are a good start, but policymakers need to push forward to develop binding and enforceable legislation addressing both current and potential AI harms. For example, the Bletchley Declaration resulting from the recent summit on AI safety held by the United Kingdom government broadens the range of concerns addressed. Going beyond immediate issues of data privacy, bias, and transparency, it also considers the potential effects of AI on political stability, democratic processes, and the environment. However, critics note that it remains a largely symbolic and highly elite-focused agreement without actual enforcement mechanisms.

Looking to the early nuclear era can provide valuable lessons for throttling the race for AI superiority, but these lessons are not directly translatable. The current and future globe-spanning effects of AI can only be addressed through international cooperation, most importantly between the United States and China as the two major antagonists. While the talks between presidents Joe Biden and Xi Jinping at the Asia-Pacific Economic Cooperation summit in San Francisco in mid-November did not yield specific agreements or commitments on AI regulation from either side, both parties recognized the need for international AI governance. They also showed willingness to establish formal bilateral cooperation on the issue.

However, because the proliferation hazards of AI fundamentally differ from those of nuclear weapons, limiting the arena to those with advanced AI programs, even only initially, is short-sighted. A framework of global AI governance is only as good as its weakest-governed element, so it must be stringent and inclusive from the start. Such an effort won’t be exhausted by a single international body modeled after the International Atomic Energy Agency. The general-purpose nature of AI technology calls for multiple regulatory mechanisms bound by common principles. In addition to bilateral dialogue, policymakers should closely follow and support multilateral efforts, such as the newly established High-level Advisory Board on Artificial Intelligence at the United Nations.

To be sure, refocusing on the already-unfolding complex harms of AI does not mean being complacent about the long-term and existential risks it might pose. That humans have narrowly avoided nuclear war since the Cuban Missile Crisis does not diminish the urgency of managing today’s evolving nuclear threat. Similarly, decades of unfulfilled expectations about the imminent creation of an “ultraintelligent machine” do not prove it is impossible. Should a viable path to achieving greater-than-human intelligence ever open, it will be far better to be prepared. The best way to make ready for any such eventuality begins by directly addressing the cascades of planet-wide harms that AI applications are already causing. Every step taken to mitigate ongoing damage and redirect AI development towards goals of greater justice, sustainability, and fairness will help to create societies that are better able to grapple with the unresolved legacies of nuclear weapons and the undiscovered horizons of AI.

China’s toxic air pollution resembles nuclear winter, say scientists


Air pollution now impeding photosynthesis and potentially wreaking havoc on country’s food supply, experts warn

Chinese scientists have warned that the country’s toxic air pollution is now so bad that it resembles a nuclear winter, slowing photosynthesis in plants – and potentially wreaking havoc on the country’s food supply.

Beijing and broad swaths of six northern provinces have spent the past week blanketed in a dense pea-soup smog that is not expected to abate until Thursday. Beijing’s concentration of PM 2.5 particles – those small enough to penetrate deep into the lungs and enter the bloodstream – hit 505 micrograms per cubic metre on Tuesday night, roughly 20 times the 25 micrograms per cubic metre that the World Health Organisation recommends as a safe level.

The worsening air pollution has already exacted a significant economic toll, grounding flights, closing highways and keeping tourists at home. On Monday 11,200 people visited Beijing’s Forbidden City, about a quarter of the site’s average daily draw.

He Dongxian, an associate professor at China Agricultural University's College of Water Resources and Civil Engineering, said new research suggested that if the smog persists, Chinese agriculture will suffer conditions "somewhat similar to a nuclear winter".

Buildings in the central business district in Guangzhou seen through the thick haze.

She has demonstrated that air pollutants adhere to greenhouse surfaces, cutting the amount of light inside by about 50% and severely impeding photosynthesis, the process by which plants convert light into life-sustaining chemical energy. She tested the hypothesis by growing one group of chilli and tomato seeds under artificial lab light, and another in a suburban Beijing greenhouse. In the lab, the seeds sprouted in 20 days; in the greenhouse, they took more than two months. "They will be lucky to live at all," He told the South China Morning Post newspaper.

She warned that if smoggy conditions persist, the country’s agricultural production could be seriously affected. “Now almost every farm is caught in a smog panic,” she said.

A farmer turns soil to plant crops near a state-owned lead smelter in Tianying that has made much of the land uninhabitable.

Early this month the Shanghai Academy of Social Sciences claimed in a report that Beijing's pollution made the city almost "uninhabitable for human beings". The Chinese government has repeatedly promised to address the problem, but enforcement remains patchy. In October, Beijing introduced a system of emergency measures to take effect if pollution levels remained hazardous for three days in a row, including closing schools, shutting some factories, and restricting the use of government cars.

People visiting the Olympic Park amid the thick haze in Beijing.

According to China's state newswire Xinhua, 147 industrial companies in Beijing have cut or suspended production. Yet schools remained open and government cars remained on the road. One person not put off by the smog was President Xi Jinping, who braved the pollution to make an unannounced visit to a trendy neighbourhood popular with tourists.

Dressed in a black jacket and trousers – and no facemask – Xi made a brief walkabout in Nanluoguxiang district last Thursday morning. The visit prompted approving coverage in Chinese news reports, but also mockery on social media sites. “Xi Jinping visits Beijing’s Nanluoguxiang amid the smog: breathing together, sharing the fate,” said a Xinhua headline.

Photos and shaky video footage apparently of Xi’s visit ricocheted around Chinese social media sites. “Why isn’t he wearing a facemask?” asked one Sina Weibo user. “Isn’t it bad for his health?”

This week Chinese media reported that a man in Shijiazhuang, the capital of Hebei province near Beijing, had sued the local environmental protection bureau for failing to rein in the smog. Li Guixin filed the lawsuit asking that the municipal environment protection bureau "perform its duty to control air pollution according to the law", the Yanzhao Metropolis Daily reported.

Li is also seeking compensation for the pollution. “Besides the threat to our health, we’ve also suffered economic losses, and these losses should be borne by the government and the environmental departments because the government is the recipient of corporate taxes, it is a beneficiary,” he told the Yanzhao Metropolis Daily.

Li's lawyer, Wu Yufen, confirmed the lawsuit but refused to comment further because of the sensitivity of the case. He said: "This is the first ever case of a citizen suing the government regarding the issue of air pollution. We're waiting for the judicial authority's response."

Diseased vegetables said to be caused by pollution from a chemical plant.

Li told the newspaper that he had bought an air purifier, masks and a treadmill, but none had helped him to overcome the pernicious health effects of the smog. He is seeking RMB 10,000 (£1,000) in compensation. "I want to show every citizen that we are real victims of this polluted air, which hurts us both from a health perspective and economically," he said.

Li Yan, a climate and energy expert at Greenpeace East Asia, said the case could bring exposure to polluted cities outside of Beijing, putting pressure on provincial officials to prioritise the problem. She said: "People … who live in Beijing are suffering from the polluted air, but we have the attention of both domestic and international media. Shijiazhuang's environmental problems are far more serious, and this case could bring Shijiazhuang the attention it has deserved for a long time."