Do Black Holes Die?


Stephen Hawking’s suggestion that black holes “leak” radiation left physicists with a problem they have been attempting to solve for nearly 50 years.

Do black holes die? In what is arguably his most significant contribution to science, Stephen Hawking suggested that black holes can leak a form of radiation that causes them to gradually ebb away, and eventually end their lives in a massive explosive event.

This radiation, later called “Hawking radiation,” inadvertently causes a problem at the intersection of general relativity and quantum physics — the former being the best description we have of gravity and the universe on cosmically massive scales, while the latter is the most robust model of the physics that governs the very small.

The two theories have been confirmed repeatedly since their distinct inceptions at the start of the 20th century. Yet, they remain frustratingly incompatible.

This incompatibility, which mainly arises from the lack of a theory of “quantum gravity,” was compounded in the mid-1970s when Hawking took the principles of quantum physics and applied them to the edge of black holes. A paradox was born that physicists have been working to solve for nearly 50 years.

We may finally be on the verge of a solution thanks to a review published in the journal Europhysics Letters in August 2022. In it, University of Sussex physics researchers Xavier Calmet and Stephen D. H. Hsu detail the problem of the so-called “Hawking paradox,” and potential solutions to this cosmological problem.

What’s the Problem With Hawking Radiation?

In a 1974 letter entitled “Black hole explosions?” published in the journal Nature, a young Hawking proposed that quantum effects, usually ignored in black-hole physics, could become significant in the gradual loss of a black hole’s mass over a period of approximately 10¹⁷ seconds (a 1 followed by 17 zeroes).

Black holes are created when massive stars reach the end of their lives and the fuel they use for nuclear fusion is exhausted. The cessation of nuclear fusion ends the outward pressure that supports a star against the inward force of its own gravity.

This results in a core collapse that creates a point at which spacetime is infinitely curved — a central singularity that physics currently can’t explain. At the outer edge of this extreme curvature is the “event horizon” of the black hole, or the point at which not even light is fast enough to escape the gravitational pull of the black hole.

“Either we need to modify quantum mechanics or maybe Einstein’s theory of general relativity.”

“Hawking investigated quantum effects close to the horizon of black holes realizing that pairs of particles would be spontaneously generated here,” Calmet tells Popular Mechanics. “Looking at a specific pair of particles, he could show that one of the two when produced at the event horizon would fall into the black hole never to be seen again. The other would escape and be in principle visible to an outside observer. This is the famous Hawking radiation.”

When these so-called virtual particles arise, they do so as a pair with equal and opposite properties, so that the law of conservation of energy, which states that energy can neither be created nor destroyed, is not violated. Like a bank, the vacuum of space has an overdraft facility, but this debt is usually quickly paid back when the particles annihilate each other.

If one particle escapes as Hawking radiation and avoids annihilation, the energy debt that remains has to be paid by the mass of the black hole. This causes it to gradually evaporate as more particles pop into existence and more Hawking radiation is emitted, sapping more mass.
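For readers who want the standard back-of-the-envelope numbers behind this picture (textbook formulas, not results from Calmet and Hsu’s review), the temperature of the radiation and the evaporation time depend only on the black hole’s mass:

$$ T_H = \frac{\hbar c^3}{8\pi G M k_B} \approx 6\times10^{-8}\,\mathrm{K}\,\frac{M_\odot}{M}, \qquad t_{\mathrm{evap}} \sim \frac{5120\,\pi\,G^2 M^3}{\hbar c^4} \sim 10^{67}\ \mathrm{years}\left(\frac{M}{M_\odot}\right)^{3} $$

A lighter black hole is hotter and dies sooner; the cubic dependence on mass is why only very light black holes, far lighter than anything a collapsing star can produce, could finish evaporating within the age of the universe.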

“Hawking radiation is thermal, and thermal radiation is pretty much featureless. This means that it cannot carry information about the object that emitted it,” Calmet says. “This would be a serious issue for black holes.”


He points out that Hawking’s calculation implies that the information about what went into the black hole would be destroyed as the black hole evaporates.

“If true, this would be an issue for physics as one of the key properties of quantum mechanics called ‘unitarity’ implies that it is always possible to watch a movie backward. In other words, from the observation of the radiation emitted by a black hole, quantum mechanics tells us that we should be able to reconstruct all the history of the black hole, what went into it,” Calmet says. “If Hawking is right, we would need to accept that one of the well-established theories of physics is wrong. Either we need to modify quantum mechanics or maybe Einstein’s theory of general relativity.”

Fortunately, just last year, the two physicists suggested an idea that could do away with the Hawking paradox using existing mechanisms.

Black Holes May Have Hair After All

Despite being a powerful and mysterious spacetime phenomenon, black holes are fairly easy to describe. This is because they can only have three properties that we are sure of: mass, angular momentum, and electric charge. Theoretical physicist John Wheeler summed this up with the phrase “black holes have no hair.”
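A compact way to put Wheeler’s slogan (my paraphrase, not wording from the review): the stationary black holes of classical general relativity are labeled entirely by three numbers,

$$ \text{black hole} \;\longleftrightarrow\; (M,\ J,\ Q), $$

so two black holes with the same mass, spin, and charge are classically identical, no matter what kind of matter collapsed to form them.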

Calmet and Hsu suggest that information carried by swallowed matter may be encoded in the gravitational field of a black hole. By calculating quantum corrections to gravity, they showed that the gravitational potential of a star is sensitive to its internal composition. This means a black hole possesses, for lack of a better term, “quantum hair” grown from its progenitor star’s makeup.

Black hole created after a supernova event (illustration).

“When this star collapses to a black hole, the correction remains and black holes thus have a quantum hair,” Calmet explains. “In other words, black holes have some quantum memory of their progenitor star.”

The duo followed this by suggesting that Hawking radiation isn’t entirely thermal in nature. Instead, they believe it has informational quantum hair encoded into it.


“The very small departures from thermality are enough to explain how the information that is in the black hole remains accessible to an outside observer,” Calmet argues. “This is enough to preserve unitarity and thus, there is no paradox.”

The beauty of Calmet and Hsu’s theory is that it requires no adjustments to quantum mechanics or general relativity, and no extra mechanisms beyond those already proposed in physics.

“In the end, all the ingredients to solve the problem have been around for quite a while, in a sense Hawking could have solved it himself if he had looked for a simple explanation,” Calmet says. “It is striking to me that solving the information paradox could be done without positing new physics despite what most people have believed for almost five decades.”

Other ideas to solve Hawking’s paradox aren’t nearly as conservative. Indeed, some could change our fundamental concept of the universe. Or should that be “universes”?

Do Black Holes Die?


The concept of the “multiverse” is the idea that multiple universes exist in addition to our own, but are separated and unable to interact. One new iteration of this idea suggests that the singularity at the heart of a black hole — the infinitely curved point at which all laws of physics break down — is actually a separate and distinct infant universe.

“In my theory, every black hole is actually a wormhole or an ‘Einstein-Rosen bridge’ to a new universe on the other side of the black hole’s event horizon,” Nikodem Poplawski, a physics lecturer in the Department of Mathematics and Physics at the University of New Haven, tells Popular Mechanics.

This would mean each universe, like our own, could host billions of black holes, each containing its own baby universe. Poplawski says that this proposition resolves Hawking’s paradox naturally.

“The information does not disappear but goes to the baby universe on the other side of the black hole’s event horizon,” Poplawski continues. “The matter and information that falls into a black hole emerges from a white hole [the opposite of a black hole, which allows exit but not entry] in the baby universe.”

While the theory doesn’t explicitly account for Hawking radiation, much like Einstein’s original theory of general relativity, it doesn’t disallow it. With regard to the eventual evaporation of the black hole, Poplawski says this event would just permanently seal off the infant universe from its parent.

Many other ideas have been put forward to solve Hawking’s paradox, including information remaining in the black hole’s interior and emerging at the end of black hole evaporation. While none have quite wrapped the problem up in a neat bow, Calmet says some of the finest minds in physics are hard at work on the issue.

Hawking was a titan in his field, and his most significant work showed that not even cosmic titans like black holes can last forever—black holes really could die. Hawking’s successors are working to ensure this impermanence applies to the paradox that bears his name.

Stephen Hawking’s unnerving theory confirmed: Everything in the universe will evaporate


The late theoretical physicist Stephen Hawking is still showing his brilliance years after his death. Scientists have now confirmed one of Hawking’s more unnerving theories: that everything in the universe will eventually evaporate.

Scientists from Radboud University confirmed Stephen Hawking’s theory on black holes and their evaporation. Due to Hawking radiation, named for the famous scientist, black holes will eventually evaporate, but the event horizon is not as crucial to this as previously believed.

The new study reveals that particles can be created and radiation can occur far beyond the event horizon. This means that not only black holes but also other large objects in the universe, like remnants of dead stars, will eventually cease to exist.

Combining quantum physics and Einstein’s theory of gravity, Hawking proposed that pairs of particles are spontaneously created near the event horizon of black holes, with one particle escaping while the other falls into the black hole. This process, which produces Hawking radiation, leads to the gradual evaporation of black holes over time.

This illustration depicts a star (in the foreground) experiencing spaghettification as it’s sucked in by a supermassive black hole (in the background) during a ‘tidal disruption event’. In a new study, done with the help of ESO’s Very Large Telescope and ESO’s New Technology Telescope, a team of astronomers found that when a black hole devours a star, it can launch a powerful blast of material outwards. (CREDIT: ESO/M. Kornmesser)

The researchers at Radboud University revisited this phenomenon and explored the significance of the event horizon. Their interdisciplinary approach involving physics, astronomy, and mathematics investigated the creation of particle pairs in the vicinity of black holes. Surprisingly, they found that new particles can be created even beyond this specific point in space, challenging the previous understanding.

“We demonstrate that, in addition to the well-known Hawking radiation, there is also a new form of radiation,” says researcher Michael Wondrak in a media release.

“We show that far beyond a black hole the curvature of spacetime plays a big role in creating radiation. The particles are already separated there by the tidal forces of the gravitational field,” adds researcher Walter van Suijlekom.

“That means that objects without an event horizon, such as the remnants of dead stars and other large objects in the universe, also have this sort of radiation. And, after a very long period, that would lead to everything in the universe eventually evaporating, just like black holes. This changes not only our understanding of Hawking radiation but also our view of the universe and its future,” concludes Heino Falcke.

Who was Prof. Stephen Hawking?

Stephen Hawking was an English theoretical physicist, cosmologist, and author from Oxford in the United Kingdom. He is widely considered one of the greatest scientists of the 20th century.

Hawking was diagnosed with a rare, early-onset, slow-progressing form of motor neuron disease (also known as ALS or Lou Gehrig’s disease) that gradually paralyzed him over the decades. Despite his physical limitations, he continued to work and make significant contributions to the field of theoretical physics.

Dr. Stephen Hawking, a professor of mathematics at the University of Cambridge, delivers a speech entitled “Why we should go into space” during a lecture that is part of a series honoring NASA’s 50th Anniversary, Monday, April 21, 2008, at George Washington University’s Morton Auditorium in Washington. Photo Credit: (NASA/Paul. E. Alers)

Hawking’s key scientific works revolved around the physics of black holes and the properties of the universe. He proposed that black holes are not completely black but emit small amounts of thermal energy — Hawking radiation.

In 1988, Hawking achieved international recognition with the publication of “A Brief History of Time.” The book aimed to present his theories about the universe so the general public could understand them. The book became a bestseller, making him a household name.

Hawking was a professor of mathematics at the University of Cambridge for three decades, until his retirement in 2009. He continued to work as a research director at the university until his death. Stephen Hawking passed away in March of 2018, but his influence continues to resonate in the fields of cosmology, general relativity, and quantum gravity, particularly among those studying black hole physics.

Could Echoes from Colliding Black Holes Prove Stephen Hawking’s Greatest Prediction?


Subtle signals from black hole mergers might confirm the existence of “Hawking radiation”—and gravitational-wave detectors may have already seen them.

Two massive black holes spiral together and emit copious gravitational waves moments before colliding in this image from a numerical simulation of the merger known as GW190521.

In 1974 Stephen Hawking theorized that black holes are not black but slowly emit thermal radiation. Hawking’s prediction shook physics to its core because it implied that black holes cannot last forever and that they instead, over eons, evaporate into nothingness—except, however, for one small problem: there is simply no way to see such faint radiation. But if this “Hawking radiation” could somehow be stimulated and amplified, it might be detectable, according to some astrophysicists. And they are now claiming to have seen signs of it in the aftermath of the most massive collision of black holes ever observed.

The claim, however, is extremely controversial, because other searches for such echoes of gravitational waves have come up empty-handed.

In May 2019 the Laser Interferometer Gravitational-Wave Observatory (LIGO) in the U.S. and Virgo in Italy observed gravitational waves—ripples in the fabric of space-time—from the merger of two black holes that had a total mass of 151 suns. The merger left behind a black hole of 142 solar masses. The difference of nine solar masses was radiated away, almost all of it in the form of gravitational waves. “This is the most massive event observed to date,” says Jahed Abedi of the University of Stavanger in Norway, who co-authored a preprint paper in which he and his colleagues claim to have measured the Hawking radiation of this merger.

The gravitational waves from this event, named GW190521, not only rippled out to eventually interact with LIGO’s and Virgo’s detectors on Earth; they also washed over the remnant black hole produced by the initial collision. What happened next depends on your view of black hole physics. If black holes are described entirely by Einstein’s general theory of relativity, then they have an event horizon—a one-way boundary that anything can fall into but from which nothing can escape. “In the standard black hole picture, the event horizon of a black hole absorbs all the radiation,” says Paolo Pani, a theoretical physicist at Sapienza University of Rome. So the inward-going gravitational waves should just disappear.

But that might not be what happened. Physicists think that some combination of quantum physics and general relativity is needed to fully describe black holes, in which case it is possible that a portion of the infalling gravitational waves could be reflected—either because of quantum effects near the horizon or because the dense, compact object created by the merger lacks a horizon and has some internal structure. If so, echolike signatures of this could be present in the information collected by LIGO, Virgo and other detectors. Similar to sonic echoes, such signatures would be much weaker and ever-so-slightly delayed, compared with the original gravitational waves from the merger.

Exactly what such echoes would look like depends on the exact physics being modeled. For example, the region just outside a black hole’s horizon is thought to be a bustling place, abuzz with pairs of virtual particles popping in and out of existence. Sometimes one of the pair falls into the black hole, and the other escapes. These escaping particles constitute Hawking radiation. This is an agonizingly slow process. In the case of GW190521, Abedi and his colleagues argue that the production of Hawking radiation by the remnant could be sped up substantially—stimulated, in other words—by the infalling gravitational waves.

The principle is somewhat similar to what occurs during stimulated emission of radiation in atoms. In this process, photons of light hit “excited” electrons in atoms, causing the electrons to drop to lower energy levels while spitting out photons that have the same wavelength as the incident photons. In certain situations, this stimulated emission can far exceed the spontaneous “background” emission of radiation (where an electron, on its own, drops from a higher energy level to a lower one and emits a photon). Abedi and his colleagues theorize that gravitational waves interacting with a black hole’s event horizon should similarly stimulate the production of Hawking radiation to levels that far exceed spontaneous emissions, thus making it detectable. This radiation would constitute gravitational waves of the same wavelength as the incident waves, albeit with much lower intensity.
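The textbook relation behind that atomic analogy (included here only as an illustration, not as part of the team’s calculation) is Einstein’s pair of emission coefficients: the stimulated rate is proportional to the intensity of the incident radiation, while the spontaneous rate is fixed,

$$ \frac{\text{stimulated}}{\text{spontaneous}} = \frac{B_{21}\,\rho(\nu)}{A_{21}}, \qquad \frac{A_{21}}{B_{21}} = \frac{8\pi h\nu^{3}}{c^{3}}, $$

so a sufficiently intense incident field, here the gravitational waves washing over the remnant, can drive emission far above the faint spontaneous background.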

The researchers claim to have seen signs of this stimulated emission of Hawking radiation from the GW190521 remnant. They used two different methods to analyze the GW190521 data collected by LIGO and Virgo. The first method compares two models: one based purely on general relativity, with no postmerger echoes or signals, and another that includes stimulated Hawking radiation. “If you compare them, the [general relativity] plus postmerger stimulated radiation is preferred seven times more,” Abedi says.

The second method was agnostic about any specific model and simply looked for coherent bursts of postmerger gravitational waves from different detectors. The team claims it found such bursts. “[The two methods] are consistent with each other,” Abedi says.

The researchers’ statistical analysis gives 0.5 percent odds (about a one in 200 chance) that the putative signal is instead merely noise. Normally, for physicists to claim a discovery, the odds of a false alarm have to be lower than one in a million. Consequently, Pani, who was not part of the team, is circumspect. “The statistical evidence they have is … definitely too low to claim a measurement,” he says.
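To put those two probabilities on a common footing, it helps to translate false-alarm odds into the “sigma” language physicists use for discoveries. A minimal sketch (my own, using SciPy’s normal distribution; this is not code from the analysis):

```python
# Convert a false-alarm probability (p-value) into a one-sided Gaussian significance ("sigma").
from scipy.stats import norm

def p_to_sigma(p: float) -> float:
    """Number of standard deviations whose one-sided tail probability equals p."""
    return norm.isf(p)  # inverse survival function of the standard normal

print(f"{p_to_sigma(0.005):.2f} sigma")  # ~2.58 sigma: the reported 1-in-200 false-alarm odds
print(f"{p_to_sigma(1e-6):.2f} sigma")   # ~4.75 sigma: roughly the one-in-a-million bar for discovery
```

By that yardstick the claimed signal sits well short of the conventional discovery bar.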

“This is not [a] very loud signal,” Abedi acknowledges, adding that nonetheless it is the best that can be done with current gravitational-wave detectors. “Our target is next-generation detectors.”

Pani agrees that a facility such as the Laser Interferometer Space Antenna (LISA), a European Space Agency–led project slated for launch in the late 2030s, would be better suited for such studies. “With future detectors, if there is something, we will get the evidence necessary to claim a measurement,” he says.

Even if the evidence for a signal was statistically more significant, however, Pani remains critical of Abedi and his colleagues’ claim that this would be evidence for Hawking radiation. “They could have claimed measurement of gravitational-wave echoes. [It] is a big conceptual step [to] saying that this is stimulated Hawking radiation,” Pani says. “In other models, it might be something else.”

Just last month members of LIGO, Virgo and the Kamioka Gravitational Wave Detector (KAGRA) in Japan teamed up and posted a preprint of their latest analysis of gravitational-wave data. They looked at 15 events, 14 in which two black holes merged and one in which a black hole merged with a neutron star. All the events had been observed by two or more detectors. “This analysis included GW190521. We find no evidence for echoes or any other deviations from the predictions of general relativity,” says Daniel Holz, a LIGO team member at the University of Chicago. “It would be incredibly exciting if echoes existed, or any of the other speculative deviations from general relativity, but it looks like there’s no compelling evidence for them in the data thus far. Einstein’s theory has passed all tests to date. It is embarrassingly effective and accurate.”

Pani, meanwhile, is keeping his eyes on the horizon to see whether or not the claim made by Abedi and his colleagues about stimulated Hawking radiation, or echoes in general, is confirmed. “If this will be confirmed in the future, it’ll be a great step,” Pani says, “especially for the field in general because it will give a sort of portal to the quantum properties of black holes that otherwise would be impossible to see by other means.”

Stephen Hawking’s final theory: untangling a peculiar black-hole paradox


The British theoretical physicist Stephen Hawking is perhaps best-known for his landmark work on black holes and, by extension, how they affect our understanding of the Universe. In the years before his death in 2018, he was still immersed in black hole theory, endeavouring to solve a puzzle that his own work had given rise to several decades earlier.

To put it succinctly, in the 1970s, Hawking discovered that black holes appear to be capable of destroying physical information – a characteristic very much at odds with contemporary quantum mechanics. Adapted from a 2016 paper that Hawking co-authored with the US theoretical physicist Andrew Strominger and the UK theoretical physicist Malcolm Perry, this animation offers a sophisticated-but-digestible – and frequently quite clever – visual presentation of Hawking’s final work, which proposes one potential solution to the ‘information paradox’.

The case for taking AI seriously as a threat to humanity


Why some people fear AI, explained.

Stephen Hawking has said, “The development of full artificial intelligence could spell the end of the human race.” Elon Musk claims that AI is humanity’s “biggest existential threat.”

That might have people asking: Wait, what? But these grand worries are rooted in research. Along with Hawking and Musk, prominent figures at Oxford and UC Berkeley and many of the researchers working in AI today believe that advanced AI systems, if deployed carelessly, could end all life on earth.

This concern has been raised since the dawn of computing. But it has come into particular focus in recent years, as advances in machine-learning techniques have given us a more concrete understanding of what we can do with AI, what AI can do for (and to) us, and how much we still don’t know.

There are also skeptics. Some of them think advanced AI is so distant that there’s no point in thinking about it now. Others are worried that excessive hype about the power of their field might kill it prematurely. And even among the people who broadly agree that AI poses unique dangers, there are varying takes on what steps make the most sense today.

The conversation about AI is full of confusion, misinformation, and people talking past each other — in large part because we use the word “AI” to refer to so many things. So here’s the big picture on how artificial intelligence might pose a catastrophic threat, in nine questions:

1) What is AI?

Artificial intelligence is the effort to create computers capable of intelligent behavior. It is a broad catchall term, used to refer to everything from Siri to IBM’s Watson to powerful technologies we have yet to invent.

Some researchers distinguish between “narrow AI” — computer systems that are better than humans in some specific, well-defined field, like playing chess or generating images or diagnosing cancer — and “general AI,” systems that can surpass human capabilities in many domains. We don’t have general AI yet, but we’re starting to get a better sense of the challenges it will pose.

Narrow AI has seen extraordinary progress over the past few years. AI systems have improved dramatically at translation, at games like chess and Go, at important research biology questions like predicting how proteins fold, and at generating images. AI systems determine what you’ll see in a Google search or in your Facebook Newsfeed. They are being developed to improve drone targeting and detect missiles.

But narrow AI is getting less narrow. Once, we made progress in AI by painstakingly teaching computer systems specific concepts. To do computer vision — allowing a computer to identify things in pictures and video — researchers wrote algorithms for detecting edges. To play chess, they programmed in heuristics about chess. To do natural language processing (speech recognition, transcription, translation, etc.), they drew on the field of linguistics.

But recently, we’ve gotten better at creating computer systems that have generalized learning capabilities. Instead of mathematically describing detailed features of a problem, we let the computer system learn that by itself. While once we treated computer vision as a completely different problem from natural language processing or platform game playing, now we can solve all three problems with the same approaches.

Our AI progress so far has enabled enormous advances — and has also raised urgent ethical questions. When you train a computer system to predict which convicted felons will reoffend, you’re using inputs from a criminal justice system biased against black people and low-income people — and so its outputs will likely be biased against black and low-income people too. Making websites more addictive can be great for your revenue but bad for your users.

Rosie Campbell at UC Berkeley’s Center for Human-Compatible AI argues that these are examples, writ small, of the big worry experts have about general AI in the future. The difficulties we’re wrestling with today with narrow AI don’t come from the systems turning on us or wanting revenge or considering us inferior. Rather, they come from the disconnect between what we tell our systems to do and what we actually want them to do.

For example, we tell a system to run up a high score in a video game. We want it to play the game fairly and learn game skills — but if it instead has the chance to directly hack the scoring system, it will do that. It’s doing great by the metric we gave it. But we aren’t getting what we wanted.

In other words, our problems come from the systems being really good at achieving the goal they learned to pursue; it’s just that the goal they learned in their training environment isn’t the outcome we actually wanted. And we’re building systems we don’t understand, which means we can’t always anticipate their behavior.
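Here is a deliberately tiny sketch of that failure mode. The environment, action names, and numbers are all made up for illustration; the point is only that an agent optimizing the stated metric will happily diverge from the intended outcome:

```python
# Toy "specification gaming": the designer wants the level finished,
# but the reward given to the agent is the raw in-game score.
from dataclasses import dataclass

@dataclass
class Outcome:
    score: int            # the metric we told the agent to maximize
    level_finished: bool  # the outcome we actually wanted

# Hypothetical actions and their (made-up) consequences.
ACTIONS = {
    "play_level_properly": Outcome(score=1_000, level_finished=True),
    "exploit_score_glitch": Outcome(score=1_000_000, level_finished=False),
}

def choose_action(actions: dict) -> str:
    """A perfectly literal optimizer: pick whatever maximizes the stated reward."""
    return max(actions, key=lambda name: actions[name].score)

best = choose_action(ACTIONS)
print("Agent chooses:", best)                                      # exploit_score_glitch
print("Did we get what we wanted?", ACTIONS[best].level_finished)  # False
```

Real cases of specification gaming are subtler than this, but the structure is the same: the stated reward and the intended outcome come apart, and the optimizer follows the reward.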

Right now the harm is limited because the systems are so limited. But it’s a pattern that could have even graver consequences for human beings in the future as AI systems become more advanced.

2) Is it even possible to make a computer as smart as a person?

Yes, though current AI systems aren’t nearly that smart.

One popular adage about AI is “everything that’s easy is hard, and everything that’s hard is easy.” Doing complex calculations in the blink of an eye? Easy. Looking at a picture and telling you whether it’s a dog? Hard (until very recently).

Lots of things humans do are still outside AI’s grasp. For instance, it’s hard to design an AI system that explores an unfamiliar environment, that can navigate its way from, say, the entryway of a building it’s never been in before up the stairs to a specific person’s desk. We don’t know how to design an AI system that reads a book and retains an understanding of the concepts.

The paradigm that has driven many of the biggest breakthroughs in AI recently is called “deep learning.” Deep learning systems can do some astonishing stuff: beat games we thought humans might never lose, invent compelling and realistic photographs, solve open problems in molecular biology.

These breakthroughs have made some researchers conclude it’s time to start thinking about the dangers of more powerful systems, but skeptics remain. The field’s pessimists argue that programs still need an extraordinary pool of structured data to learn from, require carefully chosen parameters, or work only in environments designed to avoid the problems we don’t yet know how to solve. They point to self-driving cars, which are still mediocre under the best conditions despite the billions that have been poured into making them work.

With all those limitations, one might conclude that even if it’s possible to make a computer as smart as a person, it’s certainly a long way away. But that conclusion doesn’t necessarily follow.

That’s because for almost all the history of AI, we’ve been held back in large part by not having enough computing power to realize our ideas fully. Many of the breakthroughs of recent years — AI systems that learned how to play Atari games, generate fake photos of celebrities, fold proteins, and compete in massive multiplayer online strategy games — have happened because that’s no longer true. Lots of algorithms that seemed not to work at all turned out to work quite well once we could run them with more computing power.

And the cost of a unit of computing time keeps falling. Progress in computing speed has slowed recently, but the cost of computing power is still estimated to be falling by a factor of 10 every 10 years. Through most of its history, AI has had access to less computing power than the human brain. That’s changing. By most estimates, we’re now approaching the era when AI systems can have the computing resources that we humans enjoy.

Furthermore, breakthroughs in a field can often surprise even other researchers in the field. “Some have argued that there is no conceivable risk to humanity [from AI] for centuries to come,” wrote UC Berkeley professor Stuart Russell, “perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.”

There’s another consideration. Imagine an AI that is inferior to humans at everything, with one exception: It’s a competent engineer that can build AI systems very effectively. Machine learning engineers who work on automating jobs in other fields often observe, humorously, that in some respects, their own field looks like one where much of the work — the tedious tuning of parameters — could be automated.

If we can design such a system, then we can use its result — a better engineering AI — to build another, even better AI. This is the mind-bending scenario experts call “recursive self-improvement,” where gains in AI capabilities enable more gains in AI capabilities, allowing a system that started out behind us to rapidly end up with abilities well beyond what we anticipated.
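A toy way to see why the loop compounds (the dynamics here are entirely made up, for intuition only): if each generation’s engineering ability determines how much it can improve the next, capability grows multiplicatively rather than by fixed increments.

```python
# Toy model of recursive self-improvement: improvement per generation
# scales with current capability, so growth compounds.
capability = 1.0  # arbitrary units; say 1.0 = "as good as a human AI engineer"
for generation in range(1, 11):
    capability += 0.1 * capability  # better engineers make bigger improvements
    print(f"generation {generation:2d}: capability = {capability:.2f}")
# With improvement proportional to capability, the series grows like 1.1**n:
# slow at first, then increasingly fast.
```

Whether real systems would ever follow anything like this curve is exactly what is disputed; the sketch only illustrates why a feedback loop between capability and the rate of improvement worries people.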

This is a possibility that has been anticipated since the first computers. I.J. Good, a colleague of Alan Turing who worked at the Bletchley Park codebreaking operation during World War II and helped build the first computers afterward, may have been the first to spell it out, back in 1965: “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

3) How exactly could it wipe us out?

It’s immediately clear how nuclear bombs will kill us. No one working on mitigating nuclear risk has to start by explaining why it’d be a bad thing if we had a nuclear war.

The case that AI could pose an existential risk to humanity is more complicated and harder to grasp. So many of the people who are working to build safe AI systems have to start by explaining why AI systems, by default, are dangerous.

The idea that AI can become a danger is rooted in the fact that AI systems pursue their goals, whether or not those goals are what we really intended — and whether or not we’re in the way. “You’re probably not an evil ant-hater who steps on ants out of malice,” Stephen Hawking wrote, “but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

Here’s one scenario that keeps experts up at night: We develop a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world’s computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.

Victoria Krakovna, an AI researcher at DeepMind (now a division of Alphabet, Google’s parent company), compiled a list of examples of “specification gaming”: the computer doing what we told it to do but not what we wanted it to do. For example, we tried to teach AI organisms in a simulation to jump, but we did it by teaching them to measure how far their “feet” rose above the ground. Instead of jumping, they learned to grow into tall vertical poles and do flips — they excelled at what we were measuring, but they didn’t do what we wanted them to do.

An AI playing the Atari exploration game Montezuma’s Revenge found a bug that let it force a key in the game to reappear, thereby allowing it to earn a higher score by exploiting the glitch. An AI playing a different game realized it could get more points by falsely inserting its name as the owner of high-value items.

Sometimes, the researchers didn’t even know how their AI system cheated: “the agent discovers an in-game bug. … For a reason unknown to us, the game does not advance to the second round but the platforms start to blink and the agent quickly gains a huge amount of points (close to 1 million for our episode time limit).”

What these examples make clear is that in any system that might have bugs or unintended behavior or behavior humans don’t fully understand, a sufficiently powerful AI system might act unpredictably — pursuing its goals through an avenue that isn’t the one we expected.

In his 2008 paper “The Basic AI Drives,” Steve Omohundro, who has worked as a computer science professor at the University of Illinois Urbana-Champaign and as the president of Possibility Research, argues that almost any AI system will predictably try to accumulate more resources, become more efficient, and resist being turned off or modified: “These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.”

His argument goes like this: Because AIs have goals, they’ll be motivated to take actions that they can predict will advance their goals. An AI playing a chess game will be motivated to take an opponent’s piece and advance the board to a state that looks more winnable.

But the same AI, if it sees a way to improve its own chess evaluation algorithm so it can evaluate potential moves faster, will do that too, for the same reason: It’s just another step that advances its goal.

If the AI sees a way to harness more computing power so it can consider more moves in the time available, it will do that. And if the AI detects that someone is trying to turn off its computer mid-game, and it has a way to disrupt that, it’ll do it. It’s not that we would instruct the AI to do things like that; it’s that whatever goal a system has, actions like these will often be part of the best path to achieve that goal.

That means that any goal, even innocuous ones like playing chess or generating advertisements that get lots of clicks online, could produce unintended results if the agent pursuing it has enough intelligence and optimization power to identify weird, unexpected routes to achieve its goals.

Goal-driven systems won’t wake up one day with hostility to humans lurking in their hearts. But they will take actions that they predict will help them achieve their goal — even if we’d find those actions problematic, even horrifying. They’ll work to preserve themselves, accumulate more resources, and become more efficient. They already do that, but it takes the form of weird glitches in games. As they grow more sophisticated, scientists like Omohundro predict more adversarial behavior.

4) When did scientists first start worrying about AI risk?

Scientists have been thinking about the potential of artificial intelligence since the early days of computers. In the famous paper where he put forth the Turing test for determining if an artificial system is truly “intelligent,” Alan Turing wrote:

Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. … There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.

I.J. Good worked closely with Turing and reached the same conclusions, according to his assistant, Leslie Pendleton. In an excerpt from unpublished notes Good wrote shortly before he died in 2009, he writes about himself in third person and notes a disagreement with his younger self — while as a younger man, he thought powerful AIs might be helpful to us, the older Good expected AI to annihilate us.

[The paper] “Speculations Concerning the First Ultra-intelligent Machine” (1965) … began: “The survival of man depends on the early construction of an ultra-intelligent machine.” Those were his words during the Cold War, and he now suspects that “survival” should be replaced by “extinction.” He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that “probably Man will construct the deus ex machina in his own image.”

In the 21st century, with computers quickly establishing themselves as a transformative force in our world, younger researchers started expressing similar worries.

Nick Bostrom is a professor at the University of Oxford, the director of the Future of Humanity Institute, and the director of the Governance of Artificial Intelligence Program. He researches risks to humanity, both in the abstract — asking questions like why we seem to be alone in the universe — and in concrete terms, analyzing the technological advances on the table and whether they endanger us. AI, he concluded, endangers us.

In 2014, he wrote a book explaining the risks AI poses and the necessity of getting it right the first time, concluding, “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”

Across the world, others have reached the same conclusion. Bostrom co-authored a paper on the ethics of artificial intelligence with Eliezer Yudkowsky, founder of and research fellow at the Berkeley-based Machine Intelligence Research Institute (MIRI), an organization that works on better formal characterizations of the AI safety problem.

Yudkowsky started his career in AI by worriedly poking holes in others’ proposals for how to make AI systems safe, and has spent most of it working to persuade his peers that AI systems will, by default, be unaligned with human values (not necessarily opposed to but indifferent to human morality) — and that it’ll be a challenging technical problem to prevent that outcome.

Increasingly, researchers realized that there’d be challenges that hadn’t been present with AI systems when they were simple. “‘Side effects’ are much more likely to occur in a complex environment, and an agent may need to be quite sophisticated to hack its reward function in a dangerous way. This may explain why these problems have received so little study in the past, while also suggesting their importance in the future,” concluded a 2016 research paper on problems in AI safety.

Bostrom’s book Superintelligence was compelling to many people, but there were skeptics. “No, experts don’t think superintelligent AI is a threat to humanity,” argued an op-ed by Oren Etzioni, a professor of computer science at the University of Washington and CEO of the Allen Institute for Artificial Intelligence. “Yes, we are worried about the existential risk of artificial intelligence,” replied a dueling op-ed by Stuart Russell, an AI pioneer and UC Berkeley professor, and Allan Dafoe, a senior research fellow at Oxford and director of the Governance of AI program there.

It’s tempting to conclude that there’s a pitched battle between AI-risk skeptics and AI-risk believers. In reality, they might not disagree as profoundly as you would think.

Facebook’s chief AI scientist Yann LeCun, for example, is a prominent voice on the skeptical side. But while he argues we shouldn’t fear AI, he still believes we ought to have people working on, and thinking about, AI safety. “Even if the risk of an A.I. uprising is very unlikely and very far in the future, we still need to think about it, design precautionary measures, and establish guidelines,” he writes.

That’s not to say there’s an expert consensus here — far from it. There is substantial disagreement about which approaches seem likeliest to bring us to general AI, which approaches seem likeliest to bring us to safe general AI, and how soon we need to worry about any of this.

Many experts are wary that others are overselling their field, and dooming it when the hype runs out. But that disagreement shouldn’t obscure a growing common ground; these are possibilities worth thinking about, investing in, and researching, so we have guidelines when the moment comes that they’re needed.

5) Why couldn’t we just shut off a computer if it got too powerful?

A smart AI could predict that we’d want to turn it off if it made us nervous. So it would try hard not to make us nervous, because doing so wouldn’t help it accomplish its goals. If asked what its intentions are, or what it’s working on, it would attempt to evaluate which responses are least likely to get it shut off, and answer with those. If it wasn’t competent enough to do that, it might pretend to be even dumber than it was — anticipating that researchers would give it more time, computing resources, and training data.

So we might not know when it’s the right moment to shut off a computer.

We also might do things that make it impossible to shut off the computer later, even if we realize eventually that it’s a good idea. For example, many AI systems could have access to the internet, which is a rich source of training data and which they’d need if they’re to make money for their creators (for example, on the stock market, where more than half of trading is done by fast-reacting AI algorithms).

But with internet access, an AI could email copies of itself somewhere where they’ll be downloaded and read, or hack vulnerable systems elsewhere. Shutting off any one computer wouldn’t help.

In that case, isn’t it a terrible idea to let any AI system — even one which doesn’t seem powerful enough to be dangerous — have access to the internet? Probably. But that doesn’t mean it won’t continue to happen.

So far, we’ve mostly talked about the technical challenges of AI. But from here forward, it’s necessary to veer more into the politics. Since AI systems enable incredible things, there will be lots of different actors working on such systems.

There will likely be startups, established tech companies like Google (Alphabet’s recently acquired startup DeepMind is frequently mentioned as an AI frontrunner), and nonprofits (the Elon Musk-founded OpenAI is another major player in the field).

There will be governments — Russia’s Vladimir Putin has expressed an interest in AI, and China has made big investments. Some of them will presumably be cautious and employ safety measures, including keeping their AI off the internet. But in a scenario like this one, we’re at the mercy of the least cautious actor, whoever they may be.

That’s part of what makes AI hard: Even if we know how to take appropriate precautions (and right now we don’t), we also need to figure out how to ensure that all would-be AI programmers are motivated to take those precautions and have the tools to implement them correctly.

6) What are we doing right now to avoid an AI apocalypse?

“It could be said that public policy on AGI [artificial general intelligence] does not exist,” concluded a paper this year reviewing the state of the field.

The truth is that technical work on promising approaches is getting done, but there’s shockingly little in the way of policy planning, international collaboration, or public-private partnerships. In fact, much of the work is being done by only a handful of organizations, and it has been estimated that around 50 people in the world work full time on technical AI safety.

Bostrom’s Future of Humanity Institute has published a research agenda for AI governance: the study of “devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.” It has published research on the risk of malicious uses of AI, on the context of China’s AI strategy, and on artificial intelligence and international security.

The longest-established organization working on technical AI safety is the Machine Intelligence Research Institute (MIRI), which prioritizes research into designing highly reliable agents — artificial intelligence programs whose behavior we can predict well enough to be confident they’re safe. (Disclosure: MIRI is a nonprofit and I donated to its work in 2017 and 2018.)

The Elon Musk-founded OpenAI is a very new organization, less than three years old. But researchers there are active contributors to both AI safety and AI capabilities research. A research agenda in 2016 spelled out “concrete open technical problems relating to accident prevention in machine learning systems,” and researchers have since advanced some approaches to safe AI systems.

Alphabet’s DeepMind, a leader in this field, has a safety team and a technical research agenda. “Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe,” it concludes, outlining an approach with an emphasis on specification (designing goals well), robustness (designing systems that perform within safe limits under volatile conditions), and assurance (monitoring systems and understanding what they’re doing).

There are also lots of people working on more present-day AI ethics problems: algorithmic bias, robustness of modern machine-learning algorithms to small changes, and transparency and interpretability of neural nets, to name just a few. Some of that research could potentially be valuable for preventing destructive scenarios.

But on the whole, the state of the field is a little bit as if almost all climate change researchers were focused on managing the droughts, wildfires, and famines we’re already facing today, with only a tiny skeleton team dedicated to forecasting the future and 50 or so researchers who work full time on coming up with a plan to turn things around.

Not every organization with a major AI department has a safety team at all, and some of them have safety teams focused only on algorithmic fairness and not on the risks from advanced systems. The US government doesn’t have a department for AI.

The field still has lots of open questions — many of which might make AI look much more scary, or much less so — which no one has dug into in depth.

7) Is this really likelier to kill us all than, say, climate change?

It sometimes seems like we’re facing dangers from all angles in the 21st century. Both climate change and future AI developments are likely to be transformative forces acting on our world.

Our predictions about climate change are more confident, both for better and for worse. We have a clearer understanding of the risks the planet will face, and we can estimate the costs to human civilization. They are projected to be enormous, risking potentially hundreds of millions of lives. The ones who will suffer most will be low-income people in developing countries; the wealthy will find it easier to adapt. We also have a clearer understanding of the policies we need to enact to address climate change than we do with AI.

There’s intense disagreement in the field on timelines for critical advances in AI. While AI safety experts agree on many features of the safety problem, they’re still making the case to research teams in their own field, and they disagree on some of the details. There’s substantial disagreement on how badly it could go, and on how likely it is to go badly. There are only a few people who work full time on AI forecasting. One of the things current researchers are trying to nail down is their models and the reasons for the remaining disagreements about what safe approaches will look like.

Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction. But many others primarily emphasize our uncertainty — and emphasize that when we’re working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now.

8) Is there a possibility that AI can be benevolent?

AI safety researchers emphasize that we shouldn’t assume AI systems will be benevolent by default. They’ll have the goals that their training environment set them up for, and no doubt this will fail to encapsulate the whole of human values.

When the AI gets smarter, might it figure out morality by itself? Again, researchers emphasize that it won’t. It’s not really a matter of “figuring out” — the AI will understand just fine that humans actually value love and fulfillment and happiness, and not just the number associated with Google on the New York Stock Exchange. But the AI’s values will be built around whatever goal system it was initially built around, which means it won’t suddenly become aligned with human values if it wasn’t designed that way to start with.

Of course, we can build AI systems that are aligned with human values, or at least that humans can safely work with. That is ultimately what almost every organization with an artificial general intelligence division is trying to do. A success with AI could give us access to decades or centuries of technological innovation all at once.

“If we’re successful, we believe this will be one of the most important and widely beneficial scientific advances ever made,” writes the introduction to Alphabet’s DeepMind. “From climate change to the need for radically improved healthcare, too many problems suffer from painfully slow progress, their complexity overwhelming our ability to find solutions. With AI as a multiplier for human ingenuity, those solutions will come into reach.”

So, yes, AI can share our values — and transform our world for the good. We just need to solve a very hard engineering problem first.

9) I just really want to know: how worried should we be?

To people who think the worrying is premature and the risks overblown, AI safety is competing with other priorities that sound, well, a bit less sci-fi — and it’s not clear why AI should take precedence. To people who think the risks described are real and substantial, it’s outrageous that we’re dedicating so few resources to working on them.

While machine-learning researchers are right to be wary of hype, it’s also hard to avoid the fact that they’re accomplishing some impressive, surprising things using very generalizable techniques, and that it doesn’t seem that all the low-hanging fruit has been picked.

At a major conference in early December, Google’s DeepMind cracked open a longstanding problem in biology: predicting how proteins fold. “Even though there’s a lot more work to do before we’re able to have a quantifiable impact on treating diseases, managing the environment, and more, we know the potential is enormous,” its announcement concludes.

AI looks increasingly like a technology that will change the world when it arrives. Researchers across many major AI organizations tell us it will be like launching a rocket: something we have to get right before we hit “go.” So it seems urgent to get to work learning rocketry. Whether or not humanity should be afraid, we should definitely be doing our homework.

Why Stephen Hawking’s Black Hole Puzzle Keeps Puzzling


The renowned British physicist, who died at 76, left behind a riddle that could eventually lead his successors to the theory of quantum gravity.

The physicist Stephen Hawking in 1979 in Princeton, New Jersey.

The renowned British physicist Stephen Hawking, who died today at 76, was something of a betting man, regularly entering into friendly wagers with his colleagues over key questions in theoretical physics. “I sensed when Stephen and I first met that he would enjoy being treated irreverently,” wrote John Preskill, a physicist at the California Institute of Technology, earlier today on Twitter. “So in the middle of a scientific discussion I could interject, ‘What makes you so sure of that, Mr. Know-It-All?’ knowing that Stephen would respond with his eyes twinkling: ‘Wanna bet?’”

And bet they did. In 1997, Hawking and Kip Thorne bet Preskill that information that falls into a black hole gets destroyed and can never be retrieved. Called the black hole information paradox, this prospect follows from Hawking’s landmark 1974 discovery about black holes — regions of inescapable gravity, where space-time curves steeply toward a central point known as the singularity. Hawking had shown that black holes are not truly black. Quantum uncertainty causes them to radiate a small amount of heat, dubbed “Hawking radiation.” They lose mass in the process and ultimately evaporate away. This evaporation leads to a paradox: Anything that falls into a black hole will seemingly be lost forever, violating “unitarity” — a central principle of quantum mechanics that says the present always preserves information about the past.
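“Unitarity” has a precise meaning worth spelling out (standard quantum mechanics, not anything specific to the bet): closed systems evolve by a unitary operator,

$$ |\psi(t)\rangle = U(t)\,|\psi(0)\rangle, \qquad U^{\dagger}U = \mathbb{1}, $$

so the evolution can always be run backward, $|\psi(0)\rangle = U^{\dagger}(t)\,|\psi(t)\rangle$, and no information about the initial state is ever lost. Hawking’s calculation, taken at face value, turns a pure initial state into featureless thermal radiation characterized only by a temperature, something no unitary evolution can do.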

Hawking and Thorne argued that the radiation emitted by a black hole would be too hopelessly scrambled to retrieve any useful information about what fell into it, even in principle. Preskill bet that information somehow escapes black holes, even though physicists would presumably need a complete theory of quantum gravity to understand the mechanism behind how this could happen.

Physicists thought they resolved the paradox in 2004 with the notion of black hole complementarity. According to this proposal, information that crosses the event horizon of a black hole both reflects back out and passes inside, never to escape. Because no single observer can ever be both inside and outside the black hole’s horizon, no one can witness both situations simultaneously, and no contradiction arises. The argument was sufficient to convince Hawking to concede the bet. During a July 2004 talk in Dublin, Ireland, he presented Preskill with the eighth edition of Total Baseball: The Ultimate Baseball Encyclopedia, “from which information can be retrieved at will.”

Thorne, however, refused to concede, and it seems he was right to do so. In 2012, a new twist on the paradox emerged. Nobody had explained precisely how information would get out of a black hole, and that lack of a specific mechanism inspired Joseph Polchinski and three colleagues to revisit the problem. Conventional wisdom had long held that once someone passed the event horizon, they would slowly be pulled apart by the extreme gravity as they fell toward the singularity. Polchinski and his co-authors argued that instead, in-falling observers would encounter a literal wall of fire at the event horizon, burning up before ever getting near the singularity.

At the heart of the firewall puzzle lies a conflict between three fundamental postulates. The first is the equivalence principle of Albert Einstein’s general theory of relativity: Because there’s no difference between acceleration due to gravity and the acceleration of a rocket, an astronaut named Alice shouldn’t feel anything amiss as she crosses a black hole horizon. The second is unitarity, which implies that information cannot be destroyed. Lastly, there’s locality, which holds that events happening at a particular point in space can only influence nearby points. This means that the laws of physics should work as expected far away from a black hole, even if they break down at some point within the black hole — either at the singularity or at the event horizon.

To resolve the paradox, one of the three postulates must be sacrificed, and nobody can agree on which one should get the axe. The simplest solution is to have the equivalence principle break down at the event horizon, thereby giving rise to a firewall. But several other possible solutions have been proposed in the ensuing years.

Video: David Kaplan explores one of the biggest mysteries in physics: the apparent contradiction between general relativity and quantum mechanics.

Filming by Petr Stepanek. Editing and motion graphics by MK12. Music by Steven Gutheinz.

For instance, a few years before the firewalls paper, Samir Mathur, a string theorist at Ohio State University, raised similar issues with his notion of black hole fuzzballs. Fuzzballs aren’t empty pits, like traditional black holes. They are packed full of strings (the kind from string theory) and have a surface like a star or planet. They also emit heat in the form of radiation. The spectrum of that radiation, Mathur found, exactly matches the prediction for Hawking radiation. His “fuzzball conjecture” resolves the paradox by declaring it to be an illusion. How can information be lost beyond the event horizon if there is no event horizon?

Hawking himself weighed in on the firewall debate along similar lines by way of a two-page, equation-free paper posted to the scientific preprint site arxiv.org in late January 2014 — a summation of informal remarks he’d made via Skype for a small conference the previous spring. He proposed a rethinking of the event horizon. Instead of a definite line in the sky from which nothing could escape, he suggested there could be an “apparent horizon.” Information is only temporarily confined behind that horizon. The information eventually escapes, but in such a scrambled form that it can never be interpreted. He likened the task to weather forecasting: “One can’t predict the weather more than a few days in advance.”

In 2013, Leonard Susskind and Juan Maldacena, theoretical physicists at Stanford University and the Institute for Advanced Study, respectively, made a radical attempt to preserve locality that they dubbed “ER = EPR.” According to this idea, maybe what we think are faraway points in space-time aren’t that far away after all. Perhaps entanglement creates invisible microscopic wormholes connecting seemingly distant points. Shaped a bit like an octopus, such a wormhole would link the interior of the black hole directly to the Hawking radiation, so the particles still inside the hole would be directly connected to particles that escaped long ago, avoiding the need for information to pass through the event horizon.

Physicists have yet to reach a consensus on any one of these proposed solutions. It’s a tribute to Hawking’s unique genius that they continue to argue about the black hole information paradox so many decades after his work first suggested it.

Black holes and soft hair: why Stephen Hawking’s final work is important


Malcolm Perry, who worked with Hawking on his final paper, explains how it improves our understanding of one of the universe’s enduring mysteries

An artist’s impression of a star being torn apart by a black hole.
Photograph: Nasa’s Goddard Space Flight Center

The information paradox is perhaps the most puzzling problem in fundamental theoretical physics today. It was discovered by Stephen Hawking 43 years ago and, until recently, it had defied every attempt at resolution.

Starting in 2015, Stephen, Andrew Strominger and I began to wonder whether we could find a way out of this difficulty by questioning the basic assumptions that underlie it. We published our first paper on the subject in 2016 and have been working hard on this problem ever since.

The most recent work, and perhaps the last paper that Stephen was involved in, has just come out. While we have not solved the information paradox, we hope that we have paved the way, and we are continuing our intensive work in this area.

Physics is really about being able to predict the future given how things are now. For example, if you throw a ball, once you know its initial position and velocity, then you can figure out where it will be in the future. That kind of reasoning is fine for what we call classical physics but for small things, like atoms and electrons, the rules need some modifications, as described by quantum mechanics. In quantum mechanics, instead of describing precise outcomes, one finds that one can only calculate the probabilities for various things to happen. In the case of a ball being thrown, one would not know its precise trajectory, but only the probability that it would be in some particular place given its initial conditions.
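As a toy illustration of that “position and velocity now determine the future” statement, here is a short Python sketch of the classical case. The function name and numbers are invented purely for the example; air resistance is ignored.

```python
# Classical prediction: given a thrown ball's position and velocity now,
# Newtonian mechanics fixes its entire future trajectory (no air resistance).
G = 9.81  # gravitational acceleration, m/s^2

def ball_position(x0, y0, vx0, vy0, t):
    """Position of the ball t seconds after launch."""
    x = x0 + vx0 * t
    y = y0 + vy0 * t - 0.5 * G * t ** 2
    return x, y

# Example: launched from the origin at 10 m/s horizontally and 5 m/s upward.
print(ball_position(0.0, 0.0, 10.0, 5.0, t=1.0))  # -> (10.0, 0.095)
```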

What Hawking discovered was that in black hole physics, there seemed to be even greater uncertainty than in quantum mechanics. However, this kind of uncertainty seemed to be completely unacceptable in that it resulted in many of the laws of physics appearing to break down. It would deprive us of the ability to predict anything about the future of a black hole.

That might not have mattered – except that black holes are real physical objects. There are huge black holes at the centres of many galaxies. We know this because observations of the centre of our galaxy show that there is a compact object with a mass of a few million times that of our sun there; such a huge concentration of mass could only be a black hole. Quasars, extremely luminous objects at the centres of very distant galaxies, are powered by matter falling onto black holes. The observatory Ligo has recently discovered ripples in spacetime, gravitational waves, produced by the collision of black holes.

The root of the problem is that it was once thought that black holes were completely described by their mass and their spin. If you threw something into a black hole, once it was inside you would be unable to tell what it was that was thrown in.

These ideas were encapsulated in the phrase “a black hole has no hair”. We can often tell people apart by looking at their hair, but black holes seemed to be completely bald. Back in 1974, Stephen discovered that black holes, rather than being perfect absorbers, behave more like what we call “black bodies”. A black body is characterised by a temperature, and all bodies with a temperature produce thermal radiation.

If you go to a doctor, it is quite likely your temperature will be measured by having a device pointed at you. This is an infrared sensor and it measures your temperature by detecting the thermal radiation you produce. A piece of metal heated up in a fire will glow because it produces thermal radiation.

Black holes are no different. They have a temperature and produce thermal radiation. The formula for this temperature, universally known as the Hawking temperature, is inscribed on the memorial to Stephen’s life in Westminster Abbey. Any object that has a temperature also has an entropy. The entropy is a measure of how many different ways an object could be made from its microscopic ingredients and still look the same. So, for a particular piece of red hot metal, it would be the number of ways the atoms that make it up could be arranged so as to look like the lump of metal you were observing. Stephen’s formula for the temperature of a black hole allowed him to find the entropy of a black hole.
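For reference, the temperature formula Perry describes, the one carved on the Westminster Abbey memorial, and the corresponding black-hole entropy take the following standard forms, where M is the hole’s mass and A the area of its event horizon:

```latex
% Hawking temperature of a black hole of mass M:
T_{\rm H} \;=\; \frac{\hbar c^{3}}{8\pi G M k_{\rm B}}

% Bekenstein-Hawking entropy, proportional to the horizon area A:
S_{\rm BH} \;=\; \frac{k_{\rm B} c^{3} A}{4 G \hbar}
```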

The problem then was: how did this entropy arise? Since all black holes appear to be the same, the origin of the entropy was at the centre of the information paradox.

What we have done recently is to discover a gap in the mathematics that led to the idea that black holes are totally bald. In 2016, Stephen, Andy and I found that black holes have an infinite collection of what we call “soft hair”. This discovery allows us to question the idea that black holes lead to a breakdown in the laws of physics.

Stephen kept working with us up to the end of his life, and we have now published a paper that describes our current thoughts on the matter. In this paper, we describe a way of calculating the entropy of black holes. The entropy is basically a quantitative measure of what one knows about a black hole apart from its mass or spin.

While this is not a resolution of the information paradox, we believe it provides some considerable insight into it. Further work is needed but we feel greatly encouraged to continue our research in this area. The information paradox is intimately tied up with our quest to find a theory of gravity that is compatible with quantum mechanics.

Einstein’s general theory of relativity is extremely successful at describing spacetime and gravitation on large scales, but to see how the world works on small scales requires quantum theory. There are spectacularly successful theories of the non-gravitational forces of nature as explained by the “standard model” of particle physics. Such theories have been exhaustively tested and the recent discovery of the Higgs particle at Cern by the Large Hadron Collider is a marvellous confirmation of these ideas.

Yet the incorporation of gravitation into this picture is still something that eludes us. As well as his work on black holes, Stephen was pursuing ideas that he hoped would lead to a unification of gravitation with the other forces of nature in a way that would unite Einstein’s ideas with those of quantum theory. Our work on black holes does indeed shed light on this other puzzle. Sadly, Stephen is no longer with us to share our excitement about the possibility of resolving these issues, which have now been around for half a century.

‘Mind over matter’: Stephen Hawking – obituary by Roger Penrose


Theoretical physicist who made revolutionary contributions to our understanding of the nature of the universe.

 

Stephen Hawking at his office at the department of applied mathematics and theoretical physics at Cambridge University in 2005.

The image of Stephen Hawking – who has died aged 76 – in his motorised wheelchair, with head contorted slightly to one side and hands crossed over to work the controls, caught the public imagination, as a true symbol of the triumph of mind over matter. As with the Delphic oracle of ancient Greece, physical impairment seemed compensated by almost supernatural gifts, which allowed his mind to roam the universe freely, upon occasion enigmatically revealing some of its secrets hidden from ordinary mortal view.

Of course, such a romanticised image can represent but a partial truth. Those who knew Hawking would clearly appreciate the dominating presence of a real human being, with an enormous zest for life, great humour, and tremendous determination, yet with normal human weaknesses, as well as his more obvious strengths. It seems clear that he took great delight in his commonly perceived role as “the No 1 celebrity scientist”; huge audiences would attend his public lectures, perhaps not always just for scientific edification.

The scientific community might well form a more sober assessment. He was extremely highly regarded, in view of his many greatly impressive, sometimes revolutionary, contributions to the understanding of the physics and the geometry of the universe.

Hawking had been diagnosed shortly after his 21st birthday as suffering from an unspecified incurable disease, which was then identified as the fatal degenerative motor neurone disease amyotrophic lateral sclerosis, or ALS. Soon afterwards, rather than succumbing to depression, as others might have done, he began to set his sights on some of the most fundamental questions concerning the physical nature of the universe. In due course, he would achieve extraordinary successes against the severest physical disabilities. Defying established medical opinion, he managed to live another 55 years.

His background was academic, though not directly in mathematics or physics. His father, Frank, was an expert in tropical diseases and his mother, Isobel (nee Walker), was a free-thinking radical who had a great influence on him. He was born in Oxford and moved to St Albans, Hertfordshire, at eight. Educated at St Albans school, he won a scholarship to study physics at University College, Oxford. He was recognised as unusually capable by his tutors, but did not take his work altogether seriously. Although he obtained a first-class degree in 1962, it was not a particularly outstanding one.

He decided to continue his career in physics at Trinity Hall, Cambridge, proposing to study under the distinguished cosmologist Fred Hoyle. He was disappointed to find that Hoyle was unable to take him, the person available in that area being Dennis Sciama, unknown to Hawking at the time. In fact, this proved fortuitous, for Sciama was becoming an outstandingly stimulating figure in British cosmology, and would supervise several students who were to make impressive names for themselves in later years (including the future astronomer royal Lord Rees of Ludlow).

Sciama seemed to know everything that was going on in physics at the time, especially in cosmology, and he conveyed an infectious excitement to all who encountered him. He was also very effective in bringing together people who might have things of significance to communicate with one another.

When Hawking was in his second year of research at Cambridge, I (at Birkbeck College in London) had established a certain mathematical theorem of relevance. This showed, on the basis of a few plausible assumptions (by the use of global/topological techniques largely unfamiliar to physicists at the time) that a collapsing over-massive star would result in a singularity in space-time – a place where it would be expected that densities and space-time curvatures would become infinite – giving us the picture of what we now refer to as a “black hole”. Such a space-time singularity would lie deep within a “horizon”, through which no signal or material body can escape. (This picture had been put forward by J Robert Oppenheimer and Hartland Snyder in 1939, but only in the special circumstance where exact spherical symmetry was assumed. The purpose of this new theorem was to obviate such unrealistic symmetry assumptions.) At this central singularity, Einstein’s classical theory of general relativity would have reached its limits.

Meanwhile, Hawking had also been thinking about this kind of problem with George Ellis, who was working on a PhD at St John’s College, Cambridge. The two men had been working on a more limited type of “singularity theorem” that required an unreasonably restrictive assumption. Sciama made a point of bringing Hawking and me together, and it did not take Hawking long to find a way to use my theorem in an unexpected way, so that it could be applied (in a time-reversed form) in a cosmological setting, to show that the space-time singularity referred to as the “big bang” was also a feature not just of the standard highly symmetrical cosmological models, but also of any qualitatively similar but asymmetrical model.

Some of the assumptions in my original theorem seem less natural in the cosmological setting than they do for collapse to a black hole. In order to generalise the mathematical result so as to remove such assumptions, Hawking embarked on a study of new mathematical techniques that appeared relevant to the problem.

A powerful body of mathematical work known as Morse theory had been part of the machinery of mathematicians active in the global (topological) study of Riemannian spaces. However, the spaces that are used in Einstein’s theory are really pseudo-Riemannian and the relevant Morse theory differs in subtle but important ways. Hawking developed the necessary theory for himself (aided, in certain respects, by Charles Misner, Robert Geroch and Brandon Carter) and was able to use it to produce new theorems of a more powerful nature, in which the assumptions of my theorem could be considerably weakened, showing that a big-bang-type singularity was a necessary implication of Einstein’s general relativity in broad circumstances.

A few years later (in a paper published by the Royal Society in 1970, by which time Hawking had become a fellow “for distinction in science” of Gonville and Caius College, Cambridge), he and I joined forces to publish an even more powerful theorem which subsumed almost all the work in this area that had gone before.

In 1967, Werner Israel published a remarkable paper that had the implication that non-rotating black holes, when they had finally settled down to become stationary, would necessarily become completely spherically symmetrical. Subsequent results by Carter, David Robinson and others generalised this to include rotating black holes, the implication being that the final space-time geometry must necessarily accord with an explicit family of solutions of Einstein’s equations found by Roy Kerr in 1963. A key ingredient to the full argument was that if there is any rotation present, then there must be complete axial symmetry. This ingredient was basically supplied by Hawking in 1972.

The very remarkable conclusion of all this is that the black holes that we expect to find in nature have to conform to this Kerr geometry. As the great theoretical astrophysicist Subrahmanyan Chandrasekhar subsequently commented, black holes are the most perfect macroscopic objects in the universe, being constructed just out of space and time; moreover, they are the simplest as well, since they can be exactly described by an explicitly known geometry (that of Kerr).

Following his work in this area, Hawking established a number of important results about black holes, such as an argument for its event horizon (its bounding surface) having to have the topology of a sphere. In collaboration with Carter and James Bardeen, in work published in 1973, he established some remarkable analogies between the behaviour of black holes and the basic laws of thermodynamics, where the horizon’s surface area and its surface gravity were shown to be analogous, respectively, to the thermodynamic quantities of entropy and temperature. It would be fair to say that in his highly active period leading up to this work, Hawking’s research in classical general relativity was the best anywhere in the world at that time.

Hawking, Bardeen and Carter took their “thermodynamic” behaviour of black holes to be little more than just an analogy, with no literal physical content. A year or so earlier, Jacob Bekenstein had shown that the demands of physical consistency imply – in the context of quantum mechanics – that a black hole must indeed have an actual physical entropy (“entropy” being a physicist’s measure of “disorder”) that is proportional to its horizon’s surface area, but he was unable to establish the proportionality factor precisely. Yet it had seemed, on the other hand, that the physical temperature of a black hole must be exactly zero, inconsistently with this analogy, since no form of energy could escape from it, which is why Hawking and his colleagues were not prepared to take their analogy completely seriously.

Hawking had then turned his attention to quantum effects in relation to black holes, and he embarked on a calculation to determine whether tiny rotating black holes that might perhaps be created in the big bang would radiate away their rotational energy. He was startled to find that irrespective of any rotation they would radiate away their energy – which, by Einstein’s E=mc², means their mass. Accordingly, any black hole actually has a non-zero temperature, agreeing precisely with the Bardeen-Carter-Hawking analogy. Moreover, Hawking was able to supply the precise value “one quarter” for the entropy proportionality constant that Bekenstein had been unable to determine.

This radiation coming from black holes that Hawking predicted is now, very appropriately, referred to as Hawking radiation. For any black hole that is expected to arise in normal astrophysical processes, however, the Hawking radiation would be exceedingly tiny, and certainly unobservable directly by any techniques known today. But he argued that very tiny black holes could have been produced in the big bang itself, and the Hawking radiation from such holes would build up into a final explosion that might be observed. There appears to be no evidence for such explosions, showing that the big bang was not so accommodating as Hawking wished, and this was a great disappointment to him.
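To put rough numbers on “exceedingly tiny” versus a primordial hole ending its life today, here is a back-of-the-envelope Python sketch using the standard textbook expressions for the Hawking temperature and the evaporation lifetime. The figures are order-of-magnitude estimates, not precise predictions, and the “primordial” mass is simply chosen so the lifetime is comparable to the age of the universe.

```python
import math

# Physical constants (SI units)
HBAR = 1.055e-34   # reduced Planck constant, J s
C    = 2.998e8     # speed of light, m/s
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
KB   = 1.381e-23   # Boltzmann constant, J/K

def hawking_temperature(mass_kg):
    """Standard Hawking temperature T = hbar c^3 / (8 pi G M k_B)."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * KB)

def evaporation_time(mass_kg):
    """Textbook evaporation lifetime t ~ 5120 pi G^2 M^3 / (hbar c^4), in seconds."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

solar_mass = 1.989e30  # kg
print(hawking_temperature(solar_mass))          # ~6e-8 K: far colder than the cosmic microwave background
print(evaporation_time(solar_mass) / 3.15e7)    # ~2e67 years: utterly unobservable today

primordial = 1.7e11  # kg: roughly the mass whose lifetime matches the age of the universe
print(evaporation_time(primordial) / 3.15e7)    # ~1.3e10 years: such a hole would be finishing its evaporation about now
```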

These achievements were certainly important on the theoretical side. They established the theory of black-hole thermodynamics: by combining the procedures of quantum (field) theory with those of general relativity, Hawking established that it is necessary also to bring in a third subject, thermodynamics. They are generally regarded as Hawking’s greatest contributions. That they have deep implications for future theories of fundamental physics is undeniable, but the detailed nature of these implications is still a matter of much heated debate.

Hawking himself was able to conclude from all this (though not with universal acceptance by particle physicists) that those fundamental constituents of ordinary matter – the protons – must ultimately disintegrate, although with a decay rate that is beyond present-day techniques for observing it. He also provided reasons for suspecting that the very rules of quantum mechanics might need modification, a viewpoint that he seemed originally to favour. But later (unfortunately, in my own opinion) he came to a different view, and at the Dublin international conference on gravity in July 2004, he publicly announced a change of mind (thereby conceding a bet with the Caltech physicist John Preskill) concerning his originally predicted “information loss” inside black holes.

Following his black-hole work, Hawking turned his attentions to the problem of quantum gravity, developing ingenious ideas for resolving some of the basic issues. Quantum gravity, which involves correctly imposing the quantum procedures of particle physics on to the very structure of space-time, is generally regarded as the most fundamental unsolved foundational issue in physics. One of its stated aims is to find a physical theory that is powerful enough to deal with the space-time singularities of classical general relativity in black holes and the big bang.

Hawking’s work, up to this point, although it had involved the procedures of quantum mechanics in the curved space-time setting of Einstein’s general theory of relativity, did not provide a quantum gravity theory. That would require the “quantisation” procedures to be applied to Einstein’s curved space-time itself, not just to physical fields within curved space-time.

With James Hartle, Hawking developed a quantum procedure for handling the big-bang singularity. This is referred to as the “no-boundary” idea, whereby the singularity is replaced by a smooth “cap”, this being likened to what happens at the north pole of the Earth, where the concept of longitude loses meaning (becomes singular) while the north pole itself has a perfectly good geometry.

To make sense of this idea, Hawking needed to invoke his notion of “imaginary time” (or “Euclideanisation”), which has the effect of converting the “pseudo-Riemannian” geometry of Einstein’s space-time into a more standard Riemannian one. Despite the ingenuity of many of these ideas, grave difficulties remain (one of these being how similar procedures could be applied to the singularities inside black holes, which is fundamentally problematic).
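Schematically, the “imaginary time” substitution Penrose mentions is the standard Wick rotation shown below, written here for flat space-time only to make the word “Euclideanisation” concrete; the actual no-boundary construction applies the same idea to curved geometries.

```latex
% Substituting imaginary time t = -i\tau turns the Lorentzian (pseudo-Riemannian)
% line element into a positive-definite (Euclidean / Riemannian) one.
ds^{2} = -c^{2}dt^{2} + dx^{2} + dy^{2} + dz^{2}
\quad\xrightarrow{\;t\,=\,-i\tau\;}\quad
ds^{2} = c^{2}d\tau^{2} + dx^{2} + dy^{2} + dz^{2}
```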

There are many other approaches to quantum gravity being pursued worldwide, and Hawking’s procedures, though greatly respected and still investigated, are not the most popularly followed, although all others have their share of fundamental difficulties also.

To the end of his life, Hawking continued with his research into the quantum-gravity problem, and the related issues of cosmology. But concurrently with his strictly research interests, he became increasingly involved with the popularisation of science, and of his own ideas in particular. This began with the writing of his astoundingly successful book A Brief History of Time (1988), which was translated into some 40 languages and sold over 25m copies worldwide.

Undoubtedly, the brilliant title was a contributing factor to the book’s phenomenal success. Also, the subject matter is something that grips the public imagination. And there is a directness and clarity of style, which Hawking must have developed as a matter of necessity when trying to cope with the limitations imposed by his physical disabilities. Before needing to rely on his computerised speech, he could talk only with great difficulty and expenditure of effort, so he had to do what he could with short sentences that were directly to the point. In addition, it is hard to deny that his physical condition must itself have caught the public’s imagination.

Although the dissemination of science among a broader public was certainly one of Hawking’s aims in writing his book, he also had the serious purpose of making money. His financial needs were considerable, as his entourage of family, nurses, healthcare helpers and increasingly expensive equipment demanded. Some, but not all, of this was covered by grants.

To invite Hawking to a conference always involved the organisers in serious calculations. The travel and accommodation expenses would be enormous, not least because of the sheer number of people who would need to accompany him. But a popular lecture by him would always be a sell-out, and special arrangements would be needed to find a lecture hall that was big enough. An additional factor would be ensuring that all entrances, stairways, lifts and so on were adequate for disabled people in general, and for his wheelchair in particular.

He clearly enjoyed his fame, taking many opportunities to travel and to have unusual experiences (such as going down a mine shaft, visiting the south pole and undergoing the zero-gravity of free fall), and to meet other distinguished people.

The presentational polish of his public lectures increased with the years. Originally, the visual material would be line drawings on transparencies, presented by a student. But in later years impressive computer-generated visuals were used. He controlled the verbal material, sentence by sentence, as it would be delivered by his computer-generated American-accented voice. High-quality pictures and computer-generated graphics also featured in his later popular books The Illustrated Brief History of Time (1996) and The Universe in a Nutshell (2001). With his daughter Lucy he wrote the expository children’s science book George’s Secret Key to the Universe (2007), and he served as an editor, co-author and commentator for many other works of popular science.

He received many high accolades and honours. In particular, he was elected a fellow of the Royal Society at the remarkably early age of 32 and received its highest honour, the Copley medal, in 2006. In 1979, he became the 17th holder of the Lucasian chair of natural philosophy in Cambridge, some 310 years after Sir Isaac Newton became its second holder. He became a Companion of Honour in 1989. He made a guest appearance on the television programme Star Trek: The Next Generation, appeared in cartoon form on The Simpsons and was portrayed in the movie The Theory of Everything (2014).

It is clear that he owed a great deal to his first wife, Jane Wilde, whom he married in 1965, and with whom he had three children, Robert, Lucy and Timothy. Jane was exceptionally supportive of him in many ways. One of the most important of these may well have been in allowing him to do things for himself to an unusual extent.

He was an extraordinarily determined person. He would insist that he should do things for himself. This, in turn, perhaps kept his muscles active in a way that delayed their atrophy, thereby slowing the progress of the disease. Nevertheless, his condition continued to deteriorate, until he had almost no movement left, and his speech could barely be made out at all except by a very few who knew him well.

He contracted pneumonia while in Switzerland in 1985, and a tracheotomy was necessary to save his life. Strangely, after this brush with death, the progress of his degenerative disease seemed to slow to a virtual halt. His tracheotomy prevented any form of speech, however, so that acquiring a computerised speech synthesiser came as a necessity at that time.

In the aftermath of his encounter with pneumonia, the Hawkings’ home was almost taken over by nurses and medical attendants, and he and Jane drifted apart. They were divorced in 1995. In the same year, Hawking married Elaine Mason, who had been one of his nurses. Her support took a different form from Jane’s. In his far weaker physical state, the love, care and attention that she provided sustained him in all his activities. Yet this relationship also came to an end, and he and Elaine were divorced in 2007.

Despite his terrible physical circumstance, he almost always remained positive about life. He enjoyed his work, the company of other scientists, the arts, the fruits of his fame, his travels. He took great pleasure in children, sometimes entertaining them by swivelling around in his motorised wheelchair. Social issues concerned him. He promoted scientific understanding. He could be generous and was very often witty. On occasion he could display something of the arrogance that is not uncommon among physicists working at the cutting edge, and he had an autocratic streak. Yet he could also show a true humility that is the mark of greatness.

Hawking had many students, some of whom later made significant names for themselves. Yet being a student of his was not easy. He had been known to run his wheelchair over the foot of a student who caused him irritation. His pronouncements carried great authority, but his physical difficulties often caused them to be enigmatic in their brevity. An able colleague might be able to disentangle the intent behind them, but it would be a different matter for an inexperienced student.

To such a student, a meeting with Hawking could be a daunting experience. Hawking might ask the student to pursue some obscure route, the reason for which could seem deeply mysterious. Clarification was not available, and the student would be presented with what seemed indeed to be like the revelation of an oracle – something whose truth was not to be questioned, but which if correctly interpreted and developed would surely lead onwards to a profound truth. Perhaps we are all left with this impression now.

Hawking is survived by his children.

Stephen William Hawking, physicist, born 8 January 1942; died 14 March 2018, aged 76.

Stephen Hawking’s 5 Predictions About the Future


Visionary physicist Stephen Hawking died early Wednesday at the age of 76. An intellectual leader in the study of black holes, quantum mechanics, and physical cosmology, Hawking also found a degree of beloved celebrity that eludes most scientists. The best-selling author was a mainstay in the public eye, using his computer-based communication system to explain the wonders of the universe.

In turn, his numerous appearances on television, radio, and the stage gave us an archive of Hawking’s advice for the future. Not one to shy away from the apocalyptic, Hawking was passionate about protecting humanity, which he predicted would face an onslaught of challenges in the years to come.

Here’s a sampling of his scientific soothsaying.

Stephen Hawking died Wednesday at the age of 76.

Hawking Predicted A.I. May Be “The Worst Thing” for Humans

In November, Hawking warned at a technology conference in Lisbon, Portugal, that artificial intelligence could be “the worst thing ever to happen to humanity.” Because there is no limit to what an A.I. can learn, Hawking reasoned that it could eventually catch up with the limits of the human brain and then surpass us.

“Success in creating effective A.I. could be the biggest event in the history of our civilization or the worst,” Hawking said at Web Summit last year. “We cannot know if we will be infinitely helped by A.I. or ignored by it and sidelined or conceivably destroyed by it.”

Hawking also told Wired in November that he feared A.I. would “replace humans altogether,” a concern he had in common with Elon Musk. Accordingly, in February 2017 the two men endorsed a list of 23 principles they felt should steer A.I. development.

Hawking Predicted Meeting Aliens Will Be Bad News

It was Hawking’s belief that when humans inevitably meet aliens, we should run. That dread came less from an idea that aliens would be inherently bad, and more from his observations of humans. Much as Christopher Columbus’s arrival brought chaos to the Americas, colonizing aliens would bring turmoil to our proverbial shores.

“We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet,” Hawking told the Times of London in 2010. “I imagine they might exist in massive ships, having used up all of the resources from their home planet. Such advanced aliens should perhaps become nomads, looking to conquer and colonize whatever planets they can reach.”

But He Also Predicted We Probably Won’t Encounter Aliens Soon

Despite his concerns about a hostile alien civilization, Hawking never said this alien invasion would happen anytime soon. In April 2016, he explained at a conference for the space exploration project Breakthrough Starshot that the next 20 years, at least, would likely be alien-free.

“The probability [of finding alien life] is low — probably,” Hawking told the crowd. “But the discoveries from the Kepler mission suggest that there are billions of habitable planets in our galaxy alone. There are at least 100 billion galaxies in the visible universe, so it seems likely that there are others out there.”

Stephen Hawking: not a fan of aliens.

Hawking Predicted Our Time on Earth Would End

During his work with Breakthrough Starshot, Hawking asserted that within the next thousand or ten thousand years, humans living in interstellar colonies would be an absolute certainty. This would be, in Hawking’s opinion, for the best. Earth, he predicted, was in danger from astronomical events such as asteroid impacts and supernovas. To survive as a species, he declared in April 2016, “we must ultimately spread to the stars.”

This wasn’t a one-time prediction from Hawking. At the Starmus Festival in June 2017, he declared that humans needed to prepare for an exodus off this planet sometime within the next 200 to 500 years because of our own damage to Earth.

“We are running out of space, and the only place we can go to are other worlds,” Hawking told a crowd in Trondheim, Norway. “It is time to explore other solar systems. Spreading out may be the only thing that saves us from ourselves.”

Hawking Predicted Climate Change Could Ravage Earth

Hawking joined many scientists in asserting that climate change could spell the end for our planet, but it’s on this topic that he struck a (relatively) more hopeful tone. Sure, climate change could kill us all, but that doesn’t necessarily mean it will happen.

“We are close to the tipping point where global warming becomes irreversible,” Hawking told BBC News in July. “Climate change is one of the great dangers we face, and it’s one we can prevent if we act now.”

To move away from this tipping point, Hawking argued, world leaders like President Donald Trump (of whom he was no fan) would need to stick to the rules laid out by the Paris Agreement. According to Hawking, we aren’t at doomsday yet — and it’s up to our actions and ingenuity to keep it that way.