Interstellar was right. Falling into a black hole is not the end, says Stephen Hawking.




An artist’s impression of a supermassive black hole at the centre of a distant quasar

Interstellar was right. Falling into a black hole is not the end, Professor Stephen Hawking has claimed.

Although physicists had assumed that all matter must be destroyed by the huge gravitational forces of a black hole, Hawking told delegates in Sweden that it could escape and even pop into another dimension.

The theory solves the ‘information paradox’ which has puzzled scientists for decades: while quantum mechanics says that information can never be destroyed, general relativity implies that in a black hole it must be.

However under Hawking’s new theory, anything that is sucked into a black hole is effectively trapped at the event horizon – the sphere surrounding the hole from which it was thought that nothing can escape.

And he claims that anything which fell in could re-emerge back into our universe, or a parallel one, through Hawking radiation – particles which manage to escape from the black hole because of quantum fluctuations.

“If you feel you are in a black hole, don’t give up, there’s a way out,” Hawking told an audience at the KTH Royal Institute of Technology in Stockholm.

In the film Interstellar, Cooper, played by Matthew McConaughey, plunges into the black hole Gargantua. As Cooper’s ship breaks apart under the gravitational forces, he ejects and ends up in a tesseract – a four-dimensional cube. He eventually makes it out of the black hole.


The black hole Gargantua from the film Interstellar

Black holes form when stars collapse under their own gravity, producing forces so extreme that even light can’t escape.

But Hawking claims that information never makes it inside the black hole in the first place and instead is ‘translated’ into a kind of hologram which sits in the event horizon.

“I propose that the information is stored not in the interior of the black hole as one might expect, but on its boundary, the event horizon,” said Prof Hawking.

“The idea is the supertranslations are a hologram of the ingoing particles,” he said. “Thus they contain all the information that would otherwise be lost.”

Hawking also believes that radiation leaving the black hole can pick up some of the information stored at the event horizon and carry it back out. However it is unlikely to be in the same state in which it entered.

“The information about ingoing particles is returned, but in a chaotic and useless form,” he said. “This resolves the information paradox. For all practical purposes, the information is lost.

“The message of this lecture is that black holes ain’t as black as they are painted. They are not the eternal prisons they were once thought. Things can get out of a black hole both on the outside and possibly come out in another universe.”

Hawking and colleagues are expected to publish a paper on the work next month.

“He is saying that the information is there twice already from the very beginning, so it’s never destroyed in the black hole to begin with,” Sabine Hossenfelder of the Nordic Institute for Theoretical Physics in Stockholm told New Scientist. “At least that’s what I understood.”

NASA’s ‘impossible’ EM Drive works: German researcher confirms and it can take us to the moon in just 4 HOURS


Over the past year, there’s been a lot of excitement about the electromagnetic propulsion drive, also known as the EM Drive – a seemingly impossible engine that has defied expectations by continuing to stand up to experimental scrutiny. The EM Drive is so thrilling because it promises amounts of propulsion that could hypothetically blast us to Mars in only 70 days, without the need for heavy and costly rocket fuel. Instead, it’s propelled forward by microwaves bouncing back and forth inside a sealed chamber, and this is what makes the EM Drive so powerful – and, at the same time, so controversial.

As effective as this kind of propulsion may sound, it challenges one of the most fundamental concepts of physics – the conservation of momentum, which states that for anything to be propelled forward, some kind of propellant must be pushed out in the opposite direction. For that reason, the drive was generally laughed at and overlooked when it was designed by British engineer Roger Shawyer in the early 2000s. But a few years later, a group of Chinese researchers decided to construct their own version, and to everyone’s amazement, it really worked. Then an American inventor did the same, and convinced NASA’s Eagleworks Laboratories, supervised by Harold ‘Sonny’ White, to give it a try. They admitted that it actually works. Now Martin Tajmar, a professor and chair for Space Systems at Dresden University of Technology in Germany, has built and tested his own EM Drive, and has once again found that it produces thrust – although for reasons he can’t yet explain.

Tajmar presented his results at the 2015 American Institute of Aeronautics and Astronautics’ Propulsion and Energy Forum and Exposition in Florida on 27 July, and you can read his entire paper here. He has a long history of experimentally testing (and debunking) supposedly revolutionary propulsion systems, so his results are a big deal for those looking for outside confirmation of the EM Drive.

Most importantly, his system produced a similar amount of thrust to that originally predicted by Shawyer – more than a few thousand times greater than that of a typical photon rocket.
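For context, an ideal photon rocket’s thrust is simply its beam power divided by the speed of light, which is why photon propulsion is so feeble. A minimal sketch (the 1 kW figure below is an illustrative assumption, not a number from Tajmar’s paper):

```python
# Ideal photon rocket: thrust F = P / c, where P is the emitted beam
# power in watts and c is the speed of light.
c = 299_792_458.0  # speed of light, m/s

def photon_rocket_thrust(power_watts: float) -> float:
    """Thrust in newtons produced by radiating `power_watts` of light."""
    return power_watts / c

# Even a 1 kW beam yields only a few micronewtons of thrust.
print(photon_rocket_thrust(1000.0))  # ~3.34e-06 N
```

This is why even a few-thousand-fold improvement over a photon rocket, if real, would be remarkable.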

So where does all of this leave us with the EM Drive? While it’s fun to speculate about just how revolutionary it could be for humanity, what we really need now are results published in a peer-reviewed journal – which is something that Shawyer claims he is just a few months away from doing, as David Hambling reports for Wired.

So it might turn out that we need to modify some of our laws of physics in order to explain how the drive actually works. But if that opens up the possibility of human travel throughout the entire Solar System – and, more significantly, beyond – then it’s a sacrifice we’re certainly willing to make.

This new equation might finally unite the two biggest theories in physics, physicist claims.


Linking general relativity and quantum mechanics with wormholes.

One of the most stubborn problems in physics today is the fact that our two best theories to explain the Universe – general relativity and quantum mechanics – function perfectly well on their own, but as soon as you try to combine them, the maths just doesn’t work out.

But a Stanford theoretical physicist has just come up with a new equation that suggests the key to finally connecting the two could be found in bizarre spacetime tunnels called wormholes.

The equation is deceptively simple: ER = EPR.

It’s not made up of numerical values, but instead represents the names of some key players in theoretical physics.

On the left side of the equation, the ER stands for Einstein and Nathan Rosen, and refers to a 1935 paper they wrote together describing wormholes, known technically as Einstein-Rosen bridges.

On the right side of the equation, EPR stands for Einstein, Rosen and Boris Podolsky, who co-wrote another paper that year describing quantum entanglement.

Back in 2013, physicist Leonard Susskind from Stanford University and Juan Maldacena from the Institute for Advanced Study at Princeton suggested that the two papers could be describing pretty much the same thing – something that no one else in the field had previously considered, including Einstein himself.

Now Susskind is back to discuss the implications if he’s in fact right.

But first, let’s look at the individual parts of this equation.

First implied by Einstein’s theory of general relativity, wormholes are like tunnels between two places in the Universe.

In theory, if you fell in one side of a wormhole, you’d appear on the other side almost instantaneously, even if it happened to be on the exact opposite side of the Universe.

But wormholes aren’t just portals to another place in the Universe, they’re portals between two times in the Universe. Like Carl Sagan once said, “You might emerge somewhere else in space, some when-else in time.”

Quantum entanglement, on the other hand, describes the way that two particles can interact in such a way that they become inexorably linked, and essentially ‘share’ an existence.

This means that whatever happens to one particle will directly and instantaneously affect the other – even if it’s light-years away.
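The perfect correlation can be mimicked with a deliberately simplified toy model: each joint measurement of a maximally entangled pair yields matching outcomes, no matter how far apart the particles are. This classical sketch only imitates the correlation – real entanglement is richer, and no usable signal is transmitted either way:

```python
import random

# Toy stand-in for measuring a Bell pair (|00> + |11>)/sqrt(2) in the
# shared basis: the joint outcome is 00 or 11 with equal probability,
# never 01 or 10.
def measure_bell_pair() -> tuple[int, int]:
    outcome = random.choice([0, 1])  # "collapse" picks one branch
    return outcome, outcome          # both particles agree

results = [measure_bell_pair() for _ in range(1000)]
assert all(a == b for a, b in results)  # outcomes always correlated
```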

Okay, now let’s combine the two.

In his new paper, Susskind proposes a scenario where hypothetical Alice and Bob each take a bunch of entangled particles – Alice takes one member of each pair, and Bob takes the other, and they fly off in opposite directions of the Universe in their hypothetical hypersonic jets.

Once in their separate positions, Alice and Bob smash their particles together with such great force that they create two separate black holes.

The result, says Susskind, is two entangled black holes on opposite sides of the Universe, linked in the middle by a giant wormhole.

“If ER = EPR is right, a wormhole will link those black holes; entanglement, therefore, can be described using the geometry of wormholes,” says Tom Siegfried over at Science News.

“Even more remarkable … is the possibility that two entangled subatomic particles alone are themselves somehow connected by a sort of quantum wormhole,” Siegfried adds.

“Since wormholes are contortions of spacetime geometry – described by Einstein’s gravitational equations – identifying them with quantum entanglement would forge a link between gravity and quantum mechanics.”

Is Susskind right? It’s impossible to say just yet, because while he’s published his paper on pre-press website arXiv.org to be openly scrutinised by his peers, it’s yet to go through the formal peer-review process.

But, as Siegfried reports, Susskind isn’t the only one going down this path. Earlier this year, a team of Caltech physicists came up with a similar hypothesis when they attempted to show how changes in quantum states can be linked to curves in spacetime geometry.

In a blog post describing the hypothesis, one of the team, Sean M. Carroll, says the most natural relationship between energy and spacetime curvature in this scenario is given by Einstein’s equation for general relativity.

“The claim, in its most dramatic-sounding form, is that gravity (spacetime curvature caused by energy/momentum) isn’t hard to obtain in quantum mechanics – it’s automatic! Or at least, the most natural thing to expect,” he says.

We’ll have to wait and see if ER = EPR or something closely related bears out, but it’s certainly food for thought, and Susskind for one thinks he’s on to something here.

“To me it seems obvious that if ER = EPR is true, it is a very big deal, and it must affect the foundations and interpretation of quantum mechanics,” he writes, adding that if he’s right, “quantum mechanics and gravity are far more tightly related than we (or at least I) had ever imagined”.

HOW QUANTUM MECHANICS IS CHANGING EVERYTHING WE KNOW ABOUT OUR LIVES


Quantum physics is the new physics that is pointing to something far greater than the materialistic world that we once believed to be the basis of our existence. Not only is it disproving our original perception of space and time, but it is opening the doors to the possibility of time travel, telepathy, and consciousness creating our reality.


Below are some of the major properties of quantum mechanics and the implications they have on the world around us. You won’t be disappointed!

1. Quantum Entanglement: When two sub-atomic particles cross paths with one another, they become “entangled.”

This means that their properties become linked with one another. When these entangled particles are separated (even millions of miles or light-years apart), what happens to one particle instantaneously happens to the other particle.

What this means: This means that information is travelling far faster than the speed of light to be communicated instantaneously across vast distances. This defies what we previously knew to be possible, and also hints at the notion of telepathy having the potential to be scientifically studied.

Einstein referred to this as “spooky action at a distance.” The new physics is currently being harnessed in an attempt to build “quantum computers” that would revolutionize technology as we know it today.

2. Atoms and subatomic particles can be in two places at once. In 2012, Serge Haroche and David Wineland won the Nobel Prize in Physics for work using quantum mechanics to show that particles can be in two places at once. The underlying theory has been tested to an accuracy of 1 part in 100 billion, officially making it the most successful physical theory that has ever existed.

What this means: In theory, this notion correlates with the Many-Worlds Interpretation of quantum mechanics. This theory implies that all potential realities and possibilities already exist, and that there are potentially an infinite number of parallel realities. This theory was brought to life by a scientist named Hugh Everett.

This would technically mean that any actions you judge in others, you have also committed in a parallel reality. It would also mean that what is happening to you right now has already happened and will happen again.

Most importantly, it would essentially mean that we collectively create our personal reality — as the “Implicate” order of existence projects outward, the “Explicate” order around us (See David Bohm’s “Implicate and Explicate Order of Existence”)  projects back inwards. This happens through the means of our collective energy being filtered through our human consciousness to project the outer reality we experience.

3. In the quantum world, everything behaves as both waves and particles! So sub-atomic particles can behave like matter or move in wave patterns. The most mind-blowing part about this is that particles behave like waves when we aren’t looking, and like matter when we are!

The act of looking is what changes the behaviour of the particles.

What this means: This means that consciousness literally influences reality. The act of observing something can actually be responsible for bringing a potential reality to life. You may have heard of the Law of Attraction; quantum physics essentially supports this idea.

When you focus on a desired outcome, it is almost as if you “reel in” that already existing quantum super-position of reality that brings it to life!

4. Quantum particles have the ability to move back and forth through time. Very recently, scientists at the University of Queensland were able to simulate photons travelling through time. In one case, the photon was sent through a wormhole to interact with itself in a previous state. In another, a photon traveled through regular space-time to interact with a different photon.

What this means: Quite simply, this means that our five senses are very limited in the way that they allow us to perceive the world. Classical physics and much of science is based on finding proof from conducting experiments and observing their results. But we interpret our observations based on our limited five senses. Quantum physics is beginning to demonstrate that there is so much more outside of our current mainstream perception of reality.

This also means that time and space aren’t what we have made them out to be. They are not linear functions. This also opens the hypothetical doors to the possibility of time travel. If these sub-atomic particles are able to do this, and we are made of sub-atomic particles, what implication does this have?

One thing I have always found funny about mainstream science is that it prefers to ignore the miraculous and study the ordinary. When astonishing events take place that challenge our foundation of scientific beliefs, scientists that wish to explore them are often condemned by their peers and community.

We know that science is based on proving things through measurement and calculation.
But if what we are perceiving is limited because our five senses only provide a piece of the entire picture, maybe it is time we began to focus on studying what is out of the ordinary.

Quantum mechanics is currently challenging and uprooting some of our belief systems that have previously been based on the limitations of our senses. Rather than looking at the external world, science is now beginning to investigate the internal world in the form of microscopic particle behavior.

To contemplate that these particles are behaving this way and these particles are what form the world around us is truly mind-boggling.

Physicists investigate unusual form of quantum mechanics


In a new study, physicists at Penn State University have for the first time proposed a way to test a little-understood form of quantum mechanics called nonassociative quantum mechanics. So far, all other tests of quantum mechanics have dealt with the associative form, so the new test provides a way to explore this relatively obscure part of the theory.

quantum mechanics

“Nonassociative quantum mechanics has been of mathematical interest for some time (and has recently shown up in certain models of string theory), but it has been impossible to obtain a physical understanding,” coauthor Martin Bojowald at Penn State told Phys.org. “We have developed methods which allow us to do just that, and found a first application with a characteristic and instructive result. One of the features that makes this setting interesting is that much of the usual mathematical toolkit of quantum mechanics is inapplicable.”

Standard quantum mechanics is considered associative because mathematically it obeys the associative property. One of the fundamental concepts of standard quantum mechanics is the wave function, which gives the probability of finding a quantum system in a particular state. (The wave function is what determines the likelihood of Schrödinger’s cat being dead or alive, before the box is opened.) Mathematically, wave functions are vectors, and the mathematical operations involving vectors and the operators that act on them always obey the associative property (AB)C=A(BC), where the way that the parentheses are set doesn’t matter.
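The associative property described above is easy to check numerically, with random matrices standing in for quantum operators:

```python
import numpy as np

# Operators in standard quantum mechanics compose associatively:
# (AB)C = A(BC) -- where you place the parentheses doesn't matter.
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

left = (A @ B) @ C   # multiply A and B first
right = A @ (B @ C)  # multiply B and C first

assert np.allclose(left, right)  # associativity holds
```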

However, a few exotic quantum systems cannot be represented by wave functions, and so do not obey the associative property but instead are described by nonassociative algebra. One example of a nonassociative quantum system is a group of magnetic monopoles, which are hypothetical magnetic particles that have only a north or a south pole, not both like ordinary magnets.

In the new study, the physicists found theoretical evidence pointing to the existence of new, potentially observable, quantum effects that are not found in associative quantum mechanics. These new effects are predicted to cause a charged particle to move in a stable, circular motion in a situation where it otherwise would not in standard quantum mechanics.

More specifically, this situation involves the combination of a magnetic field with a linear force. If there is only a magnetic field, stable circular motion is possible also in standard quantum mechanics, but the linear force disrupts this motion. With the nonassociative effect, there is a new force that can compensate for the external linear force, making it again possible to have stable circular motion.

Although it would be difficult to experimentally realize a system containing monopoles (as monopoles may not even exist), the scientists hope that the effects predicted here may be tested in the not-so-distant future.

“Fundamental monopoles are hypothetical, but there has recently been much research on constructing condensed matter systems consisting of quasi-particles which have properties similar to monopoles,” said coauthor Suddhasattwa Brahma. “In this setting, our new predictions may well be testable. Our equations have to be analyzed in more detail in order to see what should happen in the presence of magnetic fields realized in this context, and how strong the new effects would then be. This process may take a few years, but not much longer.”

Overall, the results could lead to a better understanding of nonassociative quantum mechanics. One of the intriguing consequences of the nonassociative property in quantum mechanics is a “triple” uncertainty relation.

“The usual uncertainty relation limits the precision of simultaneous measurements of position and momentum,” said coauthor Umut Büyükçam. “The triple one limits the precision of simultaneous measurements of all three components of the momentum vector, provided there are magnetic monopoles.

“Just as the standard uncertainty relation played an important role in the development of quantum mechanics, one can expect the triple uncertainty relation to be helpful in further improving our understanding of nonassociative quantum mechanics.”

Quantum computing will bring immense processing possibilities



The one thing everyone knows about quantum mechanics is its legendary weirdness: the basic tenets of the world it describes seem alien to the world we live in. Superposition, where things can be in two states simultaneously – a switch both on and off, a cat both dead and alive. Or entanglement, what Einstein called “spooky action-at-a-distance”, in which objects are invisibly linked even when separated by huge distances.

But weird or not, quantum mechanics is approaching a century old and has found many applications in daily life. As John von Neumann once said: “You don’t understand quantum mechanics, you just get used to it.” Much of electronics is based on quantum physics, and the application of quantum theory to computing could open up huge possibilities for the complex calculations and data processing we see today.

Imagine a computer processor able to harness super-position, to calculate the result of an arbitrarily large number of permutations of a complex problem simultaneously. Imagine how entanglement could be used to allow systems on different sides of the world to be linked and their efforts combined, despite their physical separation. Quantum computing has immense potential, making light work of some of the most difficult tasks, such as simulating the body’s response to drugs, predicting weather patterns, or analysing big datasets.
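The superposition idea can be made concrete with a two-amplitude statevector: a Hadamard gate turns a definite |0⟩ into an equal mix of |0⟩ and |1⟩. A minimal sketch:

```python
import numpy as np

# A single qubit is a 2-component complex vector of amplitudes.
# The Hadamard gate rotates |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])  # the definite state |0>

superposed = H @ ket0
probs = np.abs(superposed) ** 2
print(probs)  # [0.5 0.5] -- equal chance of measuring 0 or 1
```

A quantum processor exploits the fact that gates act on all amplitudes of such a vector at once.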

Such processing possibilities are needed. The first transistors could only just be held in the hand, while today they measure just 14 nm – 500 times smaller than a red blood cell. This relentless shrinking, predicted by Intel founder Gordon Moore as Moore’s law, has held true for 50 years, but cannot hold indefinitely. Silicon can only be shrunk so far, and if we are to continue benefiting from the performance gains we have become used to, we need a different approach.
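The “500 times smaller” comparison checks out if we take a red blood cell to be about 7 micrometres across (a typical textbook figure, assumed here):

```python
# A red blood cell is roughly 7 micrometres (7,000 nm) in diameter;
# a 14 nm transistor feature is therefore ~500 times smaller.
red_blood_cell_nm = 7_000
transistor_nm = 14

ratio = red_blood_cell_nm / transistor_nm
print(ratio)  # 500.0
```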

Quantum fabrication

Advances in nanoscale fabrication have made it possible to mass-produce quantum-scale semiconductors – electronic circuits that exhibit quantum effects such as super-position and entanglement.

Replica of the first ever transistor, manufactured at Bell Labs in 1947. 

The image, captured at the atomic scale, shows a cross-section through one potential candidate for the building blocks of a quantum computer: a semiconductor nano-ring. Electrons trapped in these rings exhibit the strange properties of quantum mechanics, and semiconductor fabrication processes are poised to integrate the elements required to build a quantum computer. While we may be able to construct a quantum computer using structures like these, there are still major challenges involved.

In a classical computer processor a huge number of transistors interact conditionally and predictably with one another. But quantum behaviour is highly fragile: under quantum physics, even measuring the state of a system – such as checking whether a switch is on or off – actually changes what is being observed. Conducting an orchestra of quantum systems to produce useful output that couldn’t easily be handled by a classical computer is extremely difficult.

But there have been huge investments: the UK government announced £270m funding for quantum technologies in 2014 for example, and the likes of Google, NASA and Lockheed Martin are also working in the field. It’s difficult to predict the pace of progress, but a useful quantum computer could be ten years away.

The basic element of a quantum computer is known as a qubit, the quantum equivalent of the bits used in traditional computers. To date, scientists have harnessed many different physical systems to represent qubits, ranging from defects in diamonds to semiconductor nano-structures and tiny superconducting circuits. Each of these has its own advantages and disadvantages, but none has yet met all the requirements for a quantum computer, known as the DiVincenzo criteria.

The most impressive progress has come from D-Wave Systems, a firm that has managed to pack hundreds of qubits on to a small chip similar in appearance to a traditional processor.

Quantum circuitry.

Quantum secrets

The benefits of harnessing quantum effects aren’t limited to computing, however. Whether or not quantum computing will extend or augment digital computing, the same effects can be harnessed for other means. The most mature example is quantum communications.

Quantum physics has been proposed as a means to prevent forgery of valuable objects, such as a banknote or diamond, as illustrated in the image below. Here, the unusual negative rules embedded within quantum mechanics prove useful: perfect copies of unknown states cannot be made, and measurements change the systems they are measuring. These two limitations are combined in this quantum anti-counterfeiting scheme, making it impossible to copy the identity of the object in which they are stored.

The concept of quantum money is, unfortunately, highly impractical, but the same idea has been successfully extended to communications. The idea is straightforward: the act of measuring quantum super-position states alters what you try to measure, so it’s possible to detect the presence of an eavesdropper making such measurements. With the correct protocol, such as BB84, it is possible to communicate privately, with that privacy guaranteed by fundamental laws of physics.
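The basis-matching step of BB84 can be sketched in a few lines. This toy version (no eavesdropper, no error correction) only illustrates how Alice and Bob distil a shared key by keeping the positions where their randomly chosen bases happened to agree:

```python
import random

n = 20
alice_bits  = [random.randint(0, 1) for _ in range(n)]  # secret bits
alice_bases = [random.randint(0, 1) for _ in range(n)]  # 0 = rectilinear, 1 = diagonal
bob_bases   = [random.randint(0, 1) for _ in range(n)]

# With no eavesdropper, Bob reads Alice's bit exactly when their bases
# match; a mismatched basis gives him a random result.
bob_bits = [a if ab == bb else random.randint(0, 1)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Alice and Bob publicly compare bases (never bits) and keep the matches.
shared_key = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)
              if ab == bb]

# Sanity check: wherever bases matched, Bob's bit equals Alice's.
assert all(b == a for a, b, ab, bb in
           zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb)
```

An eavesdropper measuring in a randomly guessed basis would disturb roughly a quarter of the matched bits, which is what Alice and Bob test for in the full protocol.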

Adding a quantum secret to a standard barcode prevents tampering or forgery of valuable goods.

Quantum communication systems are commercially available today from firms such as Toshiba and ID Quantique. While the implementation is clunky and expensive now, it will become more streamlined and miniaturised, just as transistors have miniaturised over the last 60 years.

Improvements to nanoscale fabrication techniques will greatly accelerate the development of quantum-based technologies. And while useful quantum computing still appears to be some way off, its future is very exciting indeed.

 

Quantum mechanics 101: Demystifying tough physics in 4 easy lessons



Ready to level up your working knowledge of quantum mechanics? Check out these four TED-Ed Lessons written by Chad Orzel, Associate Professor in the Department of Physics and Astronomy at Union College and author of How to Teach Quantum Physics to Your Dog.

1. Particles and waves: The central mystery of quantum mechanics

One of the most amazing facts in physics is that everything in the universe, from light to electrons to atoms, behaves like both a particle and a wave at the same time. But how did physicists arrive at this mind-boggling conclusion? In this lesson, Orzel recounts the string of scientists who built on each other’s discoveries to arrive at this ‘central mystery’ of quantum mechanics.

2. Schrödinger’s cat: A thought experiment in quantum mechanics

Austrian physicist Erwin Schrödinger, one of the founders of quantum mechanics, posed this famous question: If you put a cat in a sealed box with a device that has a 50% chance of killing the cat in the next hour, what will be the state of the cat when that time is up? Orzel investigates this thought experiment.

 

3. Einstein’s brilliant mistake: Entangled states

When you think about Einstein and physics, E=mc^2 is probably the first thing that comes to mind. But one of his greatest contributions to the field actually came in the form of an odd philosophical footnote in a 1935 paper he co-wrote — which ended up being wrong. Here, Orzel details Einstein’s “EPR” paper and its insights on the strange phenomena of entangled states.

 

4. What is the Heisenberg Uncertainty Principle?

The Heisenberg Uncertainty Principle states that you can never simultaneously know the exact position and the exact speed of an object. Why not? Because everything in the universe behaves like both a particle and a wave at the same time. In his final lesson, Orzel navigates this complex concept of quantum physics.
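In its standard quantitative form, the principle bounds the product of the spreads in position and momentum:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

where \Delta x and \Delta p are the standard deviations of position and momentum, and \hbar is the reduced Planck constant.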

Can Quantum Computing Reveal the True Meaning of Quantum Mechanics? – The Nature of Reality


Quantum mechanics says not merely that the world is probabilistic, but that it uses rules of probability that no science fiction writer would have had the imagination to invent. These rules involve complex numbers, called “amplitudes,” rather than just probabilities (which are real numbers between 0 and 1). As long as a physical object isn’t interacting with anything else, its state is a huge wave of these amplitudes, one for every configuration that the system could be found in upon measuring it. Left to itself, the wave of amplitudes evolves in a linear, deterministic way. But when you measure the object, you see some definite configuration, with a probability equal to the squared absolute value of its amplitude. The interaction with the measuring device “collapses” the object to whichever configuration you saw.
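The “squared absolute value” rule is easy to see with a concrete amplitude vector (the three amplitudes below are an arbitrary illustrative choice):

```python
import numpy as np

# Born rule: the probability of observing a configuration equals the
# squared absolute value of its complex amplitude. A valid state is
# normalised, so the probabilities sum to 1.
amplitudes = np.array([1j / np.sqrt(2), 0.5, -0.5])

probs = np.abs(amplitudes) ** 2  # -> 0.5, 0.25, 0.25
assert np.isclose(probs.sum(), 1.0)
```

Note that the first amplitude is purely imaginary and the third is negative, yet both yield perfectly ordinary positive probabilities.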

Those, more or less, are the alien laws that explain everything from hydrogen atoms to lasers and transistors, and from which no hint of an experimental deviation has ever been found, from the 1920s until today. But could this really be how the universe operates? Is the “bedrock layer of reality” a giant wave of complex numbers encoding potentialities—until someone looks? And what do we mean by “looking,” anyway?


Could quantum computing help reveal what the laws of quantum mechanics really mean? Adapted from an image by Flickr user Politropix under a Creative Commons license.

There are different interpretive camps within quantum mechanics, which have squabbled with each other for generations, even though, by design, they all lead to the same predictions for any experiment that anyone can imagine doing. One interpretation is Many Worlds, which says that the different possible configurations of a system (when far enough apart) are literally parallel universes, with the “weight” of each universe given by its amplitude. In this view, the whole concept of measurement—and of the amplitude waves collapsing on measurement—is a sort of illusion, playing no fundamental role in physics. All that ever happens is linear evolution of the entire universe’s amplitude wave—including a part that describes the atoms of your body, which (the math then demands) “splits” into parallel copies whenever you think you’re making a measurement. Each copy would perceive only itself and not the others. While this might surprise people, Many Worlds is seen by many (certainly by its proponents, who are growing in number) as the conservative option: the one that adds the least to the bare math.

A second interpretation is Bohmian mechanics, which agrees with Many Worlds about the reality of the giant amplitude wave, but supplements it with a “true” configuration that a physical system is “really” in, regardless of whether or not anyone measures it. The amplitude wave pushes around the “true” configuration in a way that precisely matches the predictions of quantum mechanics. A third option is Niels Bohr’s original “Copenhagen Interpretation,” which says—but in many more words!—that the amplitude wave is just something in your head, a tool you use to make predictions. In this view, “reality” doesn’t even exist prior to your making a measurement of it—and if you don’t understand that, well, that just proves how mired you are in outdated classical ways of thinking, and how stubbornly you insist on asking illegitimate questions.

But wait: if these interpretations (and others that I omitted) all lead to the same predictions, then how could we ever decide which one is right? More pointedly, does it even mean anything for one to be right and the others wrong, or are these just different flavors of optional verbal seasoning on the same mathematical meat? In his recent quantum mechanics textbook, the great physicist Steven Weinberg reviews the interpretive options, ultimately finding all of them wanting. He ends with the hope that new developments in physics will give us better options. But what could those new developments be?

In the last few decades, the biggest new thing in quantum mechanics has been the field of quantum computing and information. The goal here, you might say, is to “put the giant amplitude wave to work”: rather than obsessing over its true nature, simply exploit it to do calculations faster than is possible classically, or to help with other information-processing tasks (like communication and encryption). The key insight behind quantum computing was articulated by Richard Feynman in 1982: to write down the state of n interacting particles, each of which could be in either of two states, quantum mechanics says you need 2^n amplitudes, one for every possible configuration of all n of the particles. Chemists and physicists have known for decades that this can make quantum systems prohibitively difficult to simulate on a classical computer, since 2^n grows so rapidly as a function of n.
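Feynman's exponential blow-up is easy to see directly; a quick sketch:

```python
# Amplitudes needed to describe n interacting two-state particles: 2^n.
def num_amplitudes(n: int) -> int:
    return 2 ** n

for n in (1, 10, 50, 300):
    print(n, "particles ->", num_amplitudes(n), "amplitudes")

# Already at 300 particles, the count exceeds the ~10^80 atoms in the
# visible universe -- hopeless to store explicitly on a classical computer.
assert num_amplitudes(300) > 10 ** 80
```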

But if so, then why not build computers that would themselves take advantage of giant amplitude waves? If nothing else, such computers could be useful for simulating quantum physics! What’s more, in 1994, Peter Shor discovered that such a machine would be useful for more than physical simulations: it could also be used to factor large numbers efficiently, and thereby break most of the cryptography currently used on the Internet. Genuinely useful quantum computers are still a ways away, but experimentalists have made dramatic progress, and have already demonstrated many of the basic building blocks.

I should add that, for my money, the biggest application of quantum computers will be neither simulation nor codebreaking, but simply proving that this is possible at all! If you like, a useful quantum computer would be the most dramatic demonstration imaginable that our world really does need to be described by a gigantic amplitude wave, that there’s no way around that, no simpler classical reality behind the scenes. It would be the final nail in the coffin of the idea—which many of my colleagues still defend—that quantum mechanics, as currently understood, must be merely an approximation that works for a few particles at a time; and when systems get larger, some new principle must take over to stop the exponential explosion.

But if quantum computers provide a new regime in which to probe quantum mechanics, that raises an even broader question: could the field of quantum computing somehow clear up the generations-old debate about the interpretation of quantum mechanics? Indeed, could it do that even before useful quantum computers are built?

At one level, the answer seems like an obvious “no.” Quantum computing could be seen as “merely” a proposed application of quantum mechanics as that theory has existed in physics books for generations. So, to whatever extent all the interpretations make the same predictions, they also agree with each other about what a quantum computer would do. In particular, if quantum computers are built, you shouldn’t expect any of the interpretive camps I listed before to concede that its ideas were wrong. (More likely, each camp will claim that its ideas were vindicated!)

At another level, however, quantum computing makes certain aspects of quantum mechanics more salient—for example, the fact that it takes 2^n amplitudes to describe n particles—and so might make some interpretations seem more natural than others. Indeed that prospect, more than any application, is why quantum computing was invented in the first place. David Deutsch, who’s considered one of the two founders of quantum computing (along with Feynman), is a diehard proponent of the Many Worlds interpretation, and saw quantum computing as a way to convince the world (at least, this world!) of the truth of Many Worlds. Here’s how Deutsch put it in his 1997 book “The Fabric of Reality”:

Logically, the possibility of complex quantum computations adds nothing to a case [for the Many Worlds Interpretation] that is already unanswerable. But it does add psychological impact. With Shor’s algorithm, the argument has been writ very large. To those who still cling to a single-universe world-view, I issue this challenge: explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10^500 or so times the computational resources that can be seen to be present, where was the number factorized? There are only about 10^80 atoms in the entire visible universe, an utterly minuscule number compared with 10^500. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?
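As a back-of-the-envelope check on Deutsch's figure (assuming his 10^500 counts the amplitudes of a quantum register), one can ask how many qubits such a register would need, using the 2^n rule from earlier:

```python
import math

# n qubits carry 2^n amplitudes, so 10^500 amplitudes corresponds to a
# register of about log2(10^500) = 500 * log2(10) ≈ 1661 qubits.
n_qubits = math.ceil(500 * math.log2(10))
print(n_qubits)  # 1661
assert 2 ** n_qubits >= 10 ** 500 > 2 ** (n_qubits - 1)
```

A register of a couple of thousand qubits, in other words, already carries more amplitudes than any classical description could ever enumerate.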

As you might imagine, not all researchers agree that a quantum computer would be “psychological evidence” for Many Worlds, or even that the two things have much to do with each other. Yes, some researchers reply, a quantum computer would take exponential resources to simulate classically (using any known algorithm), but all the interpretations agree about that. And more pointedly: thinking of the branches of a quantum computation as parallel universes might lead you to imagine that a quantum computer could solve hard problems in an instant, by simply “trying each possible solution in a different universe.” That is, indeed, how most popular articles explain quantum computing, but it’s also wrong!

The issue is this: suppose you’re facing some arbitrary problem—like, say, the Traveling Salesman problem, of finding the shortest path that visits a collection of cities—that’s hard because of a combinatorial explosion of possible solutions. It’s easy to program your quantum computer to assign every possible solution an equal amplitude. At some point, however, you need to make a measurement, which returns a single answer. And if you haven’t done anything to boost the amplitude of the answer you want, then you’ll see merely a random answer—which, of course, you could’ve picked for yourself, with no quantum computer needed!
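The point is easy to verify numerically. In this sketch (the six-city instance is hypothetical, and the quantum measurement is simulated classically), measuring an equal superposition over all candidate tours yields nothing better than a uniformly random tour:

```python
import random
from itertools import permutations

# Equal amplitudes over all candidate tours of 6 hypothetical cities.
tours = list(permutations(range(6)))        # 720 candidate solutions
amp = (1 / len(tours)) ** 0.5               # each |amplitude|^2 = 1/720

# Measuring without boosting any amplitude returns a uniformly random
# tour -- exactly what a classical coin flip could have produced.
probs = [amp ** 2] * len(tours)
sampled = random.choices(tours, weights=probs)[0]
print(len(tours), sampled)
```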

For this reason, the only hope for a quantum-computing advantage comes from interference: the key aspect of amplitudes that has no classical counterpart, and indeed, that taught physicists that the world has to be described with amplitudes in the first place. Interference is customarily illustrated by the double-slit experiment, in which we shoot a photon at a screen with two slits in it, and then observe where the photon lands on a second screen behind it. What we find is that there are certain “dark patches” on the second screen where the photon never appears—and yet, if we close one of the slits, then the photon can appear in those patches. In other words, decreasing the number of ways for the photon to get somewhere can increase the probability that it gets there! According to quantum mechanics, the reason is that the amplitude for the photon to land somewhere can receive a positive contribution from the first slit, and a negative contribution from the second. In that case, if both slits are open, then the two contributions cancel each other out, and the photon never appears there at all. (Because the probability is the amplitude squared, both negative and positive amplitudes correspond to positive probabilities.)
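A minimal numerical sketch of this cancellation, with made-up amplitude contributions of +0.5 and -0.5 for the two slits:

```python
# Each slit contributes an amplitude at a "dark patch" on the screen.
slit1 = +0.5
slit2 = -0.5   # equal size, opposite sign

both_open = abs(slit1 + slit2) ** 2   # contributions cancel -> probability 0
one_open = abs(slit1) ** 2            # close slit 2 -> probability 0.25

print(both_open, one_open)  # 0.0 0.25
```

Closing a slit removes the negative contribution, so the photon can now land where it never could before.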

Likewise, when designing algorithms for quantum computers, the goal is always to choreograph things so that, for each wrong answer, some of the contributions to its amplitude are positive and others are negative, so on average they cancel out, leaving an amplitude close to zero. Meanwhile, the contributions to the right answer’s amplitude should reinforce each other (being, say, all positive, or all negative). If you can arrange this, then when you measure, you’ll see the right answer with high probability.
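The simplest concrete instance of such choreography is applying the standard Hadamard transformation twice to a single qubit: the contributions to the “wrong” outcome cancel, while those to the “right” outcome reinforce.

```python
# Two contributions feed each final amplitude. This is exactly what
# happens when a Hadamard gate is applied twice to the state |0>.
s = 2 ** -0.5  # 1/sqrt(2)

# After one Hadamard on |0>, the amplitudes are (s, s).
# The second Hadamard sends |0> -> (s, s) and |1> -> (s, -s), so:
amp0 = s * s + s * s      # contributions reinforce -> amplitude 1
amp1 = s * s + s * (-s)   # contributions cancel    -> amplitude 0

print(round(amp0 ** 2, 10), round(amp1 ** 2, 10))  # 1.0 0.0
```

Measuring therefore yields 0 with certainty, even though halfway through the computation both outcomes had equal amplitude.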

It was precisely by orchestrating such a clever interference pattern that Peter Shor managed to devise his quantum algorithm for factoring large numbers. To do so, Shor had to exploit extremely specific properties of the factoring problem: it was not just a matter of “trying each possible divisor in a different parallel universe.” In fact, an important 1994 theorem of Bennett, Bernstein, Brassard, and Vazirani shows that what you might call the “naïve parallel-universe approach” never yields an exponential speed improvement. The naïve approach can reveal solutions in only the square root of the number of steps that a classical computer would need, an important phenomenon called the Grover speedup. But that square-root advantage turns out to be the limit: if you want to do better, then like Shor, you need to find something special about your problem that lets interference reveal its answer.
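The Grover square-root speedup is simple enough to simulate classically for tiny instances. The sketch below is a straightforward textbook simulation (not any optimized implementation), running the oracle-plus-reflection iteration about sqrt(N) times on N = 16 items:

```python
from math import pi, sqrt

# Classical simulation of Grover search over n_items entries, pure Python.
def grover(n_items, marked):
    amps = [1 / sqrt(n_items)] * n_items        # uniform superposition
    iterations = round(pi / 4 * sqrt(n_items))  # ~sqrt(N) iterations suffice
    for _ in range(iterations):
        amps[marked] *= -1                      # oracle: flip marked item's sign
        mean = sum(amps) / n_items
        amps = [2 * mean - a for a in amps]     # "diffusion": reflect about mean
    return [a * a for a in amps]                # measurement probabilities

probs = grover(16, marked=5)
print(round(probs[5], 3))  # 0.961 -- found with only 3 iterations, not 16
```

A classical search would need about N/2 = 8 probes on average; Grover's iteration finds the marked item with high probability after only ~sqrt(N) steps.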

What are the implications of these facts for Deutsch’s argument that only Many Worlds can explain how a quantum computer works? At the least, we should say that the “exponential cornucopia of parallel universes” almost always hides from us, revealing itself only in very special interference experiments where all the “universes” collaborate, rather than any one of them shouting above the rest. But one could go even further. One could say: To whatever extent the parallel universes do collaborate in a huge interference pattern to reveal (say) the factors of a number, to that extent they never had separate identities as “parallel universes” at all—even according to the Many Worlds interpretation! Rather, they were just one interfering, quantum-mechanical mush. And from a certain perspective, all the quantum computer did was to linearly transform the way in which we measured that mush, as if we were rotating it to see it from a more revealing angle. Conversely, whenever the branches do act like parallel universes, Many Worlds itself tells us that we only observe one of them—so from a strict empirical standpoint, we could treat the others (if we liked) as unrealized hypotheticals. That, at least, is the sort of reply a modern Copenhagenist might give, if she wanted to answer Deutsch’s argument on its own terms.

There are other aspects of quantum information that seem more “Copenhagen-like” than “Many-Worlds-like”—or at least, for which thinking about “parallel universes” too naïvely could lead us astray. So for example, suppose Alice sends n quantum-mechanical bits (or qubits) to Bob, and then Bob measures the qubits in any way he likes. How many classical bits can Alice transmit to Bob that way? If you remember that n qubits require 2^n amplitudes to describe, you might conjecture that Alice could achieve an incredible information compression—“storing one bit in each parallel universe.” But alas, an important result called Holevo’s Theorem says that, because of the severe limitations on what Bob learns when he measures the qubits, such compression is impossible. In fact, by sending n qubits to Bob, Alice can reliably communicate only n bits (or 2n bits, if Alice and Bob shared quantum correlations in advance), essentially no better than if she’d sent the bits classically. So for this task, you might say, the amplitude wave acts more like “something in our heads” (as the Copenhagenists always said) than like “something out there in reality” (as the Many-Worlders say).

But the Many-Worlders don’t need to take this lying down. They could respond, for example, by pointing to other, more specialized communication problems, which it’s been proven Alice and Bob can solve using exponentially fewer qubits than classical bits. Here’s one example of such a problem, drawing on a 1999 theorem of Ran Raz and a 2010 theorem of Boaz Klartag and Oded Regev: Alice knows a vector in a high-dimensional space, while Bob knows two orthogonal subspaces. Promised that the vector lies in one of the two subspaces, Bob must figure out which one holds the vector. Quantumly, Alice can encode the components of her vector as amplitudes—in effect, squeezing n numbers into exponentially fewer qubits. And crucially, after receiving those qubits, Bob can measure them in a way that doesn’t reveal everything about Alice’s vector, but does reveal which subspace it lies in, which is the one thing Bob wanted to know.
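Here is a toy version of that encoding in ordinary linear algebra (the four-dimensional example and the specific subspaces are made up for illustration; a real instance would use a vastly higher dimension):

```python
# Toy vector-in-subspace problem. Alice's unit vector in R^4 would fit
# in just log2(4) = 2 qubits, its components becoming the amplitudes.
v = [0.5, 0.5, 0.5, 0.5]                       # Alice's vector, lying in A

# Bob's two orthogonal subspaces, given by orthonormal bases:
A = [[0.5, 0.5, 0.5, 0.5], [0.5, -0.5, 0.5, -0.5]]
B = [[0.5, 0.5, -0.5, -0.5], [0.5, -0.5, -0.5, 0.5]]

def prob_in(subspace, vec):
    # Probability that measuring the encoded state finds it in `subspace`:
    # the squared length of the projection of vec onto that subspace.
    return sum(sum(b * x for b, x in zip(basis_vec, vec)) ** 2
               for basis_vec in subspace)

print(prob_in(A, v), prob_in(B, v))  # 1.0 0.0 -> Bob learns "subspace A"
```

Bob's measurement answers the one promised question with certainty, while revealing almost nothing else about the exponentially many components of Alice's vector.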

So, do the Many Worlds become “real” for these special problems, but retreat back to being artifacts of the math for ordinary information transmission?

To my mind, one of the wisest replies came from the mathematician and quantum information theorist Boris Tsirelson, who said: “a quantum possibility is more real than a classical possibility, but less real than a classical reality.” In other words, this is a new ontological category, one that our pre-quantum intuitions simply don’t have a good slot for. From this perspective, the contribution of quantum computing is to delineate for which tasks the giant amplitude wave acts “real and Many-Worldish,” and for which other tasks it acts “formal and Copenhagenish.” Quantum computing can give both sides plenty of fresh ammunition, without handing an obvious victory to either.

So then, is there any interpretation that flat-out doesn’t fare well under the lens of quantum computing? While some of my colleagues will strongly disagree, I’d put forward Bohmian mechanics as a candidate. Recall that David Bohm’s vision was of real particles, occupying definite positions in ordinary three-dimensional space, but which are jostled around by a giant amplitude wave in a way that perfectly reproduces the predictions of quantum mechanics. A key selling point of Bohm’s interpretation is that it restores the determinism of classical physics: all the uncertainty of measurement, we can say in his picture, arises from lack of knowledge of the initial conditions. I’d describe Bohm’s picture as striking and elegant—as long as we’re only talking about one or two particles at a time.

But what happens if we try to apply Bohmian mechanics to a quantum computer—say, one that’s running Shor’s algorithm to factor a 10,000-digit number, using hundreds of thousands of particles? We can do that, but if we do, talking about the particles’ “real locations” will add spectacularly little insight. The amplitude wave, you might say, will be “doing all the real work,” with the “true” particle positions bouncing around like comically irrelevant fluff. Nor, for that matter, will the bouncing be completely deterministic. The reason for this is technical: it has to do with the fact that, while particles’ positions in space are continuous, the 0’s and 1’s in a computer memory (which we might encode, for example, by the spins of the particles) are discrete. And one can prove that, if we want to reproduce the predictions of quantum mechanics for discrete systems, then we need to inject randomness at many points in time, rather than only at the beginning of the universe.

But it gets worse. In 2005, I proved a theorem that says that, in any theory like Bohmian mechanics, if you wanted to calculate the entire trajectory of the “real” particles, you’d need to solve problems that are thought to be intractable even for quantum computers. One such problem is the so-called collision problem, where you’re given a cryptographic hash function (a function that maps a long message to a short “hash value”) and asked to find any two messages with the same hash. In 2002, I proved that, at least if you use the “naïve parallel-universe” approach, any quantum algorithm for the collision problem requires at least ~H^(1/5) steps, where H is the number of possible hash values. (This lower bound was subsequently improved to ~H^(1/3) by Yaoyun Shi, exactly matching an upper bound of Brassard, Høyer, and Tapp.) By contrast, if (with godlike superpower) you could somehow see the whole histories of Bohmian particles, you could solve the collision problem almost instantly.
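For concreteness, here is what the collision problem itself asks, sketched with a classical “birthday” search (the truncation of SHA-256 to 16 bits is just so this toy search finishes instantly; it is not part of the real problem):

```python
import hashlib

# The collision problem: find two distinct messages with the same hash.
# Classically, after ~sqrt(H) random messages you expect a repeat among
# H possible hash values. Here H = 2^16, via a truncated SHA-256.
def tiny_hash(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest()[:2], "big")  # 16 bits

seen = {}
i = 0
while True:
    msg = str(i).encode()
    h = tiny_hash(msg)
    if h in seen:
        print("collision:", seen[h], msg, "hash value", h)
        break
    seen[h] = msg
    i += 1
```

The birthday search costs ~H^(1/2) steps classically; the quantum lower bound above says even a quantum computer needs ~H^(1/3), whereas godlike access to Bohmian histories would make the problem essentially free.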

What makes this interesting is that, if you ask to see the locations of Bohmian particles at any one time, you won’t find anything that you couldn’t have easily calculated with a standard, garden-variety quantum computer. It’s only when you ask for the particles’ locations at multiple times—a question that Bohmian mechanics answers, but that ordinary quantum mechanics rejects as meaningless—that you’re able to see multiple messages with the same hash, and thereby solve the collision problem.

My conclusion is that, if you believe in the reality of Bohmian trajectories, you believe that Nature does even more computational work than a quantum computer could efficiently simulate—but then it hides the fruits of its labor where no one can ever observe it. Now, this sits uneasily with a principle that we might call “Occam’s Razor with Computational Aftershave.” Namely: In choosing a picture of physical reality, we should be loath to posit computational effort on Nature’s part that vastly exceeds what could ever in principle be observed. (Admittedly, some people would probably argue that the Many Worlds interpretation violates my “aftershave principle” even more flagrantly than Bohmian mechanics does! But that depends, in part, on what we count as “observation”: just our observations, or also the observations of any parallel-universe doppelgängers?)

Could future discoveries in quantum computing theory settle once and for all, to every competent physicist’s satisfaction, “which interpretation is the true one”? To me, it seems much more likely that future insights will continue to do what the previous ones did: broaden our language, strip away irrelevancies, clarify the central issues, while still leaving plenty to argue about for people who like arguing. In the end, asking how quantum computing affects the interpretation of quantum mechanics is sort of like asking how classical computing affects the debate about whether the mind is a machine. In both cases, there was a range of philosophical positions that people defended before a technology came along, and most of those positions still have articulate defenders after the technology. So, by that standard, the technology can’t be said to have “resolved” much! Yet the technology is so striking that even the idea of it—let alone the thing itself—can shift the terms of the debate, which analogies people use in thinking about it, which possibilities they find natural and which contrived. This might, more generally, be the main way technology affects philosophy.

WE’RE ONE STEP CLOSER TO TELEPORTATION: ATOMS CAN BE IN TWO PLACES AT THE SAME TIME.


Researchers at the University of Bonn have shown that Caesium atoms do not follow well-defined paths.

Can a penalty kick simultaneously score a goal and miss? For very small objects, at least, this is possible: according to the predictions of quantum mechanics, microscopic objects can take different paths at the same time. The world of macroscopic objects follows other rules: the football always moves in a definite direction. But is this always correct? Physicists of the University of Bonn have constructed an experiment designed to possibly falsify this thesis. Their first experiment shows that Caesium atoms can indeed take two paths at the same time.

Almost 100 years ago, physicists Werner Heisenberg, Max Born and Erwin Schrödinger created a new field of physics: quantum mechanics. Objects of the quantum world – according to quantum theory – no longer move along a single well-defined path. Rather, they can simultaneously take different paths and end up at different places at once. Physicists speak of quantum superposition of different paths.

At the level of atoms, it looks as if objects indeed obey quantum mechanical laws. Over the years, many experiments have confirmed quantum mechanical predictions. In our macroscopic daily experience, however, we witness a football flying along exactly one path; it never strikes the goal and misses at the same time. Why is that so?
“There are two different interpretations,” says Dr. Andrea Alberti of the Institute of Applied Physics of the University of Bonn. “Quantum mechanics allows superposition states of large, macroscopic objects. But these states are very fragile; even following the football with our eyes is enough to destroy the superposition and make it follow a definite trajectory.”

Do “large” objects play by different rules?

But it could also be that footballs obey completely different rules than those that apply to single atoms. “Let us talk about the macro-realistic view of the world,” Alberti explains. “According to this interpretation, the ball always moves on a specific trajectory, independent of our observation, and in contrast to the atom.”

But which of the two interpretations is correct? Do “large” objects move differently from small ones? In collaboration with Dr. Clive Emary of the University of Hull in the U.K., the Bonn team has come up with an experimental scheme that may help to answer this question. “The challenge was to develop a measurement scheme of the atoms’ positions which allows one to falsify macro-realistic theories,” adds Alberti.

The physicists describe their research in the journal “Physical Review X”: with two optical tweezers, they grabbed a single Caesium atom and pulled it in two opposing directions. In the macro-realist’s world, the atom would then be at only one of the two final locations. Quantum-mechanically, the atom would instead occupy a superposition of the two positions.

“We have now used indirect measurements to determine the final position of the atom in the most gentle way possible,” says PhD student Carsten Robens. Even such an indirect measurement (see figure) significantly modified the result of the experiments. This observation excludes – falsifies, as Karl Popper would say more precisely – the possibility that Caesium atoms follow a macro-realistic theory. Instead, the experimental findings of the Bonn team fit well with an interpretation based on superposition states that get destroyed when the indirect measurement occurs. All we can do is accept that the atom has indeed taken different paths at the same time.

“This is not yet proof that quantum mechanics holds for large objects,” cautions Alberti. “The next step is to separate the Caesium atom’s two positions by several millimetres. Should we still find the superposition in our experiment, the macro-realistic theory would suffer another setback.”

Quantum mechanics breakthrough, 3-D printed human heart, and paraplegia therapy.


It’s also been a very big week for medical science. A breakthrough therapy has allowed four paraplegic men to voluntarily move their legs. Funded by the Christopher Reeve Foundation and NIH, the therapy is based on an implanted epidural stimulator that delivers electric current to the lower spine. Thus far, it has allowed for movement of hips, ankles and toes. Meanwhile, researchers at Edinburgh University in Scotland have rejuvenated a living organ for the first time—they increased levels of a protein that controls gene switching in a mouse, resulting in the rejuvenation of a thymus that had deteriorated due to age; afterward, the organ was once again able to produce T-cells.

Also making big news this week, scientists confirmed that a scroll that mentions Jesus’s wife is ancient. After studying the ancient papyrus sheet, a team of researchers working in the U.S. concluded that it was not a forgery, a finding that is likely to cause a stir in the Christian community as it suggests that a woman played a far more important role in the life of Jesus than has been mentioned in the New Testament. 

A team at the University of Tokyo has found a way to control individual neurons in the brain of a mouse by sending reward signals to its hypothalamus, one of the brain’s pleasure centers. In so doing, the researchers discovered that they were able to get the mouse to turn on neurons in its own hippocampus.

Elsewhere, another team of researchers at NYU Langone Medical Center has found that memory accuracy and strength can be manipulated during sleep by exposing rats to certain odors while they snooze. The hope is that such therapy may forestall certain neurodegenerative disorders. 



Also, there is news out of the University of Louisville as scientists try 3-D printing to build a human heart—they’ve already printed out small veins and heart valves. The research team believes they may be able to print all of the major heart parts, ready for assembly, in as little as five years. 

And at the Georgia Institute of Technology, a new study explains evolution of duplicate genes—researchers there have shown explicitly how the processes of DNA methylation and duplicate gene evolution are related and how some duplicate genes could have escaped elimination long ago from the genome, leading to the genetic innovation we see now in modern life. 

In other news, physicists created lightning in a race to develop a quantum technology microchip. Physicists working in England have developed a new microchip that can hold the voltage equivalent of a micron-scale lightning strike—it could very well prove to be the key for developing the next generation of super-fast quantum computers.

And finally, scientists discovered a novel way to make ethanol without corn or other plants—they’ve used a metal catalyst that can produce ethanol from carbon monoxide at room temperature and pressure. If it can be scaled up and shown to be cost effective, the technique could prove to be a true game changer.