Single photon detected but not destroyed.


First instrument built that can witness the passage of a light particle without absorbing it.

Physicists have seen a single particle of light and then let it go on its way. The feat was possible thanks to a new technique that, for the first time, detects optical photons without destroying them. The technology could eventually offer perfect detection of photons, providing a boost to quantum communication and even biological imaging.

Plenty of commercially available instruments can identify individual light particles, but these instruments absorb the photons and use the energy to produce an audible click or some other signal of detection.

Quantum physicist Stephan Ritter and his colleagues at the Max Planck Institute of Quantum Optics in Garching, Germany, wanted to follow up on a 2004 proposal of a nondestructive method for detecting photons. Instead of capturing photons, this instrument would sense their presence, taking advantage of the eccentric realm of quantum mechanics in which particles can exist in multiple states and roam in multiple places simultaneously.

Ritter and his team started with a pair of highly reflective mirrors separated by a half-millimeter-wide cavity. Then they placed a single atom of rubidium in the cavity to function as a security guard. They chose rubidium because it can take on two distinct identities, which are determined by the arrangement of its electrons. In one state, it’s a 100 percent effective sentry, preventing photons from entering the cavity. In the other, it’s a totally useless lookout, allowing photons to enter the cavity. When photons get in, they bounce back and forth about 20,000 times before exiting.

The trick was manipulating the rubidium so that it was in a so-called quantum superposition of these two states, allowing one atom to be an overachiever and a slacker at the same time. Consequently, each incoming photon took multiple paths simultaneously, both slipping into the cavity undetected and being stopped at the door and reflected away. Each time the attentive state of the rubidium turned away a photon, a measurable property of the atom called its phase changed. If the phases of the two states of the rubidium atom differed, the researchers knew that the atom had encountered a photon.
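In code, the logic of this phase-kick measurement looks something like the following minimal sketch. This is our schematic two-state model, not the Garching group's analysis; the state labels and which branch picks up the phase are illustrative.

```python
import numpy as np

# Minimal sketch of nondestructive photon detection (our illustration).
# The rubidium atom starts in an equal superposition of its "sentry" state |c>
# (photon is turned away at the cavity) and its "slacker" state |u> (photon
# enters the cavity). Reflection of a photon imprints a pi phase shift on one
# branch of the superposition; measuring the atom in the rotated (+/-) basis
# then reveals whether a photon came by, without absorbing it.

ket_u = np.array([1.0, 0.0])          # uncoupled state: photon enters cavity
ket_c = np.array([0.0, 1.0])          # coupled state: photon is reflected
plus  = (ket_u + ket_c) / np.sqrt(2)  # atom prepared in superposition
minus = (ket_u - ket_c) / np.sqrt(2)

def reflect_photon(atom_state):
    """Photon reflection kicks the relative phase of the superposition by pi."""
    phase_kick = np.diag([1.0, -1.0])  # schematic: pi phase on one branch
    return phase_kick @ atom_state

no_photon  = plus                      # nothing happened
one_photon = reflect_photon(plus)      # a photon bounced off the cavity

# Probability that a (+/-)-basis measurement flags "photon was here":
print(abs(minus @ no_photon) ** 2)   # 0.0 -> no false alarm
print(abs(minus @ one_photon) ** 2)  # 1.0 -> photon detected, not absorbed
```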

To confirm their results, the researchers placed a conventional detector outside the apparatus to capture photons after their rubidium rendezvous, the team reports November 14 in Science.

“It’s a very cool experiment,” says Alan Migdall, who leads the quantum optics group at the National Institute of Standards and Technology in Gaithersburg, Md. But he warns that identifying photons without destroying them does not mean that the outgoing photon is the same as it was prior to detection. “You’ve pulled some information out of it, so you do wind up affecting it,” he says. Ritter says he expects the photons’ properties are largely unchanged, but he acknowledges that his team needs to perform more measurements to confirm that hypothesis.

Ritter notes that no photon detector is perfect, and his team’s is no exception: It failed to detect a quarter of incoming photons, and it absorbed a third of them. But he says the power of the technique is that, for many applications of single-photon detectors, each detector wouldn’t have to be perfect. Ritter envisions a nested arrangement of improved detectors that, as long as they did not absorb photons, would almost guarantee that every photon is counted. Ultimately, that could benefit fields such as medicine and molecular biology, in which scientists require precise imaging of objects in low-light environments.

Quantum ‘sealed envelope’ system enables ‘perfectly secure’ information storage.


A breakthrough in quantum cryptography demonstrates that information can be encrypted and then decrypted with complete security using the combined power of quantum theory and relativity – allowing the sender to dictate the unveiling of coded information without any possibility of intrusion or manipulation.

Scientists sent encrypted data between pairs of sites in Geneva and Singapore, kept “perfectly secure” for fifteen milliseconds – putting into practice what cryptographers call a ‘bit commitment’ protocol, based on theoretical work by study co-author Dr Adrian Kent, from Cambridge’s Department of Applied Mathematics and Theoretical Physics.

Researchers describe it as the first step towards impregnable information networks controlled by “the combined power of Einstein’s relativity and quantum theory”, which might one day, for example, revolutionise financial trading and other markets across the world.

‘Bit commitment’ is a mathematical version of a securely sealed envelope. Data are delivered from party A to party B in a locked state that cannot be changed once sent and can only be revealed when party A provides the key – with security guaranteed, even if either of the parties tries to cheat.
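For intuition, here is the sealed-envelope interface in its simplest classical form: a toy hash-based commitment. To be clear, this is only an illustration of the primitive. Its security rests on computational assumptions about SHA-256, whereas the protocol reported here derives unconditional security from quantum theory and relativistic signalling limits, which a few lines of Python cannot reproduce.

```python
import hashlib, secrets

# Toy *classical* bit commitment: illustrates the sealed-envelope interface.
def commit(bit: int):
    nonce = secrets.token_bytes(32)                      # random padding
    envelope = hashlib.sha256(nonce + bytes([bit])).hexdigest()
    return envelope, nonce                               # send envelope, keep nonce

def reveal(envelope: str, bit: int, nonce: bytes) -> bool:
    # Party B checks that the opened bit matches the sealed envelope.
    return hashlib.sha256(nonce + bytes([bit])).hexdigest() == envelope

sealed, key = commit(1)        # party A commits to the bit 1
print(reveal(sealed, 1, key))  # True  -- honest opening
print(reveal(sealed, 0, key))  # False -- A cannot claim it was 0
```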

The technique could one day be used for everything from global financial trading to secure voting and even long-distance gambling, although researchers point out that this is the “very first step into new territory”.

This is a significant breakthrough in the world of ‘quantum cryptography’ – achieving something that was once believed to be impossible. The results are published in the journal Physical Review Letters.

“This is the first time perfectly secure bit commitment – relying on the laws of physics and nothing else – has been demonstrated,” said Adrian Kent.

“It is immensely satisfying to see these theoretical ideas at last made practical thanks to the ingenuity of all the theorists and experimenters in this collaboration.”

Any signal between Geneva and Singapore takes at least fifteen milliseconds – with a millisecond equal to a thousandth of a second. This blink-of-an-eye is long enough with current technology to allow data to be handed over encrypted at both sites, and later decrypted – with security “unconditionally guaranteed” by the laws of physics, say the team.
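A back-of-envelope check (our own figures, not the team's, taking the straight-line separation of the two cities as roughly 9,300 km):

```latex
t_{\text{signal}} \ge \frac{d}{c} \approx \frac{9.3\times 10^{6}\ \text{m}}{3.0\times 10^{8}\ \text{m/s}} \approx 31\ \text{ms},
\qquad
\frac{d}{2c} \approx 15.5\ \text{ms}
```

In relativistic bit-commitment schemes, the guaranteed window for a single round is of order d/2c, the time for light to cover half the separation, which is consistent with the fifteen milliseconds quoted above.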

The researchers have exploited two different areas of physics: Einstein’s special relativity – which describes how space and time are measured by observers in uniform relative motion – combined with the power of quantum theory, the new physics of the subatomic world that Einstein famously dismissed as “spooky”.

Completely secure ‘bit commitment’ using quantum theory alone is known to be impossible, say researchers, and the “extra control” provided by relativity is crucial.

Professor Gilles Brassard FRS of the Université de Montréal, one of the co-inventors of quantum cryptography who was not involved in this study, spoke of the “vision” he had fifteen years ago – when trying to combine quantum ‘bit commitment’ with relativity to “save” the theory – in which Einstein and early quantum physicist Niels Bohr “rise from their graves and shake hands at last”:

“Alas, my idea at the time was flawed. I am so thrilled to see this dream finally come true, not only in theory but also as a beautiful experiment!” he said.

Bit commitment is a building block – what researchers call a “primitive” – that can be put together in lots of ways to achieve increasingly complex tasks, they say. “I see this as the first step towards a global network of information controlled by the combined power of relativity and quantum theory,” Kent said.

One possible future use of relativistic bit commitment could be global stock markets and other trading networks. It might be a way of leveling the technological ‘arms race’ in which traders acquire and exploit information as fast as possible, the team suggest, although they stress that at such an early stage these suggestions are speculative.

The new study builds on previous experiments that, while successful, had to assume limitations in the technology of one or both parties – and were consequently not entirely “safe or satisfactory”, says Kent, “since you never really know what technology is out there”.

Quantum experiment shows how time ‘emerges’ from entanglement.


Time is an emergent phenomenon that is a side effect of quantum entanglement, say physicists. And they have the first experimental results to prove it

When the new ideas of quantum mechanics spread through science like wildfire in the first half of the 20th century, one of the first things physicists did was to apply them to gravity and general relativity. The results were not pretty.

It immediately became clear that these two foundations of modern physics were entirely incompatible. When physicists attempted to meld the approaches, the resulting equations were bedeviled with infinities, making it impossible to make sense of the results.

Then in the mid-1960s, there was a breakthrough. The physicists John Wheeler and Bryce DeWitt successfully combined the previously incompatible ideas in a key result that has since become known as the Wheeler-DeWitt equation. This is important because it avoids the troublesome infinities—a huge advance.

But it didn’t take physicists long to realise that while the Wheeler-DeWitt equation solved one significant problem, it introduced another. The new problem was that time played no role in this equation. In effect, it says that nothing ever happens in the universe, a prediction that is clearly at odds with the observational evidence.

This conundrum, which physicists call ‘the problem of time’, has proved to be a thorn in the flesh of modern physicists, who have tried to ignore it but with little success.

Then in 1983, the theorists Don Page and William Wootters came up with a novel solution based on the quantum phenomenon of entanglement. This is the exotic property in which two quantum particles share the same existence, even though they are physically separated.

Entanglement is a deep and powerful link, and Page and Wootters showed how it can be used to measure time. Their idea was that the way a pair of entangled particles evolve is a kind of clock that can be used to measure change.

But the results depend on how the observation is made. One way to do this is to compare the change in the entangled particles with an external clock that is entirely independent of the universe. This is equivalent to a god-like observer outside the universe measuring the evolution of the particles using an external clock.

In this case, Page and Wootters showed that the particles would appear entirely unchanging—that time would not exist in this scenario.

But there is another way to do it that gives a different result. This is for an observer inside the universe to compare the evolution of the particles with the rest of the universe. In this case, the internal observer would see a change, and this difference in the evolution of entangled particles compared with everything else is an important measure of time.

This is an elegant and powerful idea. It suggests that time is an emergent phenomenon that comes about because of the nature of entanglement. And it exists only for observers inside the universe. Any god-like observer outside sees a static, unchanging universe, just as the Wheeler-DeWitt equations predict.

Of course, without experimental verification, Page and Wootters’ ideas are little more than a philosophical curiosity. And since it is never possible to have an observer outside the universe, there seemed little chance of ever testing the idea.

Until now. Today, Ekaterina Moreva at the Istituto Nazionale di Ricerca Metrologica (INRIM) in Turin, Italy, and a few pals have performed the first experimental test of the Page-Wootters ideas. And they confirm that time is indeed an emergent phenomenon for ‘internal’ observers but absent for external ones.

The experiment involves the creation of a toy universe consisting of a pair of entangled photons and an observer that can measure their state in one of two ways. In the first, the observer measures the evolution of the system by becoming entangled with it. In the second, a god-like observer measures the evolution against an external clock which is entirely independent of the toy universe.

The experimental details are straightforward. The entangled photons each have a polarisation that can be changed by passing them through a birefringent plate. In the first set up, the observer measures the polarisation of one photon, thereby becoming entangled with it. He or she then compares this with the polarisation of the second photon. The difference is a measure of time.

In the second set up, the photons again both pass through the birefringent plates which change their polarisations. However, in this case, the observer only measures the global properties of both photons by comparing them against an independent clock.

In this case, the observer cannot detect any difference between the photons without becoming entangled with one or the other. And if there is no difference, the system appears static. In other words, time does not emerge.
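The essential logic can be captured in a few lines. This is a toy numerical sketch under our own simplifying assumptions, not the INRIM analysis: we use the rotationally invariant singlet state, for which identical rotations of both photons leave the global state untouched.

```python
import numpy as np

# Toy numeric version of the two-photon logic (our sketch).
# The photon pair starts in the singlet state (|HV> - |VH>)/sqrt(2).
# A birefringent plate of "thickness" theta rotates a photon's
# polarisation, playing the role of time evolution.

H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)

def R(theta):  # polarisation rotation = evolution for a "time" theta
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

theta = 0.7  # arbitrary evolution "time"

# God-like observer: both photons evolve, only the global state is examined.
evolved = np.kron(R(theta), R(theta)) @ singlet
print(np.allclose(evolved, singlet))   # True -> globally, nothing ever changes

# Internal observer: measure photon 1 as H (becoming correlated with it);
# photon 2 collapses to V, then evolves for theta before being measured.
photon2 = R(theta) @ V
print(abs(photon2 @ V) ** 2)           # cos^2(theta) -> the outcome tracks
                                       # theta, so this observer sees time flow
```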

“Although extremely simple, our model captures the two, seemingly contradictory, properties of the Page-Wootters mechanism,” say Moreva and co.

That’s an impressive experiment. Emergence is a popular idea in science. In particular, physicists have recently become excited about the idea that gravity is an emergent phenomenon. So it’s a relatively small step to think that time may emerge in a similar way.

What emergent gravity has lacked, of course, is an experimental demonstration that shows how it works in practice. That’s why Moreva and co’s work is significant. It places an abstract and exotic idea on firm experimental footing for the first time.

Perhaps most significant of all is the implication that quantum mechanics and general relativity are not so incompatible after all. When viewed through the lens of entanglement, the famous ‘problem of time’ just melts away.

The next step will be to extend the idea further, particularly to the macroscopic scale. It’s one thing to show how time emerges for photons; it’s quite another to show how it emerges for larger things such as humans and train timetables.

And therein lies another challenge.

New particle might make quantum condensation at room temperature possible.


Researchers from FOM Institute AMOLF, Philips Research, and the Autonomous University of Madrid have identified a new type of particle that might make quantum condensation possible at room temperature. The particles, so-called PEPs, could be used for fundamental studies of quantum mechanics and for applications in lasers and LEDs. The researchers published their results on 18 October in Physical Review Letters.


In quantum condensation (also known as Bose-Einstein condensation) microscopic particles with different energy levels collapse into a single macroscopic quantum state. In that state, particles can no longer be distinguished. They lose their individuality and so the matter can be considered to be one ‘superparticle’.

Quantum condensation was predicted in the 1920s by Bose and Einstein, who theorised that particles will form a condensate at very low temperatures. The first experimental demonstration of the quantum condensate followed in the 1990s, when a gas of atoms was cooled to just a few billionths of a degree above absolute zero (-273°C). The need for such an extremely low temperature is related to the mass of the particles: the heavier the particles, the lower the temperature at which condensation occurs. This motivated an ongoing search for lighter particles that may condense at higher temperatures than atoms. The eventual goal is to find particles that form a condensate at room temperature.
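For reference, the textbook result for an ideal Bose gas (a standard formula, not one derived in this paper) makes the mass dependence explicit:

```latex
k_B T_c = \frac{2\pi\hbar^{2}}{m}\left(\frac{n}{\zeta(3/2)}\right)^{2/3},
\qquad \zeta(3/2) \approx 2.612
```

At fixed particle density n, the critical temperature T_c scales as 1/m: a particle a trillion times lighter than an atom could in principle condense at a temperature a trillion times higher, which is what pushes the threshold from nanokelvins towards room temperature.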

PEPs

The researchers have created a particle that is a potential candidate for fulfilling the quest: the extremely light plasmon-exciton-polariton (PEP). This particle is a hybrid between light and matter. It consists of photons (light particles), plasmons (particles composed of electrons oscillating in metallic nanoparticles) and excitons (bound electron-hole pairs in the light-emitting material).

The researchers made PEPs using an array of metallic nanoparticles coated with molecules that emit light. This system generates PEPs when it is loaded with energy. Through a careful design of the coupling between plasmons, excitons and photons, the researchers created PEPs with a mass a trillion times smaller than the mass of atoms.

Because of their small mass, these PEPs are suitable candidates for quantum condensation even at room temperature. However, due to losses in the system (such as absorption in the metal) PEPs have a short lifespan, which makes keeping them around long enough to condense a challenge.

First steps

The researchers have shown the first steps towards condensation of PEPs, demonstrating that PEPs cool down as their density increases. However, in the current system cooling down is limited by properties of the organic molecules used in the experiments, which lead to a saturation of the PEP density before condensation sets in. The researchers envisage that it should be possible to overcome these challenges in the future.

Applications

To a large extent, PEPs are composed of photons. Therefore, their decay results in the emission of light. This emitted light has unique properties, which could constitute the basis of new optical devices. In view of recent advances from AMOLF and Philips Research towards improving white LEDs with similar systems, the researchers suggest that light from a Bose-Einstein condensate might illuminate our living rooms in the future.

The power of one: Single photons illuminate quantum technology


Quantum mechanics, which aims to describe the nano-scale world around us, has already led to the development of many technologies ubiquitous in modern life, including broadband optical fibre communication and smartphone displays.

These devices operate using billions and billions of photons, the smallest indivisible quanta of light – but many powerful quantum effects (such as enabling quantum computing) can only be harnessed when working with a single photon at a time.

The quantum science community has been waiting for more than a decade for a compact optical chip that delivers exactly one photon at a time at very high rates.

With international and local collaborators, I reported today in Nature Communications the ability to combine single photon-generating devices on a single silicon chip, a breakthrough for next generation quantum technologies.

Photons as qubits

In 1982, American physicist and Nobel Prize laureate Richard Feynman proposed the idea of building a new type of computer based on the principles of quantum mechanics.

While a regular computer represents information as a bit with a value of either 0 or 1, the quantum equivalent is the qubit, a quantum particle that has two clear binary states.

Due to its quantum nature, a qubit can be in state 0, in state 1, or in a superposition of them both at the same time.

Computations performed using a qubit follow a different set of rules to a regular computer – and this allows certain problems to be solved exponentially faster.

A photon is one example of a quantum particle that can be used as a qubit, and ideally researchers would like to be able to generate photons one by one, as two or more photons in a bunch no longer act as a single qubit.

It is easy to generate many photons, but much harder to ensure they come out one by one – photons are gregarious by nature – and a high generation rate is desired, similar to a high central processing unit clock speed.

The creation of single photons has been possible for some years, but with poor performance and often bulky implementation. We showed that by combining multiple imperfect devices, all on a single silicon chip, we can produce a much higher quality and compact source of single photons, opening a number of new applications.

Fishing for photons

The challenge in our research lay in the physical mechanism behind photon generation. There is an intrinsic link between the rate of useful single-photon creation and how often two or more photons are generated instead: these bunches are unwanted.

Generating higher rates of single photons is thus accompanied by a higher proportion of unwanted additional photons, so we wanted to reduce that to a more favourable ratio.

Think about it in terms of fishing – instead of generating photons, we want to catch fish. An easy option is to send a fisherman out on a boat to cast a net; this will result in a lot of good fish, but also a lot of unwanted garbage.

This is analogous to using a conventional photon source, which generates many photons, but also a lot of unwanted photon bunches.

Alternatively, we can send two people out with fishing rods. With some luck, they could collectively catch the same number of fish in the same amount of time, but because the method is more selective, the chance of collecting garbage has been vastly reduced.

A single device for generating single photons (one fisherman), when operating at a high rate (casting a large net), generated unwanted photon bunches. By combining two single photon sources (two fishermen on a boat) on a single silicon chip (the boat), the proportion of ‘garbage’ photon bunches was significantly reduced. In the future we will combine many photon sources on one chip (we want many fishermen!). Credit: SevenPixelz

This is analogous to the work done here: two single photon sources (the fishermen) were combined on a single silicon chip (the boat), with the proportion of “garbage” photon bunches significantly reduced.
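To put rough numbers on the analogy, here is a toy calculation. These are made-up probabilities for illustration; the actual figure of merit in the paper is the measured photon statistics.

```python
# Toy model: assume each heralded source fires a photon pair with probability
# p per pulse and an unwanted double pair with probability ~p**2, and that a
# fast switch routes one heralded photon to the output at a time.

def multiplexed(p_total, n_sources):
    p_each = p_total / n_sources   # run each source more gently
    signal = n_sources * p_each    # same overall rate as one hard-driven source
    bunch_fraction = p_each        # per-source bunching ~p_each, so it drops as 1/N
    return signal, bunch_fraction

for n in (1, 2, 4, 8):
    rate, ratio = multiplexed(0.1, n)
    print(f"{n} sources: rate ~ {rate:.2f}/pulse, bunch fraction ~ {ratio:.4f}")
# Two sources already halve the 'garbage' fraction at the same catch rate.
```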

More fishermen

In the future we will extend this idea and combine many more devices onto a single silicon optical chip. Even though each individual source operates at a lower rate, they can be combined to give much higher rates – you just need more fishermen!

This will allow us to generate a large number of useful single photons, which can act as optical qubits, a fundamental ingredient of complex quantum processors.

This work opens the potential for more advanced single photon technologies, including secure communication, where improved single photon generation directly increases the distance and bit-rate of a quantum secure communication link.

This is an active area of research at the Centre for Ultrahigh Bandwidth Devices for Optical Systems (CUDOS) within the University of Sydney.

Still more applications include metrology (the science of measurement), simulation of biological and chemical systems, and – of course – quantum computing. More information: http://www.nature.com

Quantum Computers Check Each Other’s Work.


Image courtesy of Equinox Graphics

Check it twice. Quantum computers rely on these clusters of entangled qubits—units of data that embody many states at once—to achieve superspeedy processing. New research shows one such computer can verify the solutions of another.

Quantum computers can solve problems far too complex for normal computers, at least in theory. That’s why research teams around the globe have strived to build them for decades. But this extraordinary power raises a troubling question: How will we know whether a quantum computer’s results are true if there is no way to check them? The answer, scientists now reveal, is that a simple quantum computer—whose results humans can verify—can in turn check the results of other dramatically more powerful quantum machines.

Quantum computers rely on odd behavior of quantum mechanics in which atoms and other particles can seemingly exist in two or more places at once, or become “entangled” with partners, meaning they can instantaneously influence each other regardless of distance. Whereas classical computers symbolize data as bits—a series of ones and zeroes that they express by flicking switchlike transistors either on or off—quantum computers use quantum bits (qubits) that can essentially be on and off at the same time, or in any on/off combination, such as 32% on and 68% off.

Because each qubit can embody so many different states, quantum computers could compute certain classes of problems dramatically faster than regular computers by running through every combination of possibilities at once. For instance, a quantum computer with 300 qubits could perform more calculations in an instant than there are atoms in the universe.
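The arithmetic behind that claim is easy to check, taking the commonly quoted order-of-magnitude estimate of 10^80 atoms in the observable universe:

```python
# A register of n qubits spans 2**n basis states; compare that with a rough
# estimate of the number of atoms in the observable universe (~10**80).

n_qubits = 300
states = 2 ** n_qubits
atoms_in_universe = 10 ** 80

print(f"2^300 ~ 10^{len(str(states)) - 1}")   # ~ 10^90
print(states > atoms_in_universe)             # True, by ten orders of magnitude
```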

Currently, all quantum computers involve only a few qubits “and thus can be easily verified by a classical computer, or on a piece of paper,” says quantum physicist Philip Walther of the University of Vienna. But their capabilities could outstrip conventional computers “in the not-so-far future,” he warns, which raises the verification problem.

Scientists have suggested a few ways out of this conundrum that would involve computers with large numbers of qubits or two entangled quantum computers. But these still lie outside the reach of present technology.

Now, quantum physicist Stefanie Barz at the University of Vienna, along with Walther and their colleagues, has a new strategy for verification. It relies on a technique known as blind quantum computing, an idea which they first demonstrated in a 2012 Science paper. A quantum computer receives qubits and completes a task with them, but it remains blind to what the input and output were, and even what computation it performed.

To test a machine’s accuracy, the researchers peppered a computing task with “traps”—short intermediate calculations to which the user knows the result in advance. “In case the quantum computer does not do its job properly, the trap delivers a result that differs from the expected one,” Walther explains. These traps allow the user to recognize when the quantum computer is inaccurate, the researchers report online today in Nature Physics. The results show experimentally that one quantum computer can verify the results of another, and that theoretically any size of quantum computer can verify any other, Walther says.

The existence of undetectable errors will depend on the particular quantum computer and the computation it carries out. Still, the more traps users build into the tasks, the better they can ensure the quantum computer they test is computing accurately. “The test is designed in such a way that the quantum computer cannot distinguish the trap from its normal tasks,” Walther says.
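A back-of-envelope model shows why more traps help. This is our simplification, with an assumed per-trap detection probability rather than anything from the paper:

```python
# Suppose a faulty or dishonest server trips any single trap with probability
# d, and traps are hidden so it cannot treat them differently from real work.
# The chance it slips past t independent traps then shrinks exponentially.

def pass_probability(d: float, traps: int) -> float:
    return (1 - d) ** traps

for t in (1, 5, 10, 20, 50):
    print(f"{t:>2} traps: cheat passes with p = {pass_probability(0.2, t):.2e}")
# With d = 0.2, fifty traps already push a false 'all clear' below 1e-4.
```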

The researchers used a 4-qubit quantum computer as the verifier, but any size will do, and the more qubits the better, Walther notes. The technique is scalable, so it could be used even on computers with hundreds of qubits, he says, and it can be applied to any of the many existing quantum computing platforms.

“Like almost all current quantum computing experiments, this currently has the status of a fun demonstration proof of concept, rather than anything that’s directly useful yet,” says theoretical computer scientist Scott Aaronson at the Massachusetts Institute of Technology in Cambridge. But that doesn’t detract from the importance of these demonstrations, he adds. “I’m very happy that they’re done, as they’re necessary first steps if we’re ever going to have useful quantum computers.”

Quantum Paradox Seen in Diamond.


A real-life version of Zeno’s ancient Greek conundrum could advance quantum computing

A quantum effect named after an ancient Greek puzzle has been observed in diamond, paving the way for the use of diamond crystals in quantum computer chips.

The quantum Zeno effect gets its name from the Greek philosopher Zeno of Elea, who lived in the fifth century BC and suggested that if the position of a flying arrow is well-defined for a moment of time, then it makes no progress in that moment, and so can never reach its destination.

In the quantum version of the arrow paradox, theoretical physicists posited in 1977 that if a quantum system is measured often enough, its state will be unable to progress, as if it were true that ‘a watched pot never boils’. The hypothesis arises from a fundamental postulate of quantum theory, which says that measuring a property of an object, such as its position, affects its state. The quantum Zeno effect was first observed experimentally in 1989, in laser-cooled ions trapped by magnetic and electric fields.
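The effect is easy to reproduce numerically for a generic two-level system. This is a textbook model, not a simulation of the diamond experiment:

```python
import numpy as np

# Left alone for a total "time" T = pi, a Rabi oscillation carries the system
# completely from state |0> to state |1>. Chopping T into N pieces with a
# projective measurement after each piece pins the system near |0>.

T = np.pi  # total evolution angle: full transfer |0> -> |1> if unmeasured

def survival_probability(n_measurements: int) -> float:
    # After each short evolution T/N, P(still in |0>) = cos^2(T/2N);
    # each measurement resets the state, so the probabilities multiply.
    return np.cos(T / (2 * n_measurements)) ** (2 * n_measurements)

for n in (1, 2, 5, 10, 100, 1000):
    print(f"N = {n:>4}: P(frozen in |0>) = {survival_probability(n):.4f}")
# N = 1 gives 0.0; as N grows the probability climbs toward 1:
# the watched quantum pot never boils.
```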

Now, quantum physicist Oliver Benson and his colleagues at Humboldt University in Berlin have seen the effect in a diamond crystal — a material that would be easier to manufacture on a large scale for quantum computing. The team posted its paper on the arXiv and it has been accepted for publication in Physical Review A.

Disrupted oscillations
The researchers focused on nitrogen–vacancy (NV) centers, imperfections in diamond that arise where an atom of nitrogen and an empty space replace carbon atoms at two neighboring spots in the crystal lattice. The team used microwaves to change the magnetic spin state of an electron located at an NV center, and then used a laser beam to trigger red fluorescence that revealed which of two possible states the electron was in at any given moment. When they measured the NV center in this way, the researchers found that the oscillation between the two states was disrupted — just as would be expected if the quantum Zeno effect were operating.

“The first step is to see the effect is there, but the next step is to implement quantum gates based on diamond,” says Benson, referring to the quantum analogue of the logic gates that form the integrated circuits in ordinary computer chips. In quantum computing, information is stored in the quantum states of carriers such as photons or diamond defects. But so far, decoherence, a degradation of the delicate states caused by noise in the environment, has prevented researchers from storing more than a few bits of linked quantum information in a diamond crystal at a time. Constantly measuring the states could protect them from uncontrolled decay and allow researchers to scale up the amount of information stored, says Benson.

Ronald Walsworth, an atomic physicist at Harvard University in Cambridge, Massachusetts, whose team made a tentative suggestion in 2010 that the quantum Zeno effect operates in diamond, says that evidence is growing, but that it will probably need to be clearer that the disruption of oscillations is due to the quantum process, and not other effects, before it can be used for quantum computing.

Quantum physicist Ronald Hanson, who works with nitrogen vacancies at Delft University of Technology in the Netherlands, says that Benson’s experiment, together with an April paper showing that spins in NV centers located 3 meters apart can be linked, indicates that diamond is gaining ground as a convenient material for quantum computing. “In a few years, we will be overtaking the ion traps,” he says.

Source: http://www.scientificamerican.com

If this theory is correct, we may live in a web of alternate timelines.


The Many Worlds Interpretation of quantum physics has been around for nearly 60 years. It’s a highly controversial idea which suggests that our world — and everything in it — is constantly splitting into alternative timelines. If it’s correct, here’s what your true existence might actually be like.

Over a hundred years ago, the discovery of quantum physics ruined the party. Our comfortable, clockwork conception of the universe was thrown into disarray with the realization that, at the micro-scale, there’s some crazy funky stuff going on.

Thanks to quantum mechanics, we now know that matter takes on the properties of both particles and waves. What’s more, thanks to Werner Heisenberg and Erwin Schrödinger, we can never be certain about a particle’s momentum and position, nor can we be certain about an object’s state when it’s not being observed. In other words, the universe — at least at a certain scale — appears to be completely fuzzy and nebulous. Possibly even random.

Quantum physics has royally messed up classical — and seemingly intuitive — principles of space and time, causality, and the conservation of energy. This means that Newtonian, and even Einsteinian, interpretations of the universe are insufficient. Indeed, if we’re to develop a unified and comprehensible theory of everything, we’re going to have to reconcile all of this somehow.

But some physicists, upset by the implications of quantum mechanics on our ultimate understanding of the universe and our place within it, still choose to ignore or dismiss it as a kind of messy inconvenience. And it’s hard to blame them. Quantum physics doesn’t just upset conventional physics. It also perturbs our sense of our place in the universe; it’s Copernican in scale — a paradigm changer that carries deep metaphysical and existential baggage.

Denial, however, won’t help the situation — nor will it further science. Physicists have no choice but to posit theories that try to explain the things they see in the lab, no matter how strange. And in the world of quantum mechanics, this has given rise to a number of different interpretations, including the Copenhagen Interpretation, the Ensemble Interpretation, the de Broglie-Bohm theory, and many, many others.

And of course, there’s the infamous Many Worlds Interpretation.

The “Relative State” Formulation

Back in the 1950s, a Princeton graduate student by the name of Hugh Everett III embroiled himself in the wonderful and wacky world of quantum physics. He became familiar with the ideas of Niels Bohr, Heisenberg, and Schrödinger, and studied under Robert Dicke and Eugene Wigner. Then, in 1955, he began to write his Ph.D. thesis under the tutelage of John Archibald Wheeler.

In 1957, he published his paper under the name, “Quantum Mechanics by the Method of the Universal Wave Function.” Eventually, after further edits and trimming, it was re-published under the name, “Wave Mechanics Without Probability.” And though he referred to his theory as the “relative state formulation,” it was rebranded as the Many Worlds Interpretation (MWI) by Bryce Seligman DeWitt in the 60s and 70s.

But like so many seminal theories in science, Everett’s idea was scorned. So scorned, in fact, that he gave up physics and went to work as a defense analyst and consultant.

Now, some 60 years later, his radical idea lives on among a small — but growing — subset of physicists. In a recent poll of quantum physicists, some 18% of respondents said they subscribe to the MWI (as compared to the 42% who buy into the dominant Copenhagen Interpretation).

The Everett Postulate

Essentially, Everett’s big idea was the suggestion that the entire universe is quantum mechanical in nature — and not just the spooky phenomena found at the indeterministic microscopic scale. By bringing macroscale events into the picture, he upset a half-century’s worth of work that preceded him. The two different worlds, argued Everett, can and must be linked.

No doubt, the problem that quantum mechanics presents is the realization that we appear to live in a deterministic world (i.e. a rational, comprehensible world) that contains some non-deterministic elements. Everett worked to reconcile the micro with the macro by making the case that no arbitrary division needs to be invoked to delineate the two realms.

He considered the universal wavefunction — a mathematical list of every single configuration of a quantum object, like a hydrogen atom. It’s a description of every possible configuration of every single elementary particle in the universe (that’s a big list). What Everett did was apply Schrödinger’s wavefunction equation to the entire universe — which is now known as the Everett Postulate:

All isolated systems evolve according to the Schrödinger equation.

Everett also argued that the measurement of a quantum object doesn’t force it into one comprehensible state or another. Instead, it causes the universe to split, or branch off, for each possible outcome of the measurement; the universe literally splits into distinct worlds to accommodate every single possible outcome. And interestingly, Everett’s idea allows for randomness to be removed from quantum theory, and by consequence, all of physics (thus making physicists very happy).

It’s worth noting that the MWI stands in sharp contrast to the popular Copenhagen Interpretation, a branch of physics which says quantum mechanics cannot produce a coherent description of objective reality. Instead, we can only deal with probabilities of observing or measuring various aspects of energy quanta — entities that don’t conform to classical ideas of particles and waves. Its proponents talk about the wavefunction collapse — which happens when a measurement is made, and which causes the set of probabilities to immediately and randomly assume only one of the possible values.

So Many Worlds

According to Everett, a “world” is a complex, causally connected sub-system that doesn’t significantly interfere with other elements of the grander superposition. These “worlds” can be called “universes,” but “universe” tends to describe the whole kit and caboodle.

Needless to say, it’s a metaphysical theory that dramatically alters our understanding of the universe and our place in it. If true, the universe is composed of an ever-evolving series of timelines that branch off to accommodate all possibilities. Subsequently, it means that a version of you — or what you think is you — is constantly branching off into other alternate histories.

For example, in the case of Schrödinger’s cat, it’s not both alive and dead when not observed. Instead, a version of it ceases to exist, while another lives on in an alternative timeline. As another example, one version of you will stop reading my article at this exact point, while another version will continue to the very end. There may even be an evil version of you somewhere. So long as it’s probable — and doesn’t violate physical laws at the macro-scale — a new version of the universe, and all that’s within it, will be created. In turn, those will continue to branch off based on the new contingencies contained therein. But Everett-worlds in which probability breaks down can never be realized, and by consequence, never observed.

 

So what appears to be a single individual living from moment to moment is actually a perpetually multiplying flow of experiences; there is not just one timeline. Instead, there are many, many worlds. This means that all possible alternative histories and futures are real.

This also means that there could be an infinite number of universes — and that everything that could have possibly happened in our past has in fact happened in the past of some other worlds.

Weird and Untestable

Not surprisingly, there are a number of objections to the MWI. As noted, 82% of quantum physicists don’t buy it.

One of the most common complaints is that MWI grossly violates conservation of energy (i.e. where the hell is all the energy coming from to fuel all these new universes?). Others argue that it violates Occam’s Razor, that it doesn’t account for non-local events (like an alien making an observation far, far away), or that its parameters and definitions, like “measurement,” are far too liberal or vague.

And of course, it leads to a host of strange conclusions. For example, a version of you will win the lottery every time you play it. Sure, it’s highly improbable, but not impossible. In the space of all probable worlds, a version of you will have to experience it.

Perhaps even more bizarre is the scenario in which a person — someone who cannot play a musical instrument — sits in front of a piano and plays Debussy’s Clair de Lune to perfection strictly by chance. Sure, the odds of correctly hitting each successive note get astronomical in scale as the piece progresses — but this is the weirdness that arises when we have to consider (1) probabilities and not impossibilities, and (2) the near-infinite number of expressions of all possible worlds.

But something about this scenario just feels…wrong.

Another interesting and related perspective comes from the Rational Skepticism website:

[F]or now, the MWI is physically dependent. That is, the likelihood of an outcome is assessed from physical potential. However, we all know that the likelihood of events isn’t contingent upon physical potentials. I know, for instance, given the evolution of my own life/mind, that the likelihood of me becoming a materialist tomorrow, is zero. I have no doubt about that, given that I’ve already been there and seen the flaws thereof (not to mention everything else I’ve ‘seen’). Likewise, you all may be sure of some thing or other. Further, for example, though the physical potential exists, the likelihood of tomorrow’s papers headlining The Pope as a murderous gay atheist, seems bleak, to say the least. Therefore, are these many worlds constrained by what is physically possible, or by what is sensibly possible? That is, do mental/emotive concerns dictate what worlds are possible, or simply physical potentials? On the face of it, it would seem that the MWI doesn’t have any recourse towards mental potential/agency.

Which is a great point. At what point does probability — even within the confines of classical physics — enter into the realm of sheer improbability? In the previous example, that of our insanely lucky piano player, such a thing might never play out because the person hasn’t developed the proper finger musculature, or they may suddenly stop mid-performance, aghast at their freakish achievement.

And there’s also the issue of testability. Regrettably, we can’t communicate with our splitting selves. Each version of us can only observe one instance of the universe at any given time. Subsequently, the MWI is considered untestable — leading many to dismiss it as being unscientific or just plain bonkers.

 

Actually, there may be a way to test it. MWI implies the quantum immortality hypothesis — the argument that a version of us will always observe the universe — even in the most improbable of circumstances. To test the MWI, all one needs to do is attempt suicide based on a 50/50 probability schema. According to the theory, a version of you will survive 50 successive 50/50 suicide attempts — but it’s a one in quadrillion chance. The trick, of course, is to live the life of that particular version of you. Good luck.

 

Hugh Everett, despite his belief in quantum immortality, died in 1982. But his idea lives on — a kind of immortality unto itself.

Source: http://io9.com

Richard Feynman on the Universal Responsibility of Scientists.



“Writers do not merely reflect and interpret life, they inform and shape life,” E. B. White wrote of the role and responsibility of the writer.

In The Pleasure of Finding Things Out: The Best Short Works of Richard P. Feynman (public library) — the anthology that gave us The Great Explainer’s insights on the role of scientific culture in modern society, titled after the famous film of the same name — Richard Feynman adds to history’s famous definitions of science and considers the responsibility of the scientist as just about the polar opposite: to be continuously informed and shaped by life, free of the despotism of opinion and the addiction to rectitude.

Speaking to the notion that “every child is a scientist,” Feynman champions the true responsibility of science education — a responsibility and purpose sadly belied by the current education system — and argues:

When we read about this in the newspaper, it says, ‘The scientist says that this discovery may have importance in the cure of cancer.’ The paper is only interested in the use of the idea, not the idea itself. Hardly anyone can understand the importance of an idea, it is so remarkable. Except that, possibly, some children catch on. And when a child catches on to an idea like that, we have a scientist. These ideas do filter down (in spite of all the conversation about TV replacing thinking), and lots of kids get the spirit — and when they have the spirit you have a scientist. It’s too late for them to get the spirit when they are in our universities, so we must attempt to explain these ideas to children.

He then moves on to the broader role of science as a cultural force. The idea that ignorance is central to science — as well as film, media, and design — is an enduring theme, but Feynman lives up to his reputation and articulates it more beautifully and eloquently than anyone:

The scientist has a lot of experience with ignorance and doubt and uncertainty, and this experience is of very great importance, I think. When a scientist doesn’t know the answer to a problem, he is ignorant. When he has a hunch as to what the result is, he is uncertain. And when he is pretty darn sure of what the result is going to be, he is in some doubt. We have found it of paramount importance that in order to progress we must recognize the ignorance and leave room for doubt. Scientific knowledge is a body of statements of varying degrees of certainty — some most unsure, some nearly sure, none absolutely certain.

Echoing Rilke’s counsel to “live the questions,” Feynman traces the roots of science to the vital anti-authoritarianism of brave minds like Galileo and reminds us:

Now, we scientists … take it for granted that it is perfectly consistent to be unsure — that it is possible to live and not know. But I don’t know whether everyone realizes that this is true. Our freedom to doubt was born of a struggle against authority in the early days of science. It was a very deep and strong struggle. Permit us to question — to doubt, that’s all — not to be sure. And I think it is important that we do not forget the importance of this struggle and thus perhaps lose what we have gained. Here lies a responsibility to society.

With his signature blend of graceful language and uncompromising conviction, Feynman echoes Bertrand Russell’s contention that “without science, democracy is impossible” and aims at the bullseye of the scientist’s responsibility:

We are at the very beginning of time for the human race. It is not unreasonable that we grapple with problems. There are tens of thousands of years in the future. Our responsibility is to do what we can, learn what we can, improve the solutions and pass them on. It is our responsibility to leave the men of the future a free hand. In the impetuous youth of humanity, we can make grave errors that can stunt our growth for a long time. This we will do if we say we have the answers now, so young and ignorant; if we suppress all discussion, all criticism, saying, ‘This is it, boys, man is saved!’ and thus doom man for a long time to the chains of authority, confined to the limits of our present imagination. It has been done so many times before.

It is our responsibility as scientists, knowing the great progress and great value of a satisfactory philosophy of ignorance, the great progress that is the fruit of freedom of thought, to proclaim the value of this freedom, to teach how doubt is not to be feared but welcomed and discussed, and to demand this freedom as our duty to all coming generations.

Source: http://www.brainpickings.org

Quantum Physicists Dream Up Smallest Possible Refrigerator


You may have a $10,000 Sub-Zero fridge in your kitchen, but this is cooler. Theoretical physicists have dreamed up a scheme to make a refrigerator out of a pair of quantum particles such as ions or atoms, or even a single particle. The fridges may be the smallest ones possible. “It’s very elegant and innovative,” says Nicolas Gisin, a theorist at the University of Geneva in Switzerland. Theo Nieuwenhuizen, a theorist at the University of Amsterdam, says “I don’t see any error, so probably this would work.”

The challenge is to make a few quantum particles act like a so-called thermal machine, the theory of which was set out by French engineer Sadi Carnot in 1824. Carnot imagined a piston filled with gas that could be compressed or expanded. The piston could make contact with either of two large bodies (say, massive steel blocks) at different temperatures, which could serve as the “hot bath” and the “cold bath.”

Carnot put the imaginary piston through a cycle of motions, including one in which the gas expands while in contact with the hot bath and another in which it is compressed while in contact with the cold bath. During the cycle, the piston does work while absorbing heat from the hot bath and releasing heat into the cold one, making it a “heat engine.” Reverse the cycle and, in response to work done on it, the piston acts as a refrigerator, absorbing heat from the cold bath and releasing it into the hot one.

Now, Noah Linden, Sandu Popescu, and Paul Skrzypczyk of the University of Bristol in the United Kingdom report that, at least in principle, they can make a refrigerator out of a few quantum particles called “qubits.” Each qubit has only two possible quantum states: a zero-energy ground state and a fixed-energy excited state. The theorists have found a way to siphon energy out of one qubit by making it interact with just two others.

The theorists arrange things so that each qubit has a different excited-state energy but the trio of qubits has two configurations with the same total energy. One is the configuration in which only the first and third qubits are in their excited states—denoted (101). The other is the configuration in which only the second qubit is in its excited state—denoted (010). If all three qubits were at the same temperature, then the system would flip with equal probability back and forth between these two configurations.

But the researchers skew that flipping, as they explain in a paper in press at Physical Review Letters. The trick is to put the first two qubits in contact with a cold bath and the third one in contact with a hot bath. The higher temperature makes it more likely that the third qubit will be in its excited state—and thus that the trio will be in the (101) state instead of the (010) state. But that means the system is more likely to flip out of (101) and into (010) than the other way around. So on average the flipping takes the first qubit from its excited state to its ground state and draws energy out of the first qubit. After a flip, the qubits essentially reset by interacting with the baths, allowing the cycle to start again.
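A toy calculation makes the bias concrete. This is our reading of the scheme, with made-up energies and temperatures in units where k_B = 1:

```python
import numpy as np

# Qubit excitation probabilities follow Boltzmann statistics for the bath each
# qubit touches; the flip (101) <-> (010) conserves energy because E2 = E1 + E3.

def p_excited(E, T):
    return np.exp(-E / T) / (1 + np.exp(-E / T))  # two-level Boltzmann weight

E1, E3 = 1.0, 2.0
E2 = E1 + E3            # resonance condition for the two configurations
T_cold, T_hot = 1.0, 10.0

p1 = p_excited(E1, T_cold)   # qubit 1: cold bath (the one being cooled)
p2 = p_excited(E2, T_cold)   # qubit 2: cold bath
p3 = p_excited(E3, T_hot)    # qubit 3: hot bath drives the machine

prob_101 = p1 * (1 - p2) * p3        # first and third qubits excited
prob_010 = (1 - p1) * p2 * (1 - p3)  # only the second qubit excited

print(f"P(101) = {prob_101:.3f}, P(010) = {prob_010:.3f}")
# P(101) > P(010): flips out of (101) into (010) dominate, and each such flip
# takes qubit 1 from excited to ground -- on average, heat leaves qubit 1.
```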

The theorists measure the fridge’s size in terms of the number of its quantum states, and the three qubits have a total of eight possible states. That number can be clipped to six, if they replace the second and third qubits with a single “qutrit,” a particle with a ground state and two excited states—although those two states have to be in contact with different baths. “We believe that’s probably the smallest number of states you can get away with,” Linden says.

In theory, such a fridge can get arbitrarily close to absolute zero, and Popescu says that it might be possible to make one using trapped ions for the qubits and streams of laser light as the baths. Some researchers hope to use such qubits as the guts for a quantum computer, and Popescu says the refrigerator scheme might allow researchers to cool some set of qubits with a few others. David Wineland, an experimental physicist with the U.S. National Institute of Standards and Technology in Boulder, Colorado, says he believes such schemes can indeed be implemented in trapped ions.

Others suggest that such tiny quantum refrigerators might already be humming along in nature. It’s possible that one part of a biomolecule might work to cool another in such a fashion, says Hans Briegel, a theorist at the University of Innsbruck in Austria. “I don’t expect that you will have a mechanism exactly like this,” Briegel says, “but it gives you a framework valuable for telling what to search for.”

No word yet on when physicists might unveil the smallest possible beer.