Superstring Theory and Higher Dimensions: Bridging Einstein’s Relativity and Quantum Mechanics


Conceptual diagram of the calculation of density fluctuation correlations in the early universe based on a low-dimensional matter field theory using holography.

Seeing is more than believing: Holography helps our understanding of our early universe.

A team of researchers at Kyoto University is exploring the use of higher dimensions in de Sitter space to explain gravity in the early universe. By developing a method to compute correlation functions among fluctuations, they aim to bridge the gap between Einstein’s theory of general relativity and quantum mechanics. This could potentially validate superstring theory and enable practical calculations about the early universe’s subtle changes. Although initially tested in a three-dimensional universe, the analysis may be extended to a four-dimensional universe for real-world applications.

Having more tools helps; having the right tools is better. Utilizing multiple dimensions may simplify difficult problems — not only in science fiction but also in physics — and tie together conflicting theories.

For example, Einstein’s theory of general relativity — which describes gravity as the warping of the fabric of space-time by planets and other massive objects — explains how gravity works in most cases. However, the theory breaks down under extreme conditions such as those existing in black holes and cosmic primordial soups.

An approach known as superstring theory could use another dimension to help bridge Einstein’s theory with quantum mechanics, solving many of these problems. But the necessary evidence to support this proposal has been lacking.

Now, a team of researchers led by Kyoto University is exploring de Sitter space to invoke a higher dimension to explain gravity in the expanding early universe. They have developed a concrete method to compute correlation functions among fluctuations in an expanding universe by making use of holography.

“We came to realize that our method can be applied more generically than we expected while dealing with quantum gravity,” says Yasuaki Hikida, from the Yukawa Institute for Theoretical Physics.

Dutch astronomer Willem de Sitter’s theoretical models describe space in a way that fits with Einstein’s general theory of relativity, in that the positive cosmological constant accounts for the expansion of the universe.

Starting with existing methods for handling gravity in anti-de Sitter space, Hikida’s team reshaped them to work in expanding de Sitter space to more precisely account for what is already known about the universe.

“We are now extending our analysis to investigate cosmological entropy and quantum gravity effects,” adds Hikida.

Although the team’s calculations only considered a three-dimensional universe as a test case, the analysis may easily be extended to a four-dimensional universe, allowing for the extraction of information from our real world.

“Our approach possibly contributes to validating superstring theory and allows for practical calculations about the subtle changes that rippled across the fabric of our early universe.”

The weirdness of quantum mechanics forces scientists to confront philosophy


Though quantum mechanics is an incredibly successful theory, nobody knows what it means. Scientists now must confront its philosophical implications.

KEY TAKEAWAYS

  • Despite the tremendous success of quantum physics, scientists and philosophers still disagree on what it’s telling us about the nature of reality. 
  • Central to the dispute is whether the theory is describing the world as it is or is merely a mathematical model. 
  • Attempts to reconcile the theory with reality have led physicists to some strange places, forcing scientists to grapple with matters of philosophy.

The world of the very small is like nothing we see in our everyday lives. We do not think of people or rocks being in more than one place at the same time until we look at them. They are where they are, in one place only, whether or not we know where that place is. Nor do we think of a cat locked in a box as being both dead and alive before we open the box to check. But such dualities are the norm for quantum objects like atoms or subatomic particles, or even larger ones like a cat. Before we look at them, these objects exist in what we call a superposition of states, each state with an assigned probability. When we measure their position or some other physical property many times, we will find them in one of these states with certain probabilities.
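
To make the “assigned probability” idea concrete, here is a toy numerical sketch in Python (a made-up two-state system, not any particular experiment): the squared magnitudes of the amplitudes give the probabilities, and repeating the measurement many times recovers them as frequencies.

```python
import numpy as np

# A toy quantum object with two possible states, "A" and "B".
# The probabilities of finding each state are the squared magnitudes of the amplitudes.
amplitudes = np.array([np.sqrt(0.7), np.sqrt(0.3)])
probabilities = np.abs(amplitudes) ** 2          # [0.7, 0.3]

# Measuring many identically prepared objects, one at a time:
rng = np.random.default_rng(seed=0)
outcomes = rng.choice(["A", "B"], size=10_000, p=probabilities)
print((outcomes == "A").mean())                  # ~0.7, matching the assigned probability
```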

The crucial question that still haunts or inspires physicists is this: Are such possible states real — is the particle really in a superposition of states — or is this way of thinking just a mathematical trick we invented to describe what we measure with our detectors? To take a stance on this question is to choose a certain way of interpreting quantum mechanics and our take on the world. It is important to stress that quantum mechanics works beautifully as a mathematical theory. It describes the experiments incredibly well. So we are not debating whether quantum mechanics works or not, because we are well past that point. The issue is whether it describes physical reality as it is or whether it does not, and we need something more if we are to arrive at a deeper understanding of how nature operates in the world of the very small.

States of thinking about the quantum world

Even though quantum mechanics works, the debate about its nature is fierce. The subject is vast, and I could not possibly do it justice here. My goal is to give a flavor of what is at stake. (For more details, see The Island of Knowledge.) There are many schools of thought and many nuanced arguments. But in its most general form, the schools line up along two ways of thinking about reality, and they both depend on the protagonist of the quantum world: the famous wavefunction.

In one corner stand those who think that the wavefunction is an element of reality, that it describes reality as it is. This way of thinking is sometimes called the ontic interpretation, from the term ontology, which in philosophy means the stuff that makes up reality. People who follow the ontic school would say that even though the wavefunction does not describe something palpable, like the particle’s position or its momentum (its absolute square only gives the probability of measuring this or that physical property), the superpositions it describes are nonetheless a part of reality.

In the other corner stand those who think that the wavefunction is not an element of reality. Instead, they see a mathematical construct that allows us to make sense of what we find in experiments. This way of thinking is sometimes called the epistemic interpretation, from the term epistemology in philosophy. In this view, measurements, in which objects and detectors interact and people read the results, are the only way we can figure out what goes on at the quantum level, and the rules of quantum physics are fantastic at describing the results of these measurements. There is no need to attribute any kind of reality to the wavefunction. It simply represents potentialities — the possible outcomes of a measurement. (The great physicist Freeman Dyson once told me that he considered the whole debate a huge waste of time. To him, the wavefunction was never intended to be a real thing.)

Note the importance in all this of measurements. Historically, the epistemic view goes back to the Copenhagen interpretation, the hodgepodge of ideas spearheaded by Niels Bohr and carried forward by his younger, powerhouse colleagues such as Werner Heisenberg, Wolfgang Pauli, Pascual Jordan, and many others. 

This school of thought is sometimes unjustly called the “shut up and calculate approach” due to its insistence that we do not know what the wavefunction is, only what it does. It tells us to accept the superpositions of possible states, coexisting before a measurement is made, as a pragmatic description of what we cannot know. Upon measurement, the system collapses into just one of the possible states: the one that is measured. Yes, it is weird to state that a wavy thing, spread across space, instantaneously goes into a single position (a position that lies within what is allowed by the Uncertainty Principle). Yes, it is weird to contemplate the possibility that the act of measurement somehow defines the state in which the particle is found. It introduces the possibility that the measurer has something to do with determining reality. But the theory works, and for all practical purposes, that is what really matters.

Forks in the quantum road

At its essence, the ontic vs. epistemic debate hides the ghost of objectivity in science. Onticists deeply dislike the notion that observers could have anything to do with determining the nature of reality. Is an experimenter really determining whether an electron is here or there? One ontic school known as the Many Worlds interpretation would say instead that all possible outcomes are realized when a measurement is performed. It’s just that they are realized in parallel worlds, and we only have direct access to one of them — namely, the one we exist in. In Borgean style, the idea here is that the act of measurement forks reality into a multiplicity of worlds, each realizing a possible experimental outcome. We do not need to speak of the collapse of the wavefunction since all outcomes are realized at once.

Unfortunately, these many worlds are not accessible to observers in different worlds. There have been proposals to test the Many Worlds experimentally, but the obstacles are huge, for example requiring the quantum superposition of macroscopic objects in the laboratory. It is also not clear how to assign different probabilities to the different worlds related to the outcomes of the experiment. For example, if the observer is playing a game of Russian roulette with options triggered by a quantum device, he will only survive in one world. Who would be willing to be the subject of this experiment? I certainly would not. Still, Many Worlds has many adherents.


Other ontic approaches add elements of reality to the quantum mechanical description. For example, David Bohm proposed expanding the quantum mechanical prescription by adding a pilot wave with the explicit role of guiding the particles into their experimental outcomes. The price for experimental certainty here is that this pilot wave acts everywhere at once, which in physics means that it is nonlocal. Many people, including Einstein, have found this impossible to accept.

The agent and the nature of reality 

On the epistemic side, interpretations are just as varied. The Copenhagen interpretation leads the pack. It states that the wavefunction is not a thing in this world, but rather a mere tool to describe what is essential, the outcomes of experimental measurements. Views tend to diverge on the meaning of the observer, about the role the mind exerts on the act of measuring and thus on defining the physical properties of the object being observed, and on the dividing line between classical and quantum. 

Due to space, I will only mention one more epistemic interpretation, Quantum Bayesianism, or as it is now called, QBism. As the original name implies, QBism takes the role of an agent as central. It assumes that probabilities in quantum mechanics reflect the current state of the agent’s knowledge or beliefs about the world, as he or she makes bets about what will happen in the future. Superpositions and entanglements are not states of the world, in this view, but expressions of how an agent experiences the world. As such, they are not as mysterious as they may sound. The onus of quantum weirdness is transferred to an agent’s interactions with the world. 

A common criticism levied against QBism is its reliance on a specific agent’s relation to the experiment. This seems to inject a dose of subjectivism, placing it athwart the usual scientific goal of observer-independent universality. But as Adam Frank, Evan Thompson, and I argue in The Blind Spot, a book to be published by MIT Press in 2024, this criticism relies on a view of science that is unrealistic. It is a view rooted in an account of reality outside of us, the agents that experience this reality. Perhaps that is what quantum mechanics’ weirdness has been trying to tell us all along. 

What really matters

The beautiful discoveries of quantum physics reveal a world that continues to defy and inspire our imaginations. It continues to surprise us, just as it has done for the past century. As said by Democritus, the Greek philosopher who brought atomism to the forefront over 24 centuries ago, “In reality we know nothing, for truth is in the depths.” That may very well be the case, but we can keep trying, and that is what really matters.

Understanding Quantum Mechanics: What is Electromagnetism?


On the face of it, both electricity and magnetism are remarkably similar to gravity. Just as two masses are attracted to each other by an inverse square force, the force between two charged objects or two poles of a magnet is also inverse square. The difference is that gravity is always attractive, whereas electricity and magnetism can be either attractive or repulsive. For example, two positive charges will push away from each other, while a positive and negative charge will pull toward each other.
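
A small numerical sketch of that comparison, using the standard constants (the choice of two protons a nanometre apart is purely illustrative):

```python
G = 6.674e-11      # gravitational constant, N m^2 / kg^2
k = 8.988e9        # Coulomb constant, N m^2 / C^2

def gravity(m1, m2, r):
    return G * m1 * m2 / r**2          # inverse square, always attractive

def coulomb(q1, q2, r):
    return k * q1 * q2 / r**2          # inverse square; like signs repel, opposite signs attract

# Two protons one nanometre apart: both forces fall off as 1/r^2,
# but the electrical repulsion dwarfs the gravitational attraction.
m_p, q_p, r = 1.673e-27, 1.602e-19, 1e-9
print(gravity(m_p, m_p, r))            # ~1.9e-46 N
print(coulomb(q_p, q_p, r))            # ~2.3e-10 N
```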

As with gravity, electricity and magnetism raised the question of action-at-a-distance. How does one charge “know” to be pushed or pulled by the other charge? How do they interact across the empty space between them? The answer to that question came from James Clerk Maxwell.

Maxwell’s breakthrough was to change the way we thought about electromagnetic forces. His idea was that charges must reach out to one another with some kind of energy. That is, a charge is surrounded by a field of electricity, a field that other charges can detect. Charges possess electric fields, and charges interact with the electric fields of other charges. The same must be true of magnets. Magnets possess magnetic fields, and interact with magnetic fields. Maxwell’s model was not just a description of the force between charges and magnets, but also a description of the electric and magnetic fields themselves. With that change of view, Maxwell found the connection between electricity and magnetism. They were connected by their fields.

A changing electric field creates a magnetic field, and a changing magnetic field creates an electric field. Not only are the two connected, but one type of field can create the other. Maxwell had created a single, unified description of electricity and magnetism. He had united two different forces into a single unified force, which we now call electromagnetism.
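
For reference, the connections described above are compactly written in the standard (SI) differential form of Maxwell’s equations; the two curl equations are the “one field creates the other” statements:

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0} && \text{(charges are sources of electric fields)}\\
\nabla \cdot \mathbf{B} &= 0 && \text{(there are no magnetic charges)}\\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} && \text{(a changing magnetic field creates an electric field)}\\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} && \text{(currents and changing electric fields create magnetic fields)}
\end{aligned}
```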

Maxwell’s theory not only revolutionized physics, it gave astrophysics the tools to finally understand some of the complex behavior of interstellar space. By the mid-1900s Maxwell’s equations were combined with the Navier-Stokes equations describing fluids to create magnetohydrodynamics (MHD). Using MHD we could finally begin to model the behavior of plasma within magnetic fields, which is central to our understanding of everything from the Sun to the formation of stars and planets. As our computational powers grew, we were able to create simulations of protostars and young planets.

Although there are still many unanswered questions, we now know that the dance of plasma and electromagnetism plays a crucial role in the formation of stars and planets.

While Maxwell’s electromagnetism is an incredibly powerful theory, it is a classical model just like Newton’s gravity and general relativity. But unlike gravity, electromagnetism could be combined with quantum theory to create a fully quantum model known as quantum electrodynamics (QED).

A central idea of quantum theory is a duality between particle-like and wave-like (or field-like) behavior. Just as electrons and protons can interact as fields, the electromagnetic field can interact as particle-like quanta we call photons. In QED, charges and electromagnetic fields are described as interactions of quanta. This is most famously done through Richard Feynman’s figure-based approach, now known as Feynman diagrams.

The first Feynman diagram. Credit: Richard Feynman.

Feynman diagrams are often misunderstood to represent what is actually happening when charges interact: for example, that two electrons approach each other, exchange a photon, and then move away from each other, or that virtual particles pop in and out of existence in real time. While the diagrams are easy to read as literal particle interactions, the objects involved are still quanta, and still subject to quantum theory. How the diagrams are actually used in QED is to calculate all the possible ways that charges could interact through the electromagnetic field in order to determine the probability of a certain outcome. Treating all these possibilities as happening in real time is like arguing that five apples on a table become real one at a time as you count them.
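
A toy numerical illustration of that last point (the two amplitudes below are invented, not taken from any real process): in QED one adds the complex amplitudes for every possibility and squares the total, which is not the same as adding up separate probabilities for each possibility.

```python
# Two made-up complex amplitudes for two indistinguishable ways an interaction could happen.
path_amplitudes = [0.6 + 0.3j, -0.2 + 0.5j]

quantum_prob = abs(sum(path_amplitudes)) ** 2               # amplitudes interfere, then square
classical_sum = sum(abs(a) ** 2 for a in path_amplitudes)   # treating each possibility as a real event

print(quantum_prob, classical_sum)   # 0.80 vs 0.74: the possibilities interfere
```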

QED has become the most accurate physical model we’ve devised so far, but this theoretical power comes at the cost of losing the intuitive concept of a force.

Feynman’s interactions can be used to calculate the force between charges, just as Einstein’s spacetime curvature can be used to calculate the force between masses. But QED also allows for interactions that aren’t forces. An electron can emit a photon in order to change its direction, and an electron and positron can interact to produce a pair of photons. In QED, matter can become energy and energy can become matter.

What started as a simple force has become a fairy dance of charge and light. Through this dance we left the classical world and moved forward in search of the strong and the weak.

You thought quantum mechanics was weird: check out entangled time


Photo by Alan Levine/Flickr

In the summer of 1935, the physicists Albert Einstein and Erwin Schrödinger engaged in a rich, multifaceted and sometimes fretful correspondence about the implications of the new theory of quantum mechanics. The focus of their worry was what Schrödinger later dubbed entanglement: the inability to describe two quantum systems or particles independently, after they have interacted.

Until his death, Einstein remained convinced that entanglement showed how quantum mechanics was incomplete. Schrödinger thought that entanglement was the defining feature of the new physics, but this didn’t mean that he accepted it lightly. ‘I know of course how the hocus pocus works mathematically,’ he wrote to Einstein on 13 July 1935. ‘But I do not like such a theory.’ Schrödinger’s famous cat, suspended between life and death, first appeared in these letters, a byproduct of the struggle to articulate what bothered the pair.

The problem is that entanglement violates how the world ought to work. Information can’t travel faster than the speed of light, for one. But in a 1935 paper, Einstein and his co-authors showed how entanglement leads to what’s now called quantum nonlocality, the eerie link that appears to exist between entangled particles. If two quantum systems meet and then separate, even across a distance of thousands of lightyears, it becomes impossible to measure the features of one system (such as its position, momentum and polarity) without instantly steering the other into a corresponding state.

Up to today, most experiments have tested entanglement over spatial gaps. The assumption is that the ‘nonlocal’ part of quantum nonlocality refers to the entanglement of properties across space. But what if entanglement also occurs across time? Is there such a thing as temporal nonlocality?

The answer, as it turns out, is yes. Just when you thought quantum mechanics couldn’t get any weirder, a team of physicists at the Hebrew University of Jerusalem reported in 2013 that they had successfully entangled photons that never coexisted. Previous experiments involving a technique called ‘entanglement swapping’ had already shown quantum correlations across time, by delaying the measurement of one of the coexisting entangled particles; but Eli Megidish and his collaborators were the first to show entanglement between photons whose lifespans did not overlap at all.

Here’s how they did it. First, they created an entangled pair of photons, ‘1-2’ (step I in the diagram below). Soon after, they measured the polarisation of photon 1 (a property describing the direction of light’s oscillation) – thus ‘killing’ it (step II). Photon 2 was sent on a wild goose chase while a new entangled pair, ‘3-4’, was created (step III). Photon 3 was then measured along with the itinerant photon 2 in such a way that the entanglement relation was ‘swapped’ from the old pairs (‘1-2’ and ‘3-4’) onto the new ‘2-3’ combo (step IV). Some time later (step V), the polarisation of the lone survivor, photon 4, is measured, and the results are compared with those of the long-dead photon 1 (back at step II).

Figure 1. Time line diagram: (I) Birth of photons 1 and 2, (II) detection of photon 1, (III) birth of photons 3 and 4, (IV) Bell projection of photons 2 and 3, (V) detection of photon 4.
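
A minimal state-vector sketch of the swapping step (an idealised numpy model, not the actual optics; it keeps all four photons around at once, so it captures the correlation structure but not the timing that makes the experiment remarkable): projecting photons 2 and 3 onto a Bell state leaves photons 1 and 4, which never interacted, in a Bell state of their own.

```python
import numpy as np

# Single-qubit basis states and the Bell state |Phi+> = (|00> + |11>) / sqrt(2).
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Four photons, ordered (1, 2, 3, 4): pair (1,2) entangled, pair (3,4) entangled.
state = np.kron(phi_plus, phi_plus).reshape(2, 2, 2, 2)   # indices: q1, q2, q3, q4

# Bell measurement on photons 2 and 3: project onto the |Phi+> outcome.
bell_23 = phi_plus.reshape(2, 2)                          # indices: q2, q3
post_14 = np.einsum('abcd,bc->ad', state, bell_23.conj()) # unnormalised state of photons 1 and 4
prob = np.sum(np.abs(post_14) ** 2)                       # probability of this outcome
post_14 /= np.sqrt(prob)

print(prob)                 # 0.25: one of four equally likely Bell outcomes
print(post_14.flatten())    # (1, 0, 0, 1)/sqrt(2): photons 1 and 4 are now in |Phi+>
```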

The upshot? The data revealed the existence of quantum correlations between ‘temporally nonlocal’ photons 1 and 4. That is, entanglement can occur across two quantum systems that never coexisted.

What on Earth can this mean? Prima facie, it seems as troubling as saying that the polarity of starlight in the far-distant past – say, greater than twice Earth’s lifetime – nevertheless influenced the polarity of starlight falling through your amateur telescope this winter. Even more bizarrely: maybe it implies that the measurements carried out by your eye upon starlight falling through your telescope this winter somehow dictated the polarity of photons more than 9 billion years old.

Lest this scenario strike you as too outlandish, Megidish and his colleagues can’t resist speculating on possible and rather spooky interpretations of their results. Perhaps the measurement of photon 1’s polarisation at step II somehow steers the future polarisation of 4, or the measurement of photon 4’s polarisation at step V somehow rewrites the past polarisation state of photon 1. In both forward and backward directions, quantum correlations span the causal void between the death of one photon and the birth of the other.

Just a spoonful of relativity helps the spookiness go down, though. In developing his theory of special relativity, Einstein deposed the concept of simultaneity from its Newtonian pedestal. As a consequence, simultaneity went from being an absolute property to being a relative one. There is no single timekeeper for the Universe; precisely when something is occurring depends on your precise location relative to what you are observing, known as your frame of reference. So the key to avoiding strange causal behaviour (steering the future or rewriting the past) in instances of temporal separation is to accept that calling events ‘simultaneous’ carries little metaphysical weight. It is only a frame-specific property, a choice among many alternative but equally viable ones – a matter of convention, or record-keeping.

The lesson carries over directly to both spatial and temporal quantum nonlocality. Mysteries regarding entangled pairs of particles amount to disagreements about labelling, brought about by relativity. Einstein showed that no sequence of events can be metaphysically privileged – can be considered more real – than any other. Only by accepting this insight can one make headway on such quantum puzzles.

The various frames of reference in the Hebrew University experiment (the lab’s frame, photon 1’s frame, photon 4’s frame, and so on) have their own ‘historians’, so to speak. While these historians will disagree about how things went down, not one of them can claim a corner on truth. A different sequence of events unfolds within each one, according to that spatiotemporal point of view. Clearly, then, any attempt at assigning frame-specific properties generally, or tying general properties to one particular frame, will cause disputes among the historians. But here’s the thing: while there might be legitimate disagreement about which properties should be assigned to which particles and when, there shouldn’t be disagreement about the very existence of these properties, particles, and events.

These findings drive yet another wedge between our beloved classical intuitions and the empirical realities of quantum mechanics. As was true for Schrödinger and his contemporaries, scientific progress is going to involve investigating the limitations of certain metaphysical views. Schrödinger’s cat, half-alive and half-dead, was created to illustrate how the entanglement of systems leads to macroscopic phenomena that defy our usual understanding of the relations between objects and their properties: an organism such as a cat is either dead or alive. No middle ground there.

Most contemporary philosophical accounts of the relationship between objects and their properties embrace entanglement solely from the perspective of spatial nonlocality. But there’s still significant work to be done on incorporating temporal nonlocality – not only in object-property discussions, but also in debates over material composition (such as the relation between a lump of clay and the statue it forms), and part-whole relations (such as how a hand relates to a limb, or a limb to a person). For example, the ‘puzzle’ of how parts fit with an overall whole presumes clear-cut spatial boundaries among underlying components, yet spatial nonlocality cautions against this view. Temporal nonlocality further complicates this picture: how does one describe an entity whose constituent parts are not even coexistent?

Discerning the nature of entanglement might at times be an uncomfortable project. It’s not clear what substantive metaphysics might emerge from scrutiny of fascinating new research by the likes of Megidish and other physicists. In a letter to Einstein, Schrödinger notes wryly (and deploying an odd metaphor): ‘One has the feeling that it is precisely the most important statements of the new theory that can really be squeezed into these Spanish boots – but only with difficulty.’ We cannot afford to ignore spatial or temporal nonlocality in future metaphysics: whether or not the boots fit, we’ll have to wear ’em.

Scientists Are Rethinking the Very Nature of Space and Time


The Nature of Space and Time

A pair of researchers have uncovered a potential bridge between general relativity and quantum mechanics — the two preeminent physics theories — and it could force physicists to rethink the very nature of space and time.

Albert Einstein’s theory of general relativity describes gravity as a geometric property of space and time. The more massive an object, the greater its distortion of spacetime, and that distortion is felt as gravity.

In the 1970s, physicists Stephen Hawking and Jacob Bekenstein noted a link between the surface area of a black hole’s event horizon and its entropy, hinting at an underlying microscopic quantum structure. This marked the first realization that a connection existed between Einstein’s theory of general relativity and quantum mechanics.

Less than three decades later, theoretical physicist Juan Maldacena observed another link between gravity and the quantum world. That connection led to the creation of a model that proposes that spacetime can be created or destroyed by changing the amount of entanglement between different surface regions of an object.

In other words, this implies that spacetime itself, at least as it is defined in models, is a product of the entanglement between objects.

To further explore this line of thinking, ChunJun Cao and Sean Carroll of the California Institute of Technology (Caltech) set out to see if they could actually derive the dynamical properties of gravity (as familiar from general relativity) using the framework in which spacetime arises out of quantum entanglement. Their research was recently posted to the arXiv preprint server.

Using an abstract mathematical concept called Hilbert space, Cao and Carroll were able to find similarities between the equations that govern quantum entanglement and Einstein’s equations of general relativity. This supports the idea that spacetime and gravity do emerge from entanglement.

Carroll told Futurism the next step in the research is to determine the accuracy of the assumptions they made for this study.

“One of the most obvious ones is to check whether the symmetries of relativity are recovered in this framework, in particular, the idea that the laws of physics don’t depend on how fast you are moving through space,” he said.

A Theory of Everything

Today, almost everything we know about the physical aspects of our universe can be explained by either general relativity or quantum mechanics. The former does a great job of explaining activity on very large scales, such as planets or galaxies, while the latter helps us understand the very small, such as atoms and sub-atomic particles.

However, the two theories are seemingly not compatible with one another. This has led physicists in pursuit of the elusive “theory of everything” — a single framework that would explain it all, including the nature of space and time.

Because gravity and spacetime are an important part of “everything,” Carroll said he believes the research he and Cao performed could advance the pursuit of a theory that reconciles general relativity and quantum mechanics. Still, he noted that the duo’s paper is speculative and limited in scope.

“Our research doesn’t say much, as yet, about the other forces of nature, so we’re still quite far from fitting ‘everything’ together,” he told Futurism.

Still, if we could find such a theory, it could help us answer some of the biggest questions facing scientists today. We may be able to finally understand the true nature of dark matter, dark energy, black holes, and other mysterious cosmic objects.

Already, researchers are tapping into the ability of the quantum world to radically improve our computing systems, and a theory of everything could potentially speed up the process by revealing new insights into the still largely confusing realm.

While theoretical physicists’ progress in pursuit of a theory of everything has been “spotty,” according to Carroll, each new bit of research — speculative or not — leads us one step closer to uncovering it and ushering in a whole new era in humanity’s understanding of the universe.

According To Quantum Mechanics, Reality Might Not Exist Without An Observer


If a tree falls in the forest and there’s no one around to hear it, does it make a sound? The obvious answer is yes—a tree falling makes a sound whether or not we hear it—but certain experts in quantum mechanics argue that without an observer, all possible realities exist. That means that the tree both falls and doesn’t fall, makes a sound and is silent, and all other possibilities therein. This was the crux of the debate between Niels Bohr and Albert Einstein. Learn more about it in the video below.

Quantum Entanglement And The Bohr-Einstein Debate

Does reality exist when we’re not watching?

The Double Slit Experiment

Learn about one of the most famous experiments in quantum physics.


An Illustrated Lesson In Quantum Entanglement

Delve into this heavy topic with some light animation.


Inside knowledge: Is information the only thing that exists?


Physics suggests information is more fundamental than matter, energy, space and time – the problems start when we try to work out what that means


“IT FROM bit.” This phrase, coined by physicist John Wheeler, encapsulates what a lot of physicists have come to believe: that tangible physical reality, the “it”, is ultimately made from information, or bits.

Concepts such as entropy in thermodynamics, a measure of disorder whose irresistible rise seems to characterise our universe, have long been known to be connected with information. More recently, some efforts to unify general relativity, the theory that describes space and time, with quantum mechanics, the theory that describes particles and matter, have homed in on information as a common language.
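
As an illustrative aside (not from the article), the information-theoretic entropy underlying that connection is easy to compute: it measures, in bits, how uncertain we are about an outcome.

```python
import numpy as np

# Shannon entropy of a probability distribution, in bits.
def shannon_entropy(probs):
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]                       # convention: 0 * log(0) = 0
    return -np.sum(p * np.log2(p))

print(shannon_entropy([0.5, 0.5]))     # 1.0 bit: a fair coin is maximally uncertain
print(shannon_entropy([1.0, 0.0]))     # 0.0 bits: a certain outcome carries no information
```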


But what is this information? Is it “ontological” – a real thing from which space, time and matter emerge, just as an atom emerges from fundamental particles such as electrons and quarks and gluons? Or is it “epistemic” – something that just represents our state of knowledge about reality?

Here opinions are divided. Cosmologist Paul Davies argues in the book Information and the Nature of Reality that information “occupies the ontological basement”. In other words, it is not about something, it is itself something. Sean Carroll at the California Institute of Technology in Pasadena disagrees. Even if all of reality emerges from information, he says, this information is just knowledge about the universe’s basic quantum state.


So we have to drill deeper. In quantum mechanics, an object’s state is encoded.

Source: newscientist.com

Prominent Astrophysicist Calls the Big Bang A “Mirage”


Artist conceptualization of the Big Bang.

Science classes the world over teach that the Big Bang is the beginning of our universe, as if it’s established fact. In reality, it’s a theory, and one that’s been challenged periodically. In the last few years, two teams of scientists have revived the debate and offered fascinating alternative models. A recent paper published in the journal Nature even goes so far as to suggest that the Big Bang was a “mirage.”

This paper was written by astrophysicist Niayesh Afshordi and colleagues at the University of Waterloo in Ontario, Canada. They built upon the work of physicist Gia Dvali at Ludwig Maximilian University in Munich, Germany. Physicists have some evidence that the Big Bang took place.

For instance, microwave radiation lurking in the background suggests a primordial explosion some 13.7 billion years ago, when the Big Bang is said to have taken place. The fact that the universe is still expanding also suggests that all things came from a common point, strengthening the accepted theory. But what happened before it took place has always been a mystery.

Today, we’re told that everything began with an unimaginably hot, infinitely dense point in space, which did not adhere to the standard laws of physics. This is known as the singularity. But almost nothing is known about it. Afshordi points out in an interview in Nature, “For all physicists know, dragons could have come flying out of the singularity.” Mathematically, the Big Bang itself holds up. But equations can only show us what happened after, not before.

Background radiation in the universe. 

Since the singularity doesn’t fit into normal, predictable physics models and can’t offer a glimpse into its own origins, some scientists are searching for other answers. Dr. Ahmed Farag Ali of Benha University, in Egypt, calls the singularity, “the most serious problem of general relativity.”

He collaborated with Professor Saurya Das of the University of Lethbridge, in Canada, to investigate. In 2015, they released a series of equations which describe the universe, not as an object with a beginning and an end, but as a constantly flowing river, devoid of all boundaries.

There was no Big Bang in this view and similarly no “Big Crunch,” or a time when the universe might stop expanding and begin condensing. They published their work in the journal Physics Letters B, and plan to introduce a follow-up study. The paper attempts a Herculean feat, to heal the rift between general relativity and quantum mechanics.

In this view, the universe began when it filled with gravitons as a bath fills with water. These don’t contain any mass themselves but pass gravity on to other particles. From there, this “quantum fluid” spread out and the speed of expansion accelerated.

So far, it remains a hypothesis which must undergo a battery of tests, before it can compete with or supersede the present model. This isn’t the only challenge to currently accepted theory.

Currently accepted model. NASA Jet Propulsion Laboratory. Caltech.

To get a better idea of how the universe began, Prof. Afshordi and his team created a 3D model of it, floating inside a 4D model of “bulk space.” Remember, the fourth dimension is space-time. This 3D model resembled a membrane, so scientists named it the “brane.” Next, they examined stars within the model and realized that over time, some would die off in violent supernovae, turning into 4D black holes.

Black holes have an edge called the event horizon. Reach it and nothing will save you from being pulled in. Nothing escapes its omnipotent pull, not light, not even stars. We think of an event horizon as a corona around a black hole, as it is usually represented in 2D images. Everything in space is 3D (4D actually). So it isn’t a ring, but an outer layer of the black hole’s surface.

Afshordi ran the model to see what would happen when a 4D black hole swallowed a 4D star. A 3D brane fired out, as a result. What’s more, the ejected material began expanding in space. So the universe may be the result of a violent interaction between a star and a black hole.

Afshordi said, “Astronomers measured that expansion and extrapolated back that the Universe must have begun with a Big Bang — but that is just a mirage.”


A Deeper Look into Quantum Mechanics



Winfried Hensinger is the director of the Sussex Centre for Quantum Technologies in England, and he has spent a lifetime devoted to studying the ins and outs of quantum mechanics and just what it can do for us. When Hensinger first started in the field, quantum computing was still very much a theory, but now it is all around us, and various projects are within reach of creating a universal quantum computer. So, now that scientists are taking quantum computing more seriously, it won’t be long before the field begins to explode and applications that we never even imagined possible become available to use.

Quantum computing works with information that is stored in qubits, which have a value of either 1, 0, or any quantum superposition of the two states. The notion behind quantum superposition is that a quantum object has the ability to occupy more than one state until it is measured. Because quantum objects are used in this kind of computing, a given set of quantum values can represent much more data than binary ever could, since the data is not limited to 1s and 0s.
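
A minimal numerical sketch of that claim (illustrative only): a single qubit is a pair of complex amplitudes, and a register of n qubits is jointly described by 2**n amplitudes, far more state than n classical bits.

```python
import numpy as np

# One qubit: two complex amplitudes whose squared magnitudes give the
# probabilities of reading out 0 or 1.
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)   # equal superposition of 0 and 1
print(np.abs(qubit) ** 2)                              # [0.5, 0.5]

# n qubits: 2**n amplitudes describe the register as a whole.
n = 5                                                  # e.g. a five-qubit processor
register = np.zeros(2 ** n, dtype=complex)
register[0] = 1.0                                      # all qubits in state |0...0>
print(register.size)                                   # 32 amplitudes for just 5 qubits
```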

Currently, researchers are still battling it out to create a successful quantum computer, but they still have a way to go. Systems have been constructed that have access to a few qubits and are good for testing hardware configurations or running some algorithms, but they are very expensive and still very basic. When Hensinger was asked about the current changes within quantum computing, he simply replied, “It used to be a physics problem. Now, it’s an engineering problem.”

Two possibilities from researchers for what the foundation of quantum computing should look like are superconducting qubits and trapped ions. Superconducting qubits rely on supercooled electrical circuits and could bring many advantages to the manufacturing process when made on a mass scale. The trapped-ion method can cope with environmental factors but has trouble controlling lots of charged atoms within a vacuum. Hensinger supports both of these implementations and believes they will both produce a quantum computer. During his research, Hensinger’s results showed that the trapped-ion method was slightly ahead of the competition.

However, Hensinger has also created his own method with the help of his team at Sussex, one that focuses on the individually controlled voltages needed within the quantum circuit. He says, “With this concept, it becomes much easier to build a quantum computer. This is one of the reasons why I’m very optimistic about trapped ions.” Hensinger and colleagues also chose to work with trapped ions because the method works at room temperature, unlike superconducting qubits.

IBM, on the other hand, has chosen to work with superconducting qubits as the basis for their quantum work.  Their quantum computer consists of a five-qubit processor that’s contained within a printed circuit board.  The refrigerated system contains control wires that transmit microwave signals to the chip and send signals out through various amplifiers and passive microwave components where they are interpreted by a classical computer for easy reading of the system’s qubit state from outside the refrigerated system.  All of this takes up more than 100 square feet within IBM’s lab, and that is because of the significant cooling that needs to be done.

 

Jerry Chow, the manager of the Experimental Quantum Computing team at IBM, says that the reason IBM uses superconducting qubits has more to do with the previous research the company had already done using this technique. And, as Chow explains, “I think superconducting qubits are really attractive because they’re micro-fabricated. You can make them on a computer chip, on a silicon wafer, pattern them with the standard lithography techniques using transistor processes, and in that sense have a pretty straightforward route toward scaling.”

Two beryllium ions trapped 40 micrometers apart from the square gold chip in the center form the heart of this ‘trapped ion’ quantum computer. (Photo: Y. Colombe/NIST)
NASA’s 512-qubit Vesuvius processor is cooled to 20 millikelvin, more than 100 times colder than interstellar space. (Photo: NASA Ames/John Hardman)

So, one thing that we know for certain is that when it comes to quantum computing, both superconducting qubits and trapped ions have emerged as the two techniques to take note of. Quantum computing will develop further over the next few years, and it’s in everyone’s best interests if large-scale quantum computers aren’t tied down to just one possible solution. Hensinger, for one, is definitely in support of both ideas and notes, “It’s healthy to have different groups trying different things.” At the moment, it’s still hard to say exactly what quantum computing will be used for in the future, but algorithms are constantly being worked on to see what quantum hardware could be capable of.

A quantum algorithm is a recipe, usually written in a mathematical format, that provides a solution for a particular problem. But, because quantum computing does not work in the same way as classical computing, algorithms written for binary machines are useless, so new ones need to be made, and that is something that Krysta Svore and her team at Microsoft’s Quantum Architectures and Computation Group focus on. She states, “We have a programming language that we have developed explicitly for quantum computing. Our language and our tools are called LIQUi|>. LIQUi|> allows us to express these quantum algorithms, then it goes through a set of optimizations, compilations, and basically rewrites the language instructions into device-specific instructions.”

Svore and her team at the Quantum Architectures and Computation Group have access to a simulated quantum computer that currently runs on a classical system. This allows them to debug existing quantum algorithms, as well as design and test new ones, and it helps the hardware team to see how quantum computers could be used in practice. However, IBM has taken things one step further. As well as having a successful simulation that they can work on, they have also launched the IBM Quantum Experience, an online interface that allows students and enthusiasts to have a go themselves with a five-qubit system and run their own algorithms and experiments from the cloud-based platform.

IBM’s five-qubit processor uses a lattice architecture that scales to create larger, more powerful quantum computers. (Photo: IBM)
One of the most famous applications in the world of quantum computing comes in the form of Shor’s algorithm.  Ryan O’Donnell of the Carnegie Mellon School of Computer Science in Pittsburgh said, “In 1997, Shor showed an algorithm for a quantum computer that would be able to factor such numbers very efficiently”, when referring to numbers with thousands of digits.  Ever since then it has become a kind of measuring stick for the advancement of the whole field.  One of the current applications involving quantum hardware is to research different areas of science further.
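
Shor’s algorithm reduces factoring to finding the period of a^x mod N, and only that period-finding step needs a quantum computer. As a hedged sketch with toy numbers (the quantum step is simply assumed to have returned the period), here is the classical post-processing that turns a period into factors:

```python
from math import gcd

def factors_from_period(N, a, r):
    """Classical post-processing of Shor's algorithm: given the period r of
    f(x) = a^x mod N (found by the quantum part), recover factors of N."""
    if r % 2 != 0 or pow(a, r // 2, N) == N - 1:
        return None                     # unlucky choice of a; the algorithm retries
    return gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)

# Toy example: N = 15, a = 7. The powers 7, 4, 13, 1, 7, ... repeat with period r = 4,
# and that period reveals the factors.
print(factors_from_period(15, 7, 4))    # (3, 5)
```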

Although quantum computing is going to become more common over the next few years, it’s not suddenly going to become the next mainstream technology found in every office and home. But the technology, in one form or another, may do. In the next ten years, quantum computing will develop, although its exact development may not be immediately obvious to the general public, because at the moment the promise of quantum computing is much more advanced than where researchers are with it. But, eventually, it will revolutionize computing.

 

Watch the video. URL: https://youtu.be/z1GCnycbMeA

Has LIGO Proved Einstein Wrong & Found Signs of Quantum Gravity?


Three physicists have predicted that finding ‘echoes’ of gravitational waves coming from blackhole mergers might be signs of a theory that finally unifies quantum mechanics and general relativity.

A computer simulation shows two neutron stars, the extremely dense cores of now-dead stars, smashing into each other to form a blackhole. Credit: NASA Goddard Space Flight Centre/Flickr, CC BY 2.0

Your high-school physics teacher would’ve likely taught you to think about the smallest constituents of nature by asking you to start with a large object – like a chair – and then keep breaking it down into smaller bits. For the purposes of making sense of your syllabus, you probably stopped at protons, electrons and neutrons. That’s a pity because, if you’d kept going, you’d have stumbled upon some of the biggest mysteries of the universe. At some point, you’d have hit the Planck scale: the smallest region of space, the shortest span of time. This is the smallest scale that quantum mechanics can make sense of, and this is where many physicists expect to find the fundamental particles that make up space itself.

If this region – or some phenomena that are thought to belong exclusively to this region – are found, then physicists will have made a stunning discovery. Apart from finding the ‘atoms’ of space, they’d have opened the doors to marrying the two biggest theories of physics: quantum mechanics and the theory of general relativity (GR). The former’s demesne is the small and smaller particles you passed along the way to the Planck scale. The latter’s is the largest distances and spans of time in the universe. And the discovery would be stunning because GR, created by Albert Einstein 101 years ago, doesn’t allow space to have any constituent ‘atoms’. For GR, space is smooth. And it is this fundamental conflict that has prevented the theories from being reconciled into a single ‘quantum gravity’ theory.

But the first signs of change might be here.

In 2015, the twin Laser Interferometer Gravitational-wave Observatories (LIGO) in the US made the first direct detection of gravitational waves. These are ripples of energy sent strumming through space at the speed of light when a massive object accelerates. LIGO had in fact detected gravitational waves created by two blackholes that were spinning rapidly around each other before colliding and merging to form a larger blackhole. The discovery was unequivocal proof that Einstein’s GR was valid and realistic. But curiously enough, three physicists recently announced that the discovery may have in fact achieved the opposite: invalidated GR and instead signalled that the first signs of quantum gravity may have been found.

A blackhole is a particularly interesting thing. One is formed when the core of a dying star of a certain kind becomes so massive that, after the initial supernova explosion, it collapses inwards, tightly curving space around itself such that even light can’t escape its prodigious gravity. A blackhole is a consequence of GR – though Einstein didn’t himself predict its existence first. At the same time, because of its freaky nature, a blackhole also often exhibits quantum mechanical properties that physicists have been interested in for their potential to reveal something about quantum gravity. And many of these properties have to do with the blackhole’s outer shell: the event horizon, behind which nothing can escape the blackhole’s heart no matter how fast it is moving away.

GR can’t perfectly predict what the insides of a blackhole are like – and the theory simply breaks down when it comes to the blackhole’s heart itself. But using LIGO’s data when it tuned in to the mergers of two blackhole-pairs in 2015, three physicists are now saying there’s some reason to believe GR may be breaking down at the event horizon itself. They say they are motivated by having spotted signs (in the data) of a quantum-gravity effect known simply as an echo.

According to GR, the event horizon is a smooth and infinitely thin surface: at any given moment, you’re either behind it, falling into the blackhole, or in front of it, looking into the abyss that is the blackhole. But according to quantum mechanics, the event horizon is actually a ‘firewall’ of particles popping in and out of existence. You step into it and you’re immediately incinerated. But apart from making for an interesting gedanken experiment about suicidal astronauts, the existence of such a firewall can have very real consequences.

When two blackholes collide to form a larger blackhole, there is a very large amount of energy released. In LIGO’s first detection of a merger, made on September 14, 2015, two blackholes weighing 29 and 36 solar masses merged to form a blackhole weighing 62 solar masses. The remaining three solar masses – equivalent to roughly 5.4 × 10^47 joules of energy – were expelled as gravitational waves. If GR has its way, with an infinitely thin event horizon, then the waves are immediately expelled into space. However, if quantum mechanics has its way, then some of the waves are first trapped inside the firewall of particles, where they bounce around like echoes depending on the angle at which they were ensnared, and escape in instalments. Corresponding to the delay in setting off into space, LIGO would have detected them similarly: not arriving all at once but with delays.
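
A quick back-of-the-envelope check of that energy figure, using the standard values for the solar mass and the speed of light:

```python
# E = m c^2 for roughly three solar masses of radiated mass-energy.
M_sun = 1.989e30        # kg
c = 2.998e8             # m/s

radiated_mass = (29 + 36 - 62) * M_sun      # ~3 solar masses
energy = radiated_mass * c**2
print(f"{energy:.2e} J")                    # ~5.4e47 joules
```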

LIGO original template for GW150914, along with Abedi-Dykaar-Afshordi’s best fit template for the echoes. Caption and credit: arXiv:1612.00266

The three physicists – Jahed Abedi, Hannah Dykaar and Niayesh Afshordi – simulated firewall-esque conditions using mirrors placed close to a computer-simulated blackhole to determine the intervals at which gravitational echoes from each of the three events LIGO has detected so far would arrive. When they had their results, they went looking for similar signals in the LIGO data. In a pre-print paper uploaded to the arXiv server on December 1, the trio writes that it did find them, with a statistical significance of 2.9 sigma. This is a mathematical measure of confidence that’s not good enough to technically be considered evidence (3 sigma), let alone proof of any kind (5 sigma). And when tested for each event, the odds are lower: they max out at 2 sigma in the case of the merger known as GW150914, the first one that LIGO detected. Finally, even if the signal persists, it might not ultimately be due to quantum gravity at all but to some other source.
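
For readers unfamiliar with sigma levels, here is a rough conversion to p-values, assuming the usual one-sided Gaussian convention; the thresholds quoted above correspond to the conventional labels in the comments.

```python
from scipy.stats import norm

# Probability of a fluctuation at least this large under the no-signal hypothesis.
for sigma in (2.0, 2.9, 3.0, 5.0):
    print(f"{sigma} sigma -> p ~ {norm.sf(sigma):.1e}")
# 2.0 sigma -> p ~ 2.3e-02
# 2.9 sigma -> p ~ 1.9e-03   (the reported significance)
# 3.0 sigma -> p ~ 1.3e-03   (conventionally "evidence")
# 5.0 sigma -> p ~ 2.9e-07   (conventionally "discovery")
```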

Nonetheless, the significance isn’t zero – and the LIGO team has confirmed that it is looking for more signs of echoes in its data. Luckily for everyone, the detectors also recently restarted with upgrades to make them more sensitive, to more accurately study the gravitational wave signals arising from blackholes of diverse masses. If future experiments can’t detect stronger echoes (or eliminate existing sources of noise that could be clouding observations), then that’s that for this line of verifying quantum gravity. But until then, it wouldn’t be amiss to speculate on its veracity – or on variations of it that might yield better results, results closer to LIGO’s capabilities – if only because the data that LIGO collects for each merger is so complex.

Ultimately, the most heartening takeaway from the Abedi-Dykaar-Afshordi thesis is that there is an experimental way to confirm the predictions of quantum gravity at all. Physicists have long held it to be out of human reach. This is obvious when you realise the Planck length is 100-billion-billion-times smaller than the diameter of a proton and the Planck second is 10-billion-billion-billion-times smaller than the smallest unit of time that some of the most powerful atomic clocks can measure. If quantum gravity is a true theory, then it will be to nature’s unending credit that it spawned blackholes that can magnify the effects of such infinitesimal provenance – and to humankind’s for building machines that can eavesdrop on them.
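
An order-of-magnitude check of the first of those comparisons, using the standard value of the Planck length and a typical proton diameter:

```python
planck_length = 1.616e-35     # metres
proton_diameter = 1.7e-15     # metres, approximate

print(proton_diameter / planck_length)   # ~1e20, i.e. about a hundred billion billion
```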

An Indian LIGO detector is currently under development and is expected to join the twin American ones to study gravitational waves by 2023.