How Quantum Technology Is Making Computers Millions Of Times More Powerful


When the first digital computer was built in the 1940s, it revolutionized the world of data calculation. When the first programming language was introduced in the 1950s, it transformed digital computers from impractical behemoths to real-world tools. And when the keyboard and mouse were added in the 1960s, it brought computers out of industry facilities and into people’s homes. There’s a technology that has the potential to change the world in an even bigger way than any of those breakthroughs. Welcome to the future of quantum computing.

 

1s and 0s (And Everything In Between)

Every computer you’ve ever encountered works on the principles of a Turing machine: it manipulates small units of information, called bits, that exist as either a 0 or a 1, a system known as binary. The fundamental difference in a quantum computer is that it isn’t limited to those two options. Its “quantum bits,” or qubits, can exist as 0, 1, or a superposition of 0 and 1: in a sense, both 0 and 1 at once, with any weighting in between. It’s only once you measure a qubit that it “decides” on a value. That’s what’s so groundbreaking about quantum computing: a conventional computer can only work on one computation at a time (the fastest just have ways of making multiple components work on separate tasks simultaneously), but superposition lets a quantum computer explore an enormous number of computational paths at once. With that kind of power, just imagine what humanity could accomplish!
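To see how measurement turns a superposition into an ordinary 0 or 1, here’s a minimal Python sketch. It is a classical toy model with assumed amplitudes, not a real quantum simulator: the superposition only shows up in the statistics of many measurements.

```python
import random

# A toy model of a single qubit in an equal superposition of 0 and 1.
# Its state is a pair of amplitudes; the probability of measuring 0 is
# the squared magnitude of the first amplitude (here, exactly 1/2).
alpha = beta = 2 ** -0.5          # equal superposition: |alpha|^2 = |beta|^2 = 0.5
p_zero = alpha ** 2

random.seed(0)                    # fixed seed so the run is repeatable
counts = {0: 0, 1: 0}
for _ in range(10_000):
    outcome = 0 if random.random() < p_zero else 1   # measurement picks a value
    counts[outcome] += 1

print(counts)   # roughly 5,000 of each outcome
```

Before measurement the state carries both amplitudes; each individual measurement still yields a plain 0 or 1.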

But that’s not all that makes quantum computing so impressive: there’s also the phenomenon of entanglement. Qubits don’t exist in a vacuum. Generally, systems of multiple qubits are entangled, so that they each take on the properties of the others. Take an entangled system of two qubits, for example. Once you measure one qubit, it “chooses” one value. But because of its relationship, or correlation, to the other qubit, that value instantly tells you the value of the other qubit; you don’t even need to measure it. When you add more qubits to the system, those correlations get more complicated. According to Plus magazine, “As you increase the number of qubits, the number of those correlations grows exponentially: for n qubits there are 2^n correlations. This number quickly explodes: to describe a system of 300 qubits you’d already need more numbers than there are atoms in the visible Universe.” But that’s just the point: because those numbers are beyond what we could ever record with a conventional computer, the hope is that quantum computers could crunch unfathomably large amounts of information that conventional computers could never dream of.
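The 2^n growth quoted above is easy to check directly. A quick sketch (the atom count for the visible Universe is a rough standard order-of-magnitude estimate):

```python
# Each extra qubit doubles the number of amplitudes needed to describe the
# system in full: n qubits take 2**n numbers.
for n in (2, 10, 50, 300):
    print(n, 2 ** n)

# Rough standard estimate: about 10**80 atoms in the visible Universe.
atoms_in_visible_universe = 10 ** 80
print(2 ** 300 > atoms_in_visible_universe)   # True
```

At 300 qubits, 2^300 is already about 10^90, ten billion times the estimated atom count.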

D-Wave 2000Q quantum computer

The First Steps Of Quantum Computing

In the future, quantum computers could revolutionize everything from human genomics to artificial intelligence, and in just a few decades the technology has already gotten to the point where that’s a very real possibility. In 1998, researchers successfully read out information from a single qubit, and in 2000, scientists at Los Alamos National Laboratory unveiled the first 7-qubit quantum computer. Less than two decades later, D-Wave’s 1,000-qubit quantum computers are being used by the likes of Google, NASA, and Lockheed Martin, and a 2,000-qubit model has been unveiled. Feel like replacing your old laptop? That quantum computer will run you a cool $15 million, a small price to pay for a millionfold improvement.

Watch And Learn: Our Favorite Content About The Future Of Computers

Quantum Computers Explained

  1. Transistors can either block or open the way for bits of information to pass. (00:54)
  2. Four classical bits can be in only one of 16 different configurations at a time; four qubits can be in all 16 at once. (03:44)
  3. Quantum computers could better simulate the quantum world, possibly leading to insights in medicine and other fields. (06:12)

Source:curiosity.com

The Double-Slit Experiment Cracked Reality Wide Open


The double-slit experiment seems simple enough: cut two slits in a sheet of metal and send light through them, first as a constant wave, then in individual particles. What happens, though, is anything but simple. In fact, it’s what started science down the bizarre road of quantum mechanics.

You Got Particles In My Waves

In the early 1800s, the majority of scientists believed that light was made up of particles, not waves. English scientist Thomas Young had a hunch that the particle theory wasn’t the end of the story, and he set out to prove that light was a wave. He knew that waves interact in predictable ways, and if he could demonstrate those interactions with light, he would have proven that light was indeed a wave. So he set up an experiment: he cut two slits in a sheet of metal and shone light through them onto a screen.

If light was indeed made of particles, the particles that hit the sheet would bounce off and those that passed through the slits would create the image of two slits on the screen, sort of like spraying paint on a stencil. But if light was a wave, it would do something very different: once they passed through the slits, the light waves would spread out and interact with one another. Where the waves met crest-to-crest, they’d strengthen each other and leave a brighter spot on the screen. Where they met crest-to-trough, they would cancel each other out, leaving a dark spot on the screen. That would produce what’s called an “interference pattern” of one very bright slit shape surrounded by “echoes” of gradually darker slit shapes on either side. Sure enough, that’s what happened. Light traveled in waves. All’s well that ends well, right?
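The crest-to-crest and crest-to-trough logic above can be sketched numerically. This is an idealized two-slit model with assumed example values for the wavelength and slit separation, and it ignores the broader single-slit diffraction envelope:

```python
import math

# Idealized two-slit interference. The two paths to a point on the screen
# differ in length; whole-wavelength differences (crest-to-crest) give a
# bright fringe, half-wavelength differences (crest-to-trough) a dark one.
wavelength = 500e-9   # assumed: green light, 500 nm
slit_gap = 2e-6       # assumed: 2 micrometers between slits

def intensity(theta):
    """Relative brightness on the screen at viewing angle theta (radians)."""
    phase = math.pi * slit_gap * math.sin(theta) / wavelength
    return math.cos(phase) ** 2

center = intensity(0.0)                              # crest-to-crest: brightest
first_dark = math.asin(wavelength / (2 * slit_gap))  # first crest-to-trough angle
print(center, round(intensity(first_dark), 6))       # 1.0 0.0
```

Sweeping theta from one edge of the screen to the other traces out exactly the alternating bright and dark fringes Young observed.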

A light wave passing through two slits interacts with itself to create an interference pattern on a screen.

Wait, That Can’t Be Right

Around the turn of the 20th century, a few scientists began to refine this idea. Max Planck suggested that light and other types of radiation come in discrete amounts (it’s “quantized”), and Albert Einstein proposed the idea of the photon, a “quantum” of light that behaves like a particle. In other words, light is both a particle and a wave. So back to the double-slit experiment: remember when we said that if light was a particle, it would create a sort of spray-paint stencil pattern instead of an interference pattern? Using a special tool, you can actually send light particles through the slits one by one. But when scientists did this, something strange happened.

The interference pattern still showed up.

This suggests something very, very weird is going on: the photons seem to “know” where they would go if they were in a wave. It’s as if a theater audience showed up without seat assignments, but each person still knew the exact seat to choose in order to fill the theater correctly. As Popular Mechanics puts it, this means that “all the possible paths of these particles can interfere with each other, even though only one of the possible paths actually happens.” All realities exist at once (a concept known as superposition) until the final result occurs.

Weirder still, when scientists placed detectors at each slit to determine which slit each photon was passing through, the interference pattern disappeared. That suggests that the very act of observing the photons “collapses” those many realities into one. Mind-blowing, right? It is for scientists, too, which is why quantum mechanics is one of the most hotly debated areas of modern science.

Watch And Learn: The Most Mind-Melting Videos about Quantum Physics

The Quantum Experiment That Broke Reality

It changed science forever.


Whatever, We’re Probably Living In A Hologram Anyway, Says Neil deGrasse Tyson


Look around you. Your shoes, that tree, the Starbucks cold brew you’re clutching—it’s all very much right here in the real world. But what if the “real world” we live and move around in is just a computer simulation? Neil deGrasse Tyson, everyone’s favorite astrophysicist, thinks there’s a very high chance that everything we know is just a hologram. He’s just one of a growing number of people who believe it.

 

Philosopher Nick Bostrom proposed the simulation hypothesis in 2003, and the belief has only snowballed since then. Most notably, Elon Musk and astrophysicist Neil deGrasse Tyson have jumped on the nothing-we-know-is-real bandwagon. Tyson hosted the 2016 Isaac Asimov Memorial Debate at the American Museum of Natural History, which addressed this question head-on: Is the universe a simulation? At the event, Tyson was joined by panelists Lisa Randall, a theoretical physicist at Harvard; Max Tegmark, a cosmologist at MIT; David Chalmers, a professor of philosophy at NYU; Zohreh Davoudi, a theoretical physicist at MIT; and James Gates, a theoretical physicist at the University of Maryland.

The opinions on the simulation hypothesis varied (Chalmers had a real mind-boggler: “We’re not going to get conclusive proof that we’re not in a simulation, because any proof would be simulated.”). Tyson himself said, “I think the likelihood may be very high. […] it is easy for me to imagine that everything in our lives is just a creation of some other entity for their entertainment.” But whether or not everyone is in agreement about the matter, the concept is legitimate enough for the top minds in theoretical physics to meet on and parse out.

It’s Time To Meet Your Simulator

 Okay, let’s play along. Say nothing is actually real and we’re all just a bunch of cosmic holograms living out our lives in someone’s elaborate computer simulation. Who is that someone? Martin Savage, a physicist at the University of Washington, has some thoughts. Savage, along with two colleagues, published a paper that explores this issue in November 2012. In a conversation with Talk Nerdy To Me, Savage explains that the simulators may be our own descendants from the far future. Whoa. In the same way archaeologists dig up bones and other artifacts to piece together our past, perhaps future generations will have the ability to recreate simulations of how their ancestors (us) once lived. Yes, maybe your great-great-great-great-great-grandkid is studying you right this second. Hi, kiddo!

2016 Isaac Asimov Memorial Debate: Is the Universe a Simulation?

Watch the video discussion.


Meet Sabrina Pasterski, The 23-Year-Old “New Einstein”


At 23, Sabrina Pasterski has a standing job offer from NASA. Her research has been cited by Stephen Hawking, and it’s been nearly a decade since she built her first plane engine. Kind of makes us wonder what we’re doing with our lives.

 

A Talent for Building Spacecraft

 Sabrina Pasterski has her eye on the prize: the 23-year-old Harvard PhD student (and top MIT grad) has never had an alcoholic drink or a cigarette, and isn’t on any form of social media, from Facebook to LinkedIn. She doesn’t even own a smartphone. “I’d rather stay alert, and hopefully I’m known for what I do and not what I don’t do,” Pasterski told OZY.

And what she does is incredible: Pasterski researches black holes, spacetime, and quantum gravity, and her papers have been cited by the likes of Andrew Strominger (her advisor at Harvard) and Stephen Hawking. One of the special skills she lists on her résumé? “Spotting elegance within the chaos.”

Pasterski has had an interest in designing spacecraft from a young age: “It’s a freedom like nothing else you can compare it to,” she told Chicago Tonight. She built her first single-engine plane at age 14, and she has a standing job offer from Jeff Bezos, founder of Amazon.com and the aerospace company Blue Origin.

But it wasn’t all smooth sailing: Pasterski was rejected from Harvard and waitlisted at MIT in the spring of 2010, before eventually being accepted. From there, she graduated with the highest honors and entered the prestigious Harvard PhD program, gaining accolades such as a $250,000 Hertz Foundation fellowship for her research.

Into the Future

Right now, Pasterski is focused on grappling with the physics problems that excite her, learning as much as she can from the rich resources she has access to. But with standing job offers from NASA and Blue Origin, Pasterski might eventually make a big impact on air travel, and specifically space travel. Companies like Blue Origin and SpaceX are looking for bright young minds to shape the future of space exploration and push us into the next frontier. And Pasterski’s potential hasn’t gone unnoticed: Forbes named her to its 30 Under 30 All Star list.

Still, Pasterski remains humble about her success. “I am just a grad student. I have so much to learn. I do not deserve the attention,” she writes.


It’s Finally Settled: Absolute Zero Is Impossible


Just how cold can it get? The answer may be more important than you think: scientists study absolute zero to figure out all the wacky stuff that happens to molecules when chilly temperatures slow them way down. But until recently, absolute zero had a shadow of controversy surrounding it, one that two researchers decided to tackle head-on. Grab your scarf and coat for this one.

What All the Fuss is About

Absolute zero is the lowest temperature that is theoretically possible: 0 Kelvin, or about -273.15 degrees Celsius. Entropy, on the other hand, is a measure of the disorder in a system. In 1906, as described by New Scientist, the German chemist Walther Nernst put forward the principle that, as a system’s temperature approaches absolute zero, the system’s entropy goes to zero. In 1912, he added the unattainability principle, stating that absolute zero is actually impossible to reach. Taken together, the principles form the third law of thermodynamics. However, some have long questioned whether the third law truly deserves the status of a law, and it has remained controversial for decades. But a new study from researchers at University College London may just settle the matter once and for all.
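For reference, the Kelvin, Celsius, and Fahrenheit scales mentioned in this piece are related by simple linear formulas; a quick sketch:

```python
def kelvin_to_celsius(k):
    """Kelvin and Celsius share the same degree size; they differ by a 273.15 offset."""
    return k - 273.15

def celsius_to_fahrenheit(c):
    """A Fahrenheit degree is 5/9 the size of a Celsius degree, offset by 32."""
    return c * 9 / 5 + 32

absolute_zero_c = kelvin_to_celsius(0.0)
print(absolute_zero_c)                                   # -273.15
print(round(celsius_to_fahrenheit(absolute_zero_c), 2))  # -459.67
```

So the unreachable floor sits at -273.15 °C, or -459.67 °F.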

The problem is this: at 0 Kelvin, a system has minimal motion—but not a lack of motion altogether. That’s because of the Heisenberg uncertainty principle, which states that we can’t know both the exact position and momentum of a particle at the same time. There may still be small fluctuations of movement. So, how could a system’s entropy go down to zero?

The short answer: it can’t.

Solving the Riddle

 A new study from researchers Jonathan Oppenheim and Lluís Masanes sheds light on this riddle, by showing that reaching 0 Kelvin is physically impossible. Think of it this way: as described by Science Alert, the cooling of a system is essentially the “shoveling” out of heat from that system into the surrounding environment. But cooling has its limits, determined by how many steps it takes to shovel the heat out, and the size of the surrounding environment. You can only reach absolute zero, then, if you have both infinite steps and an infinite surrounding environment.

Dr. Lluís Masanes told IFL Science that their study shows “it is impossible to cool a system to absolute zero in a finite time” and that they “established a relation between time and the lowest possible temperature. It’s the speed of cooling.”

The researchers used quantum mechanics to arrive at their conclusion, viewing the cooling process as a computation, according to IFL Science. A longstanding debate about the third law of thermodynamics has finally been put to bed.

What the Coldest Temperatures in the Universe Can Tell Us
Do Electrons Move At Absolute Zero?
Fahrenheit, Celsius and Kelvin Explained In Ten Seconds
