Optical Computers Run At The Speed Of Light—Literally


Virtually every device you use—from the one you’re using to read this to the pocket calculator gathering dust in the back of your desk—relies on the same basic technology: circuits containing many tiny transistors that communicate with each other using electrons. We’ve come a very long way since the room-sized computers of the 1950s, but as computing gets smaller, faster, and more complicated, we get closer to hitting a wall. There’s a physical limit to how powerful traditional computers can get. That’s why scientists are turning to completely new forms of technology for future computers.

The first in our three-part series explored how some computers use artificial neurons to “think.” Discover another way scientists are rethinking computing in the second part of the series below.

Computing At The Speed Of Light

 Neurons are only one way to make computers more like brains. Another way is by changing the medium they use to communicate. Conventional computers exchange information in the form of electrons, while the human brain uses a complex mix of chemical signals. Some research suggests, however, that the brain also relies on light particles known as photons. What if computers did too?

Optical computing is designed to do just that. Photons can move information much more quickly than electrons can—they literally travel at the speed of light. We’re already using photons to send data at breakneck speeds via fiber-optic cables; the problem is that the data has to be converted back into electrons once it arrives at its destination. If you could replace a computer’s wires with optical waveguides, it might be able to do everything it does now, only much faster.

Photons Aren’t A Fix-All

There are a few problems, though. For one thing, light waves are just too big for what we need them to do. According to ExtremeTech, “In general, the smallest useful wavelength of light for computing has been in the infrared range, around 1000 nm in size, while improvements in silicon transistors have seen them reach and even pass the 10 nm threshold.” There are a few tricks scientists can use to get around this problem, but they add extra complications to something that needs near-flawless speed and precision.

Still, the principles of optical computing are useful in certain circumstances. Li-Fi uses light instead of radio waves to broadcast wireless internet 100 times faster than Wi-Fi, for instance. Technologies such as Optalsys have also found novel ways to get around light’s limitations. In the future, your laptop may not run on photons, but optical computing will surely have a place.

Watch And Learn: Our Favorite Content About Optical Computing

Researchers Create A Light-Based Microprocessor

It’s up to 50 times faster than electron-based microprocessors.

A Matrioshka Brain Is A Computer The Size Of A Solar System


Imagine a computer the size of a solar system. For power, it would use a Dyson sphere—a solar array that completely surrounds the host star to collect almost all of its energy. That energy-collecting sphere would double as an ultra-powerful computer processor. Once the sphere had collected all the energy it needed, it would pass the excess to another larger Dyson-sphere processor that completely surrounded the first, repeating the process until all of the energy was being used. That’s why this theoretical computer is called a Matrioshka brain: the nested Dyson spheres would resemble matryoshka dolls, or Russian nesting dolls.

Of course, if you surrounded your star with Dyson spheres, it would be difficult for life on your planet to continue. That’s kind of the point: this Matrioshka brain would be so powerful that a species could upload its entire consciousness into it and live within an alternate universe simulated by the computer. The species itself could die and its planet could be destroyed, but the civilization would live on in a digital world identical to the one it left behind. In fact, many people, including Elon Musk, believe we’re living in a simulation like that at this very moment. This provides one answer to the Fermi Paradox—that is, the question of why we haven’t encountered aliens despite the likelihood that they’re out there. It’s possible that any civilization advanced enough to find us has already decided to abandon reality entirely and upload itself to a Matrioshka brain. Delve deeper into megastructures and theoretical tech with the videos below.

 How To Turn The Solar System Into A Computer

What’s a Matrioshka brain, and how would it work?


What Is A Dyson Sphere?

Find out whether this theoretical megastructure is even possible.


  1. In the future, we’ll build larger and larger solar arrays until we enclose the entire sun in a cloud of solar satellites. This “cloud” is known as a Dyson sphere. (00:25)
  2. In 1960, physicist Freeman Dyson theorized that if a future civilization could enclose our star in a rigid shell, it could generate 384 yottawatts (384 x 10^24 watts) of energy. (See the quick check after this list.) (01:00)
  3. There are many problems with the concept.
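That 384-yottawatt figure is essentially the Sun’s entire power output. As a quick back-of-the-envelope check (a sketch using textbook solar values, not numbers from the video), the Stefan-Boltzmann law reproduces roughly the same number:

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 * K^4)
R_SUN = 6.957e8    # solar radius, m
T_SUN = 5772.0     # solar effective temperature, K

# Total power radiated by a sphere of radius R at temperature T:
# L = 4 * pi * R^2 * sigma * T^4
luminosity = 4 * math.pi * R_SUN**2 * SIGMA * T_SUN**4
print(f"Solar luminosity: about {luminosity:.2e} W")              # ~3.8e26 W
print(f"In yottawatts (1 YW = 1e24 W): about {luminosity / 1e24:.0f} YW")
```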

The Computers Of The Future Will Think Like Brains


Virtually every device you use—from the one you’re using to read this to the pocket calculator gathering dust in the back of your desk—relies on the same basic technology: circuits containing many tiny transistors that communicate with each other using electrons. We’ve come a very long way since the room-sized computers of the 1950s, but as computing gets smaller, faster, and more complicated, we get closer to hitting a wall. There’s a physical limit to how powerful traditional computers can get. That’s why scientists are turning to completely new forms of technology for future computers.

The first in our three-part series on the future of computing involves one form you’re familiar with—it’s sitting right inside your skull.

If It Works Like A Brain And Thinks Like A Brain

 These days, we’re not satisfied letting our computers simply run programs and crunch numbers. For tasks like recognizing faces, identifying speech patterns, and reading handwriting, we need artificial intelligence: computers that can think. That’s why scientists figured out a way to build computers that work like brains, using neurons—artificial ones, anyway.

The big difference between an artificial neural network, as it’s called, and a conventional, or algorithmic, computer is the approach it uses to solve problems. An algorithmic computer solves problems based on an ordered set of instructions. The problem is, you have to know what the instructions are first so you can tell the computer what to do. The benefit of this approach is that the results are predictable, but there are definite drawbacks. An algorithmic computer can only do things one step at a time—even though with many components working simultaneously, that can happen surprisingly fast—and you can’t ask it to solve a problem you don’t already know how to solve.

That’s where neural networks come in. They process information kind of like a brain: a large number of interconnected “neurons” all work at the same time to solve a problem. Instead of following a set of instructions, they do things by following examples. That means that a neural network literally learns how to solve problems based on limited information. Of course, when you don’t know how to solve a problem, you also don’t know what the solution will be. Like your brain, neural networks sometimes arrive at the wrong solutions. That’s the one drawback to neural computing: it’s unpredictable.
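To make “learning from examples” concrete, here is a minimal sketch of a toy neural network in Python. The architecture, training data (the XOR function), and learning rate are illustrative assumptions rather than anything from the article: the network is never given instructions for XOR, it only sees example inputs and outputs and adjusts its connection strengths until its answers match.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training examples: inputs and the answers the network should learn (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny 2 -> 8 -> 1 network: weights start random, biases start at zero.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The network is never told the rule; it just nudges its weights to shrink
# the error on the examples (plain gradient descent with backpropagation).
for _ in range(20000):
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    grad_out = (output - y) * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hidden
    b1 -= 0.5 * grad_hidden.sum(axis=0)

print(np.round(output, 2))  # should end up near [[0], [1], [1], [0]]
```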

Perfect Harmony

 This isn’t to say that artificial neural networks are better than conventional computers. Each system has its own applications. Need some equations solved? Algorithmic computer to the rescue. Need to quickly and accurately detect lung cancer? Neural networks can do that. They can even work together: algorithmic computers are often used to “supervise” neural networks.

Watch And Learn: Our Favorite Content About Neural Networks

Computers That Think Like Humans

Inside A Neural Network

Get deep into the nitty gritty of how a neural network operates.

Computers Might Never Be Able To Solve These Three Problems


In computer science, an undecidable problem is one that calls for a yes-or-no answer but that no computer, however powerful, can reliably solve for every possible input. Three famous examples are the halting problem, Kolmogorov complexity, and the Wang tile problem. The halting problem asks whether a computer can determine, for any program, if that program will ever finish running. Kolmogorov complexity deals with compression: no program can compute the shortest possible description of an arbitrary file, which makes perfect compression impossible. Wang tiles are square tiles with a color on each side. Placing them next to each other so that the colors of touching sides always match, over an infinite plane, is called “tiling the plane,” and no computer can predict, for every possible set of Wang tiles, whether that set will tile the plane. Dive deeper into these computer conundrums in the video below.
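To see why the halting problem can’t be solved, it helps to write out the classic self-referential argument as deliberately self-defeating code. The sketch below is the standard diagonalization trick, not something from the original article, and the function names halts and paradox are illustrative:

```python
def halts(program, argument):
    """Hypothetical oracle: returns True if program(argument) ever finishes.

    No such general-purpose checker can exist; that is the whole point.
    """
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the oracle predicts about a program
    # being fed its own source.
    if halts(program, program):
        while True:        # the oracle said "it halts," so loop forever
            pass
    return "done"          # the oracle said "it loops forever," so halt

# Now ask: does paradox(paradox) halt?
# If halts() answers yes, paradox loops forever; if it answers no, paradox
# halts immediately. Either way the oracle is wrong, so no program can
# correctly answer the halting question for every possible program.
```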

3 Problems Computers Will Never Solve

Computers can’t do everything. Here are three problems that they’ll never conquer.

  1. The halting problem states that no computer can always determine whether a program will run forever or eventually stop. (00:32)
  2. Because of Kolmogorov complexity, no program can perfectly compress any given file. (01:23)
  3. There is no method that can take any given set of Wang tiles and tell you whether or not it will tile the plane. (02:05)

The Philosophy Of Artificial Intelligence

Computers may not be able to solve some problems, but they are on the cusp of having lifelike AI. Explore the ethics of this new frontier.

How Quantum Technology Is Making Computers Millions Of Times More Powerful


When the first digital computer was built in the 1940s, it revolutionized the world of data calculation. When the first programming language was introduced in the 1950s, it transformed digital computers from impractical behemoths to real-world tools. And when the keyboard and mouse were added in the 1960s, it brought computers out of industry facilities and into people’s homes. There’s a technology that has the potential to change the world in an even bigger way than any of those breakthroughs. Welcome to the future of quantum computing.

 

1s and 0s (And Everything In Between)

Every computer you’ve ever encountered works on the principles of a Turing machine: it manipulates little pieces of information, called bits, that exist as either a 0 or a 1, a system known as binary. The fundamental difference in a quantum computer is that it’s not limited to those two options. Its “quantum bits,” or qubits, can exist as 0, 1, or a superposition of 0 and 1—that is, both 0 and 1 and all points in between. It’s only once you measure them that they “decide” on a value. That’s what’s so groundbreaking about quantum computing: Conventional computers can only work on one computation at a time; the fastest just have ways of making multiple components work on separate tasks simultaneously. But the magic of superposition gives quantum computers the ability to work on a million computations at once. With that kind of power, just imagine what humanity could accomplish!
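To make the qubit idea a bit more tangible, here is a minimal sketch (purely illustrative, not anything from the article) that represents one qubit as a pair of complex amplitudes in NumPy and simulates a measurement:

```python
import numpy as np

# One qubit = two complex amplitudes, one for the 0 outcome and one for 1.
# This is an equal superposition: the "both at once" state.
qubit = np.array([1 / np.sqrt(2), 1 / np.sqrt(2)], dtype=complex)

# The chance of each measurement outcome is the squared size of its amplitude.
probabilities = np.abs(qubit) ** 2
print(probabilities)                      # [0.5 0.5]

# Measuring forces the qubit to "decide": it collapses to 0 or 1 at random.
rng = np.random.default_rng()
print("Measured:", rng.choice([0, 1], p=probabilities))
```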

But that’s not all that makes quantum computing so impressive—there’s also the phenomenon of entanglement. Qubits don’t exist in a vacuum. Generally, systems of multiple qubits are entangled, so that they each take on the properties of the others. Take an entangled system of two qubits, for example. Once you measure one qubit, it “chooses” one value. But because of its relationship, or correlation, to the other qubit, that value instantly tells you the value of the other qubit—you don’t even need to measure it. When you add more qubits to the system, those correlations get more complicated. According to Plus magazine, “As you increase the number of qubits, the number of those correlations grows exponentially: for n qubits there are 2^n correlations. This number quickly explodes: to describe a system of 300 qubits you’d already need more numbers than there are atoms in the visible Universe.” But that’s just the point—because those numbers are beyond what we could ever record with a conventional computer, the hope is that quantum computers could crunch unfathomably large amounts of information that conventional computers could only dream of.
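Here is a tiny illustration of why those correlations get out of hand so quickly: a two-qubit entangled (Bell) state needs four amplitudes, and in general n qubits need 2^n of them. The particular state and qubit counts below are illustrative assumptions, not figures from the article:

```python
import numpy as np

# A two-qubit Bell state: equal amplitudes for |00> and |11>, nothing else.
# Measure either qubit and the other one's value is instantly fixed too.
bell = np.zeros(4, dtype=complex)
bell[0b00] = 1 / np.sqrt(2)   # amplitude for both qubits reading 0
bell[0b11] = 1 / np.sqrt(2)   # amplitude for both qubits reading 1

# Describing n qubits classically takes 2**n amplitudes.
for n in (2, 10, 50, 300):
    print(f"{n} qubits -> {2**n:.3e} numbers to keep track of")
# At n = 300 that is roughly 2e90, far more than the ~1e80 atoms
# in the visible universe.
```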

D-Wave 2000Q quantum computer

The First Steps Of Quantum Computing

In the future, quantum computers could revolutionize everything from human genomics to artificial intelligence—and over just a few decades, the technology has already gotten to the point where that’s a very real possibility. In 1998, researchers successfully analyzed information from a single qubit, and in 2000, scientists at Los Alamos National Laboratory unveiled the first 7-qubit quantum computer. Less than two decades later, D-Wave’s 1,000-qubit quantum computers are being used by the likes of Google, NASA, and Lockheed Martin, and a 2,000-qubit quantum computer is being unveiled. Feel like replacing your old laptop? That quantum computer will run you a cool $15 million—a small price to pay for a millionfold improvement.

Watch And Learn: Our Favorite Content About The Future Of Computers

Quantum Computers Explained

  1. Transistors can either block or open the way for bits of information to pass. (00:54)
  2. Four classical bits can be in only one of 16 different configurations at a time; four qubits can be in all 16 combinations at once. (03:44)
  3. Quantum computers could better simulate the quantum world, possibly leading to insights in medicine and other fields. (06:12)

Source: curiosity.com

Meet The Muon, The Electron’s Short-Lived Big Brother


You’ve heard of electrons—they’re the negatively charged subatomic particles that swarm about the nucleus of every atom. But if you take an electron, blow it up to more than 200 times its mass and make it blink out of existence faster than a bullet can leave a gun, you’ve got a muon (pronounced not like a cow but like a kitten: myoo-on). Why is such a heavy, short-lived particle important? Scientists aren’t sure, but they have some fascinating hunches.

 

The Second Elementary Particle

By World War I, scientists knew all matter was made up of atoms, which in turn were made up of a nucleus of protons and neutrons surrounded by electrons. By the 1930s, scientists believed those particles, plus photons, neutrinos, and the antimatter version of electrons known as positrons, were the whole of the fundamental particles that made up the universe. But there was a problem. (Isn’t there always?)

Scientists who studied cosmic rays—the showers of high-energy particles that rain down on our atmosphere from exploding stars and black holes—couldn’t explain what they were made of with the particles they had. Instead of being stopped by the lead blocks placed in their path the way the known particles would be, some of the particles just passed right through. Finally, a Caltech physicist named Carl Anderson was able to use the same methods he had used for his Nobel Prize-winning discovery of antimatter to get to the truth: the penetrating particles were a different type of particle altogether, one that was like an electron but heavier. That made it the second elementary particle (a particle that can’t be broken down any further) ever discovered. After some time being called a “mesotron,” the new particle was named the muon. Still, what the heck was it? Nobel laureate Isidor Isaac Rabi echoed many people’s sentiments when he said of the muon’s discovery, “Who ordered that?”

Elementary particles in the Standard Model

The Modern-Day Muon

Today, there are 16 elementary particles in the Standard Model of physics, and the muon is just one of them (you can check out the others in the diagram above). We know a lot more about it today—its hefty mass to eight decimal places, its ridiculously short half-life to the picosecond—but a bit of Rabi’s bafflement still lingers in our understanding of the muon. Still, scientists are hopeful. “The muon will have the last laugh,” professor Mark Lancaster told Symmetry Magazine. “There’s still a lot we don’t know about fundamental interactions and the subatomic world, and we think that the muon might have the answers.”

It might seem difficult to study a particle that decays millions of times faster than the blink of an eye, but physicists have a trick up their sleeve: particle accelerators. When you accelerate something close to the speed of light, it lives longer than it would otherwise (thanks to Einstein’s special theory of relativity). Studying muons this way opens up a world of possible answers to important questions: Why are there so many particles? Are there more subatomic forces we don’t know about? We don’t know if muons are the keys to these mysteries, but like we said: scientists have a hunch.
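For a rough sense of the numbers, here is the time-dilation arithmetic, a sketch using the muon’s roughly 2.2-microsecond lifetime at rest; the speed chosen is an illustrative assumption, not a figure from the article:

```python
import math

MUON_LIFETIME = 2.2e-6   # mean lifetime at rest, seconds (about 2.2 microseconds)

def lorentz_factor(speed_fraction_of_c: float) -> float:
    """Time-dilation factor gamma = 1 / sqrt(1 - v^2 / c^2)."""
    return 1.0 / math.sqrt(1.0 - speed_fraction_of_c ** 2)

v = 0.9994   # an illustrative speed, as a fraction of the speed of light
gamma = lorentz_factor(v)
print(f"gamma at {v}c: about {gamma:.0f}")
print(f"lab-frame lifetime: about {gamma * MUON_LIFETIME * 1e6:.0f} microseconds")
# Roughly 29 times longer than at rest, which buys physicists enough time
# to steer muons around a storage ring and study them before they decay.
```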

Watch And Learn: Our Favorite Videos About Particle Physics

The Standard Model Of Particle Physics

Confused about the elementary particles? This video has got your back.

The Double-Slit Experiment Cracked Reality Wide Open


The double-slit experiment seems simple enough: cut two slits in a sheet of metal and send light through them, first as a constant wave, then in individual particles. What happens, though, is anything but simple. In fact, it’s what started science down the bizarre road of quantum mechanics.

You Got Particles In My Waves

In the early 1800s, the majority of scientists believed that light was made up of particles, not waves. English scientist Thomas Young had a hunch that the particle theory wasn’t the end of the story, and set out to prove that light was a wave. He knew that waves interacted in predictable ways, and if he could demonstrate those interactions with light, he would have proven that light was indeed a wave. So he set up an experiment: he cut two slits in a sheet of metal and shone light through them onto a screen.

If light was indeed made of particles, the particles that hit the sheet would bounce off and those that passed through the slits would create the image of two slits on the screen, sort of like spraying paint on a stencil. But if light was a wave, it would do something very different: once they passed through the slits, the light waves would spread out and interact with one another. Where the waves met crest-to-crest, they’d strengthen each other and leave a brighter spot on the screen. Where they met crest-to-trough, they would cancel each other out, leaving a dark spot on the screen. That would produce what’s called an “interference pattern” of one very bright slit shape surrounded by “echoes” of gradually darker slit shapes on either side. Sure enough, that’s what happened. Light traveled in waves. All’s well that ends well, right?
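The fringe pattern follows a simple formula: the brightness at a viewing angle goes as cos^2(pi * d * sin(angle) / wavelength), where d is the slit separation. Here is a small sketch that prints the pattern as text; the specific wavelength, slit spacing, and angles are illustrative assumptions, not values from the article:

```python
import numpy as np

lam = 500e-9    # wavelength of the light, m (green-ish light)
d = 50e-6       # distance between the two slits, m
angles = np.linspace(-0.02, 0.02, 17)   # viewing angles, radians

# Brightness on the screen: cos^2(pi * d * sin(angle) / wavelength)
intensity = np.cos(np.pi * d * np.sin(angles) / lam) ** 2

for angle, i in zip(angles, intensity):
    print(f"{angle:+.4f} rad |{'#' * int(round(i * 40))}")
# Bright fringes (crest meets crest) alternate with dark gaps (crest meets
# trough), instead of two plain bright bands behind the slits.
```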

A light wave passing through two slits interacts with itself to create an interference pattern on a screen.

Wait, That Can’t Be Right

Around the turn of the 20th century, a few scientists began to refine this idea. Max Planck suggested that light and other types of radiation come in discrete amounts—it’s “quantized”—and Albert Einstein proposed the idea of the photon, a “quantum” of light that behaves like a particle. In effect, light was both a particle and a wave. So back to the double-slit experiment: remember when we said that if light was a particle, it would create a sort of spray-paint stencil pattern instead of an interference pattern? By using a special tool, you can actually send light particles through the slits one by one. But when scientists did this, something strange happened.

The interference pattern still showed up.

This suggests something very, very weird is going on: the photons seem to “know” where they would go if they were in a wave. It’s as if a theater audience showed up without seat assignments, but each person still knew the exact seat to choose in order to fill the theater correctly. As Popular Mechanics puts it, this means that “all the possible paths of these particles can interfere with each other, even though only one of the possible paths actually happens.” All realities exist at once (a concept known as superposition) until the final result occurs.

Weirder still, when scientists placed detectors at each slit to determine which slit each photon was passing through, the interference pattern disappeared. That suggests that the very act of observing the photons “collapses” those many realities into one. Mind-blowing, right? It is for scientists too, which is why quantum mechanics is one of the most hotly debated areas of modern science.

Watch And Learn: The Most Mind-Melting Videos about Quantum Physics

The Quantum Experiment That Broke Reality

It changed science forever.

Source: curiosity.com

According To Quantum Mechanics, Reality Might Not Exist Without An Observer


If a tree falls in the forest and there’s no one around to hear it, does it make a sound? The obvious answer is yes—a tree falling makes a sound whether or not we hear it—but certain experts in quantum mechanics argue that without an observer, all possible realities exist. That means that the tree both falls and doesn’t fall, makes a sound and is silent, and all other possibilities therein. This was the crux of the debate between Niels Bohr and Albert Einstein. Learn more about it in the video below.

Quantum Entanglement And The Bohr-Einstein Debate

Does reality exist when we’re not watching?

The Double Slit Experiment

Learn about one of the most famous experiments in quantum physics.


An Illustrated Lesson In Quantum Entanglement

Delve into this heavy topic with some light animation.
