Computing after Moore’s Law


Fifty years ago this month, Gordon Moore published a historic paper with an amusingly casual title: “Cramming More Components onto Integrated Circuits.” The paper was Moore’s first articulation of a principle that, after a little revision, was elevated to a law: every two years, the number of transistors on a computer chip will double.
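The arithmetic behind the law is simple compound doubling, and it is worth seeing how fast it compounds. Below is a back-of-the-envelope sketch; the 1965 starting count of 64 components is an illustrative assumption (roughly the top of Moore’s original chart), not a figure from Pavlus’s article.

```python
# Moore's law as compound doubling: counts grow by a factor of
# 2 ** (years / 2). The 1965 starting count is an assumption.
start_year, start_count = 1965, 64

for year in range(start_year, 2016, 10):
    projected = start_count * 2 ** ((year - start_year) / 2)
    print(f"{year}: ~{projected:,.0f} transistors per chip")
```

Five decades of doubling carries a chip from dozens of components to billions, which is why the prospect of the doubling stopping matters so much.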

As anyone with even a casual interest in computing knows, Moore’s law is responsible for the information age. “Integrated circuits make computers work,” writes John Pavlus in “The Search for a New Machine” in the May Scientific American, “but Moore’s law makes computers evolve.” People have been predicting the end of Moore’s law for decades, and engineers have always come up with a way to keep the pace of progress alive. But there is reason to believe they will soon run up against insurmountable obstacles. “Since 2000 chip engineers faced with these obstacles have been developing clever workarounds,” Pavlus writes, “but these stopgaps will not change the fact that silicon scaling has less than a decade left to live.”

Faced with this deadline, chip manufacturers are investing billions to study and develop new computing technologies. In his article, Pavlus takes us on a tour of this research-and-development frenzy. Although it’s impossible to know which technology will supplant silicon (and there’s good reason to believe it will be a combination of technologies rather than any one breakthrough), we can take a look at the contenders. Here’s a quick survey.

Graphene
One of the more radical moves a manufacturer of silicon computer chips could make would be to ditch silicon altogether. It’s not likely to happen soon, but last year IBM did announce that it was spending $3 billion to look for alternatives. The most obvious candidate is (what else?) graphene: single-atom-thick sheets of carbon. “Like silicon,” Pavlus writes, “graphene has electronically useful properties that remain stable under a wide range of temperatures. Even better, electrons zoom through it at relativistic speeds. And most crucially, it scales—at least in the laboratory. Graphene transistors have been built that can operate hundreds or even thousands of times faster than the top-performing silicon devices, at reasonable power density, even below the five-nanometer threshold in which silicon goes quantum.” A significant problem, however, is that graphene has no band gap, the quantum property that makes it possible to switch a transistor from on to off; without one, a graphene channel can be throttled but never fully shut off.
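The missing band gap has a compact textbook statement, offered here as background rather than anything from Pavlus’s article: near its so-called Dirac points, graphene’s conduction and valence bands meet in a linear, photon-like dispersion, which is also why its electrons behave “relativistically”:

\[
E_{\pm}(k) = \pm\,\hbar\, v_F\, |k|
\qquad\Longrightarrow\qquad
E_g = \min_k \left( E_{+} - E_{-} \right) = 0,
\]

with Fermi velocity \(v_F \approx 10^6\) m/s, roughly c/300. Because the two bands touch, there is no forbidden energy range for a gate voltage to exploit.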

Carbon Nanotubes
Roll a single-atom sheet of carbon into a cylinder and the situation improves: carbon nanotubes develop a band gap and, along with it, some semiconducting properties. But Pavlus found that even the researchers charged with developing carbon nanotube–based computing had their doubts. “Carbon nanotubes are delicate structures,” he writes. “If a nanotube’s diameter or chirality—the angle at which its carbon atoms are ‘rolled’—varies by even a tiny amount, its band gap may vanish, rendering it useless as a digital circuit element. Engineers must also be able to place nanotubes by the billions into neat rows just a few nanometers apart, using the same technology that silicon fabs rely on now.”
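That sensitivity follows from a well-known tight-binding rule of thumb, which is easy to state in code. The toy classifier below is background, not a tool from the article: a tube whose roll-up indices (n, m) satisfy (n - m) divisible by 3 is metallic, with no band gap at all.

```python
# Toy classifier for nanotube chirality, using the standard
# tight-binding rule: an (n, m) tube is metallic when (n - m)
# is divisible by 3, and semiconducting otherwise.

def classify(n: int, m: int) -> str:
    return "metallic (no band gap)" if (n - m) % 3 == 0 else "semiconducting"

# Nearly identical roll-ups can have opposite electronic characters:
for n, m in [(10, 10), (10, 9), (12, 0), (13, 0)]:
    print(f"({n},{m}): {classify(n, m)}")
```

A (10, 10) tube conducts like a metal while the barely different (10, 9) tube is a semiconductor, which is why a tiny variation in how the sheet rolls up can render a device useless.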

Memristors
Hewlett-Packard is developing chips based on an entirely new type of electronic component: the memristor. Predicted in 1971 but not built until 2008, memristors (the term is a portmanteau of “memory” and “resistor”) possess the strange ability to “remember” how much current has previously flowed through them. As Pavlus explains, memristors make it possible to combine storage and random-access memory in a single component. “The common metaphor of the CPU as a computer’s ‘brain’ would become more accurate with memristors instead of transistors because the former actually work more like neurons—they transmit and encode information as well as store it,” he writes.
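To see what “remembering” current means in practice, here is a toy simulation of the linear-drift model that HP’s researchers published alongside the 2008 device; every parameter value is an illustrative assumption, not a spec for any real HP part.

```python
# Toy linear-drift memristor (after Strukov et al., 2008).
# All numbers below are illustrative assumptions.
R_ON, R_OFF = 100.0, 16_000.0  # resistance bounds, ohms
D = 10e-9                      # film thickness, meters
MU_V = 1e-14                   # dopant mobility, m^2/(V*s)
DT = 0.01                      # timestep, seconds

w = 0.1 * D  # width of the doped region: the state the device "remembers"

def memristance(w: float) -> float:
    # The device behaves as doped (R_ON) and undoped (R_OFF)
    # regions in series; their boundary sits at w.
    x = w / D
    return R_ON * x + R_OFF * (1.0 - x)

# Hold a steady 1 V across the device: the passing charge widens the
# doped region and lowers the resistance -- and the new resistance
# persists after the voltage is removed.
for step in range(5):
    i = 1.0 / memristance(w)  # Ohm's law
    w = min(max(w + MU_V * (R_ON / D) * i * DT, 0.0), D)
    print(f"step {step}: M = {memristance(w):,.1f} ohms")
```

Because the device’s state is simply where that internal boundary sits, the same cell that conducts current also records it, which is the storage-plus-memory combination Pavlus describes.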

Cognitive Computers
To build chips “at least as ‘smart’ [as a] housefly,” researchers in IBM’s cognitive computing group are exploring processors that ditch the calculator-like von Neumann architecture. Instead, as Pavlus explains, they “mimic cortical columns in the mammalian brain, which process, transmit and store information in the same structure, with no bus bottlenecking the connection.” The result is IBM’s TrueNorth chip, in which five billion transistors model a million neurons linked by 256 million synaptic connections. “What that arrangement buys,” Pavlus writes, “is real-time pattern-matching performance on the energy budget of a laser pointer.”
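For a feel for the non–von Neumann style this implies, here is a toy leaky integrate-and-fire neuron, the general kind of spiking model that neuromorphic chips implement in hardware. It is a classroom sketch with made-up parameters, not IBM’s TrueNorth neuron circuit.

```python
# Toy leaky integrate-and-fire neuron: state (the membrane potential)
# and computation live in the same place, with no memory bus between
# them. LEAK and THRESHOLD are made-up illustrative values.
LEAK, THRESHOLD = 0.9, 1.0

def run(spike_trains, weights, ticks):
    """spike_trains[i][t] is 1 if input synapse i spikes at tick t."""
    v, fired = 0.0, []
    for t in range(ticks):
        # Leak a little, then integrate this tick's weighted inputs.
        v = LEAK * v + sum(w * s[t] for w, s in zip(weights, spike_trains))
        if v >= THRESHOLD:
            fired.append(t)  # emit a spike...
            v = 0.0          # ...and reset the membrane
    return fired

# One excitatory and one weakly inhibitory input synapse:
inputs = [[1, 1, 0, 1, 1, 1], [0, 1, 0, 0, 1, 0]]
print(run(inputs, weights=[0.6, -0.2], ticks=6))  # fires at tick 3
```

Scale that idea up to a million hardware neurons exchanging spikes directly, with no shared bus to cross, and you get a sense of where TrueNorth’s laser-pointer power budget comes from.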