Computing after Moore’s Law


Fifty years ago this month, Gordon Moore published a historic paper with an amusingly casual title: “Cramming More Components onto Integrated Circuits.” The document was Moore’s first articulation of a principle that, after a little revision, was elevated to a law: every two years the number of transistors on a computer chip will double.

As anyone with even a casual interest in computing knows, Moore’s law is responsible for the information age. “Integrated circuits make computers work,” writes John Pavlus in “The Search for a New Machine” in the May Scientific American, “but Moore’s law makes computers evolve.” People have been predicting the end of Moore’s law for decades, and engineers have always come up with a way to keep the pace of progress alive. But there is reason to believe those engineers will soon run up against insurmountable obstacles. “Since 2000 chip engineers faced with these obstacles have been developing clever workarounds,” Pavlus writes, “but these stopgaps will not change the fact that silicon scaling has less than a decade left to live.”

Faced with this deadline, chip manufacturers are investing billions to study and develop new computing technologies. In his article Pavlus takes us on a tour of this research and development frenzy. Although it’s impossible to know which technology will supplant silicon—and there’s good reason to believe it will be a combination of technologies rather than any one breakthrough—we can take a look at the contenders. Here’s a quick survey.

Graphene
One of the more radical moves a manufacturer of silicon computer chips could make would be to ditch silicon altogether. It’s not likely to happen soon but last year IBM did announce that it was spending $3 billion to look for alternatives. The most obvious candidate is—what else?—graphene, single-atom sheets of carbon. “Like silicon,” Pavlus writes, “graphene has electronically useful properties that remain stable under a wide range of temperatures. Even better, electrons zoom through it at relativistic speeds. And most crucially, it scales—at least in the laboratory. Graphene transistors have been built that can operate hundreds or even thousands of times faster than the top-performing silicon devices, at reasonable power density, even below the five-nanometer threshold in which silicon goes quantum.” A significant problem, however, is that graphene doesn’t have a band gap—the quantum property that makes it possible to turn a transistor from on to off.

Carbon Nanotubes
Roll a single-atom sheet of carbon into a cylinder and the situation improves: carbon nanotubes develop a band gap and, along with it, some semiconducting properties. But Pavlus found that even the researchers charged with developing carbon nanotube–based computing had their doubts. “Carbon nanotubes are delicate structures,” he writes. “If a nanotube’s diameter or chirality—the angle at which its carbon atoms are ‘rolled’—varies by even a tiny amount, its band gap may vanish, rendering it useless as a digital circuit element. Engineers must also be able to place nanotubes by the billions into neat rows just a few nanometers apart, using the same technology that silicon fabs rely on now.”

Memristors
Hewlett-Packard is developing chips based on an entirely new type of electronic component: the memristor. Predicted in 1971 but only developed in 2008, memristors—the term is a portmanteau combining “memory” and “resistor”—possess the strange ability to “remember” how much current previously flowed through them. As Pavlus explains, memristors make it possible to combine storage and random-access memory. “The common metaphor of the CPU as a computer’s ‘brain’ would become more accurate with memristors instead of transistors because the former actually work more like neurons—they transmit and encode information as well as store it,” he writes.
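Pavlus keeps the discussion qualitative, but the “remembering” behaviour can be illustrated with the simple linear ion-drift model that HP’s researchers used to describe their 2008 device. The sketch below is only illustrative: the class name, parameter values and drive currents are invented round numbers, not figures for any real part.

```python
class Memristor:
    """Toy linear ion-drift memristor (illustrative values, not a real device)."""
    R_ON, R_OFF = 100.0, 16_000.0   # ohms: fully doped vs. undoped resistance
    D = 10e-9                        # device thickness (m)
    MU_V = 1e-14                     # dopant mobility (m^2 / (V*s))

    def __init__(self, w0=1e-9):
        self.w = w0                  # width of the doped region: the "memory"

    def resistance(self):
        x = self.w / self.D
        return self.R_ON * x + self.R_OFF * (1.0 - x)

    def apply_current(self, i, dt):
        # The doped-region boundary drifts in proportion to the charge that
        # flows, which is how the device "remembers" past current.
        self.w += self.MU_V * (self.R_ON / self.D) * i * dt
        self.w = min(max(self.w, 0.0), self.D)   # clamp to physical bounds

m = Memristor()
print(f"before write: {m.resistance():.0f} ohms")
for _ in range(500):
    m.apply_current(1e-3, 1e-4)      # 1 mA write pulse lowers the resistance
print(f"after write:  {m.resistance():.0f} ohms")
for _ in range(500):
    m.apply_current(0.0, 1e-4)       # no current: the state, and hence the
print(f"power off:    {m.resistance():.0f} ohms")   # resistance, persists
```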

Cognitive Computers
To build chips “at least as ‘smart’ [as a] housefly,” researchers in IBM’s cognitive computing group are exploring processors that ditch the calculator-like von Neumann architecture. Instead, as Pavlus explains, they “mimic cortical columns in the mammalian brain, which process, transmit and store information in the same structure, with no bus bottlenecking the connection.” The result is IBM’s TrueNorth chip, in which 5.4 billion transistors model a million neurons linked by 256 million synaptic connections. “What that arrangement buys,” Pavlus writes, “is real-time pattern-matching performance on the energy budget of a laser pointer.”

Million ‘neurons’ on computer chip


[Image: The chips can also be connected together to provide even more computational power]
Scientists have produced a new computer chip that mimics the organisation of the brain, and squeezed in one million computational units called “neurons”.

They describe it as a supercomputer the size of a postage stamp.

Each neuron on the chip connects to 256 others, and together they can pick out the key features in a visual scene in real time, using very little power.

The design is the result of a long-running collaboration, led by IBM, and is published in the journal Science.

“The cumulative total is over 200 person-years of work,” said Dr Dharmendra Modha, the publication’s senior author.

He told BBC News the processor was “a new machine for a new era”. But it will take some time for the chip, dubbed TrueNorth, to be commercially useful.

Next generation

This is partly because programs need to be written from scratch to run on this type of chip, rather than on the traditional architecture, which was conceived in the 1940s and still powers nearly all modern computers.

That design, where the processors and memory are separate, is a natural match for sequential, mathematical operations.

However, the heavily interconnected structure of biologically-inspired, “neuromorphic” systems like TrueNorth is said to be a much more efficient way of handling a lot of data at the same time.

“Our chip integrates computation, communication and memory very closely,” Dr Modha said.

Instead of binary ones and zeros, the units of computation here are spikes. When its inputs are active enough, one of TrueNorth’s “neurons” generates a spike and sends it across the chip to other neurons, taking them closer to their own threshold.
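That description amounts to a spiking, integrate-and-fire network. The toy sketch below illustrates the general idea in Python; it is not IBM’s programming model, and every number in it (network size, threshold, leak, weights) is made up for demonstration.

```python
import random

# "Neurons" accumulate input; when one crosses its threshold it emits a
# spike that nudges the neurons it connects to. A generic leaky
# integrate-and-fire sketch, not TrueNorth's actual model.
N = 16            # number of toy neurons
THRESHOLD = 1.0   # potential at which a neuron fires
LEAK = 0.9        # fraction of potential retained each tick

random.seed(1)
# Sparse random connections; a weight of 0.0 means "not connected".
weights = [[random.choice([0.0, 0.0, 0.3]) for _ in range(N)] for _ in range(N)]
potential = [0.0] * N

def tick(external):
    """Advance the network one time step; return indices of neurons that spiked."""
    global potential
    fired = [i for i, v in enumerate(potential) if v >= THRESHOLD]
    for i in fired:
        potential[i] = 0.0                              # reset after a spike
    potential = [
        potential[j] * LEAK                             # leak
        + sum(weights[i][j] for i in fired)             # spikes from neighbours
        + external[j]                                   # outside stimulus
        for j in range(N)
    ]
    return fired

for t in range(10):
    stimulus = [0.4 if i < 4 else 0.0 for i in range(N)]  # drive a few neurons
    print(f"t={t}: spikes at {tick(stimulus)}")
```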

Software has to be written completely differently for these spiking-network systems.

[Image: One of the envisaged applications is a robot for remotely searching dangerous environments]

“It will be interesting to see those programs develop – but don’t hold your breath,” commented Sophie Wilson, an eminent computer engineer based in Cambridge.

Ms Wilson, a fellow of both the Royal Academy of Engineering and the Royal Society, can definitely see a role for this next generation of computing strategies.

“It’s clear that conventional scalar processing is getting very tricky for some of these tasks,” she told the BBC. “Google Images, for example, does a marvellous job of recognising pictures of cats – but it is using large arrays of computers to do that.”

Grid after grid

The building blocks for the TrueNorth chip are “neurosynaptic cores” of 256 neurons each, which IBM launched in 2011.

Dr Modha and his team managed to engineer an interconnected 64-by-64 grid of these cores on to a single chip, delivering over one million neurons in total.

Because each neuron is connected to 256 others, there are more than 256 million connections or “synapses”.
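The headline figures follow directly from that grid arithmetic; here is a quick back-of-the-envelope check, using only the numbers already quoted in the article:

```python
# Back-of-the-envelope check of the article's figures.
cores = 64 * 64            # 4,096 neurosynaptic cores in a 64-by-64 grid
neurons = cores * 256      # 1,048,576 -> "over one million neurons in total"
synapses = neurons * 256   # 268,435,456 -> "more than 256 million connections"
print(f"cores={cores:,} neurons={neurons:,} synapses={synapses:,}")
```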

This complexity is impressive for a man-made device just 3cm across, but it still pales in comparison with the organ it emulates: biological neurons, packed inside the brain, each send and receive signals across something in the order of 10,000 connections.

The chip, Dr Modha is quick to point out, is “endlessly scalable”. Multiple units can be plugged together to form another, still more powerful assembly.

“This isn’t a 10-15% improvement,” he said. “You’re talking about orders and orders of magnitude.”

[Image: Connectivity diagram. Each of 4,096 “neurosynaptic cores” on the chip contains 256 neurons, which connect to 256 other neurons within and outside that core; the diagram illustrates links between just 64 cores]

To demonstrate TrueNorth’s capabilities, Dr Modha’s team programmed it to do a visual perception party trick.

Within a video filmed from a tower at Stanford University, a single chip analysed the moving images in real time and successfully identified which patches of pixels represented pedestrians, cyclists, cars, buses and trucks.

This is just the sort of task that the brain excels at, while traditional computers struggle.

Expanding horizons

Dr Modha envisages myriad next-generation applications, from glasses that help visually impaired people navigate, to robots for scouring the scene of a disaster.

But some of the gains might be overstated – or perhaps too eagerly anticipated.

Prof Steve Furber is a computer engineer at the University of Manchester who works on a similarly ambitious brain simulation project called SpiNNaker. That initiative uses a more flexible strategy, where the connections between neurons are not hard-wired.

He told BBC News that “time will tell” which strategy succeeds in different applications.

The new IBM chip was most significant, Prof Furber said, because of its sheer degree of interconnectedness. “I see it as continuing their programme of research – but it’s an interesting and aggressive piece of integration,” he said.

“This is another step in a programme, whose end point I suspect even they don’t know at the moment.”

[Image: Glasses to help visually impaired people navigate could also benefit from a “neuromorphic” system for analysing the visual scene]

Ms Wilson also pointed out that TrueNorth’s efficiency, while it might trump a vast supercomputer, is not very far ahead of the latest small devices like smartphones and cameras, which are already engineered to minimise battery usage.

“Cellphone cameras can recognise faces,” she said.

There is also a rival chip made by a company called Movidius, which Ms Wilson explained is not as adaptable (it is designed very specifically to process images) but uses even less power than TrueNorth.

That product, which we might see in devices as soon as next year, has also lifted elements of its computing strategy from the human brain.