Digital camera gives a bug’s-eye view.


Insect-inspired device achieves panoramic view and sharp focus at any distance.

Insects have a wide field of view and are acutely sensitive to motion, as anyone who has tried chasing a housefly knows. Researchers have now created a digital camera that mimics the curved, compound structure of an insect eye. These cameras could be used where wide viewing angles are important and space is at a premium — in advanced surveillance systems, for example, or in unmanned flying vehicles and endoscopes.

Insect eyes are made up of hundreds or even thousands of light-sensing structures called ommatidia. Each contains a lens and a cone that funnels light to a photosensitive organ. The long, thin ommatidia are bunched together to form the hemispherical eye, with each ommatidium pointing in a slightly different direction. This structure gives bugs a wide field of view, with objects in the periphery just as clear as those in the centre of the visual field, and high motion sensitivity. It also allows a large depth of field — objects are in focus whether they’re nearby or at a distance.
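To get a feel for why this layout yields a panoramic, uniformly sharp view, here is a toy sketch in Python (our illustration, not the researchers’ model): each ommatidium is treated as a single sensor aimed along its own direction, so coverage comes from the fan of sensors rather than from one wide lens.

```python
import math

# Toy model (illustrative only): fan N "ommatidia" across a semicircular
# arc, each pointing outward along its own viewing direction. A real
# compound eye does this over a hemisphere; one arc is enough to show
# how a wide field of view emerges from many narrow sensors.

def ommatidium_directions(n: int):
    """Return n unit vectors spread evenly across a 180-degree arc."""
    directions = []
    for i in range(n):
        angle = math.pi * i / (n - 1)  # from 0 to 180 degrees
        directions.append((math.cos(angle), math.sin(angle)))
    return directions

# Each sensor samples its own direction, so an object at the edge of the
# arc is captured exactly the way one straight ahead is.
for x, y in ommatidium_directions(5):
    print(f"({x:+.2f}, {y:+.2f})")
```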

The biggest challenge in mimicking the structure of an insect eye in a camera is that electronics are typically flat and rigid, says John Rogers, a materials scientist at the University of Illinois at Urbana-Champaign. “In biology, everything is curvy,” he says.

The new device, which Rogers and his colleagues describe today in Nature [1], comprises an array of microlenses connected to posts that mimic the light-funnelling cones of ommatidia, layered on top of a flexible array of silicon photodetectors. The lens–post pairs are moulded from a stretchy polymer called an elastomer. A filling of elastomer dyed with carbon black surrounds the structures, preventing light from leaking between them. The complete lens array is about 1 centimetre in diameter.

“The whole thing is stretchy and thin, and we blow it up like a balloon” so that it curves like a compound eye, says Rogers. The current prototype produces black-and-white images only, but Rogers says a colour version could be made with the same design.

This is the first time researchers have made a working compound-eye camera, says Luke Lee, a bioengineer at the University of California, Berkeley, who was not involved with the work. The trick, he says, was building and integrating all the parts of the ommatidia. “Usually people just show one part, the lens or the detector,” says Lee. In 2006, for example, Lee’s group made arrays of artificial ommatidia that had microlenses and light-guiding cones, but no photodetectors [2].

He says that Rogers made the device work by predicting the mechanics of how his designs would stretch before building them — to make sure that the lenses would not be distorted when the device was inflated, for example.

Rogers describes the camera as a “low-end insect eye”. It contains 180 artificial ommatidia, about the same number as in the eyes of a fire ant (Solenopsis fugax) or a bark beetle (Hylastes nigrinus) — insects that don’t see very well. So far the researchers have tested it by taking pictures of simple line drawings (see image).

With the basic designs in place, Rogers says, his team can now increase the resolution of the camera by incorporating more ommatidia. “We’d like to do a dragonfly, with 20,000 ommatidia,” he says, which will require some miniaturization of the components.

Alexander Borst, who builds miniature flying robots at the Max Planck Institute of Neurobiology in Martinsried, Germany, says that he is eager to integrate the camera into his machines. Insects’ wide field of vision helps them to monitor and stabilize their position during flight; robots with artificial compound eyes might be better fliers, he says.

Rogers says that his next project is to go “beyond biology”, by inflating or deflating the camera to adjust its field of view.

Source: Nature


The mind’s eye: How the brain sorts out what you see


Can you tell a snake from a pretzel? Some can’t – and their experiences are revealing how the brain builds up a coherent picture of the world

AFTER her minor stroke, BP started to feel as if her eyes were playing tricks on her. TV shows became confusing: in one film, she was surprised to see a character reel as if punched by an invisible man. Sometimes BP would miss seeing things that were right before her eyes, causing her to bump into furniture or people.

BP’s stroke had damaged a key part of her visual system, giving rise to a rare disorder called simultanagnosia. This meant that she often saw just one object at a time. When looking at her place setting on the dinner table, for example, BP might see just a spoon, with everything else a blur (Brain, vol 114, p 1523).

BP’s problems are just one example of a group of disorders known collectively as visual agnosias, usually caused by some kind of brain damage. Another form results in people having trouble recognising and naming objects, as experienced by the agnosic immortalised in the title of Oliver Sacks’s 1985 best-seller The Man Who Mistook His Wife for a Hat.

Agnosias have become particularly interesting to neuroscientists in the past decade or so, as advances in brain scanning techniques have allowed them to close in on what’s going on in the brain. This gives researchers a unique opportunity to work out how the brain normally makes sense of the world. “Humans are naturally so good at this, it’s difficult to see our inner workings,” says Marlene Behrmann, a psychologist who studies vision at Carnegie Mellon University in Pittsburgh, Pennsylvania. Cases like BP’s are even shedding light on how our unconscious informs our conscious mind. “Agnosias allow us to adopt a reverse-engineering approach and infer how [the brain] would normally work,” says Behrmann.

Although we may not give it much thought, our ability to perceive our world visually is no mean feat; the most sophisticated robots in the world cannot yet match it. From a splash of photons falling on the retina – a 3-centimetre-wide patch of light-sensitive cells – we can discern complex scenes comprising multiple items, some near, some far, some well lit, some shaded, and with many objects partly obscured by others.

The information from the photons hitting a particular spot on the retina is restricted to their wavelength (which we perceive as colour), and their number (which determines brightness). Turning that data into meaningful mental images is a tough challenge, because so many variables are involved. For example, the number of photons bouncing off an object depends both on the brightness of the light source and on how pale or dark the object is. “The information that the visual system receives is very impoverished,” says Behrmann.
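To see why, consider a toy model (our illustration, not from the article): the light reaching the eye from a surface is roughly the product of how strongly the surface is lit and how much light it reflects, so very different scenes can produce identical retinal measurements.

```python
# Toy illustration (not from the article): the same photon count can come
# from very different scenes, so the visual system must infer which one
# it is actually looking at.

def photons_reaching_eye(illumination: float, reflectance: float) -> float:
    """Simplified model: measured intensity = source strength times the
    fraction of light the surface reflects (0 = black, 1 = white)."""
    return illumination * reflectance

# A pale object (80% reflective) under dim light...
dim_pale = photons_reaching_eye(illumination=100.0, reflectance=0.8)

# ...and a dark object (20% reflective) under bright light...
bright_dark = photons_reaching_eye(illumination=400.0, reflectance=0.2)

# ...deliver exactly the same signal to the retina.
print(dim_pale, bright_dark)  # 80.0 80.0: indistinguishable from the data alone
```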

It is in the visual cortex, located at the back of the brain, where much of the processing goes on. When items obscure each other, the brain must work out where one thing ends and another begins, and take a stab at their underlying shapes. It must recognise things from different perspectives: consider the image of a chair viewed from the side compared with from above. Then there’s the challenge of recognising novel objects – a futuristic new chair, for example. “Somehow, almost magically, we derive a meaningful interpretation of complex scenes very rapidly,” says Behrmann. “How we do this is the million-dollar question in vision research.”

So how does the brain work its magic? In the early 20th century, European psychologists used simple experiments on people with normal vision to glean some basic rules that they called the “gestalt principles”. One such rule is similarity: the brain groups elements in an image together if they share a colour, shape or size. And if not all of an object is visible, we mentally fill in the gaps – that’s the “closure principle” (see “Constructing reality”).
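As a rough illustration of the similarity rule, grouping scene elements by a shared property such as colour might look like this (a minimal sketch, not a model used by vision researchers):

```python
from collections import defaultdict

# Hypothetical scene elements, each with a position and a colour.
elements = [
    {"pos": (0, 0), "colour": "red"},
    {"pos": (1, 0), "colour": "blue"},
    {"pos": (2, 0), "colour": "red"},
    {"pos": (3, 0), "colour": "blue"},
]

# Similarity principle: like goes with like, so elements sharing a
# colour are collected into one perceptual group.
groups = defaultdict(list)
for element in elements:
    groups[element["colour"]].append(element["pos"])

print(dict(groups))
# {'red': [(0, 0), (2, 0)], 'blue': [(1, 0), (3, 0)]}
```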

The gestalt principles can only go part of the way to describing visual perception, though. They cover how we separate the different objects in a scene, but they cannot explain how we know what those objects are. How, for example, do we know that a teacup is a teacup whether we see it from above or from the side, in light or in shadow?

It’s here that people with visual agnosias come in handy. Behrmann had previously studied people with integrative agnosia, who have difficulty recognising and naming complex objects as a whole, and instead seem to pay unusual attention to their individual features. One person, for example, mistook a picture of a harmonica for a computer keyboard, presumably thinking the row of air-holes in the mouthpiece were computer keys (Journal of Experimental Psychology: Human Perception and Performance, vol 29, p 19). Others have mistaken a picture of an octopus for a spider, and a pretzel for a snake.