Researchers design first battery-powered invisibility cloak.


Researchers at The University of Texas at Austin have proposed the first design of a cloaking device that uses an external source of energy to significantly broaden its bandwidth of operation.

Andrea Alù, associate professor at the Cockrell School of Engineering, and his team have proposed a design for an active cloak that draws energy from a battery, allowing objects to become undetectable to radio sensors over a greater range of frequencies.

The team’s paper, “Broadening the Cloaking Bandwidth with Non-Foster Metasurfaces,” was published Dec. 3 in Physical Review Letters. Alù, researcher Pai-Yen Chen and postdoctoral research fellow Christos Argyropoulos co-authored the paper. Both Chen and Argyropoulos were at UT Austin at the time this research was conducted. The proposed active cloak will have a number of applications beyond camouflaging, such as improving cellular and radio communications, and biomedical sensing.

Cloaks have so far been realized with so-called passive technology, meaning they are not designed to draw energy from an external source. They are typically based on metamaterials (advanced artificial materials) or metasurfaces (flexible, ultrathin metamaterials) that can suppress the scattering of light that bounces off an object, making the object less visible. When the scattered fields from the cloak and the object interfere, they cancel each other out, and the overall effect is transparency to radio-wave detectors. Such cloaks can suppress detectability by a factor of 100 or more at specific design frequencies. Although the proposed design works for radio waves, active cloaks could one day be designed to make detection by the human eye more difficult.

“Many cloaking designs are good at suppressing the visibility under certain conditions, but they are inherently limited to work for specific colors of light or specific frequencies of operation,” said Alù, David & Doris Lybarger Endowed Faculty Fellow in the Department of Electrical and Computer Engineering. In this paper, on the contrary, “we prove that cloaks can become broadband, pushing this technology far beyond current limits of passive cloaks. I believe that our design helps us understand the fundamental challenges of suppressing the scattering of various objects at multiple wavelengths and shows a realistic path to overcome them.”

The proposed active cloak uses a battery, circuits and amplifiers to boost signals, making it possible to reduce scattering over a greater range of frequencies. Because it covers such a broad frequency range, this design should provide the most broadband and robust cloaking performance to date. Additionally, the proposed active technology can be thinner and less conspicuous than conventional cloaks.

In a related paper, published in Physical Review X in October, Alù and his graduate student Francesco Monticone proved that existing passive cloaking solutions are fundamentally limited in the bandwidth of operation and cannot provide broadband cloaking. When viewed at certain frequencies, passively cloaked objects may indeed become transparent, but if illuminated with white light, which is composed of many colors, they are bound to become more visible with the cloak than without. The October paper proves that all available cloaking techniques based on passive cloaks are constrained by Foster’s theorem, which limits their overall ability to cancel the scattering across a broad frequency spectrum.

In contrast, an active cloak based on active metasurfaces, such as the one designed by Alù’s team, can break Foster’s theorem limitations. The team started with a passive metasurface made from an array of metal square patches and loaded it with properly positioned operational amplifiers that use the energy drawn from a battery to broaden the bandwidth.

https://i0.wp.com/cdn.physorg.com/newman/gfx/news/2013/dthtynh.jpg

“In our case, by introducing these suitable amplifiers along the cloaking surface, we can break the fundamental limits of passive cloaks and realize a ‘non-Foster’ surface reactance that decreases, rather than increases, with frequency, significantly broadening the bandwidth of operation,” Alù said.
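Foster's reactance theorem is the constraint at issue: for any lossless passive circuit, reactance must increase with frequency. A minimal numerical sketch (with illustrative component values, not taken from the paper) contrasts an ordinary capacitor with the idealized "negative capacitor" that an active, amplifier-based circuit can approximate:

```python
import numpy as np

C = 1e-12                           # 1 pF, an arbitrary illustrative capacitance
omega = np.linspace(1e9, 2e9, 5)    # angular frequencies (rad/s)

# Passive capacitor: X(w) = -1/(wC). Foster's theorem guarantees dX/dw > 0
# for any lossless passive one-port.
X_passive = -1.0 / (omega * C)

# Idealized "negative capacitor" (a non-Foster element, realizable only with
# an active circuit): X(w) = -1/(w * (-C)) = +1/(wC), so dX/dw < 0.
X_nonfoster = 1.0 / (omega * C)

print(np.all(np.diff(X_passive) > 0))     # reactance rises with frequency
print(np.all(np.diff(X_nonfoster) < 0))   # reactance falls with frequency
```

A falling reactance is what lets the active surface track, and cancel, an object's scattering over a wide band instead of at one design frequency.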

The researchers are continuing to work both on the theory and design behind their non-Foster active cloak, and they plan to build a prototype.

Alù and his team are working to use active cloaks to improve wireless communications by suppressing the disturbance that neighboring antennas produce on transmitting and receiving antennas. They have also proposed to use these cloaks to improve biomedical sensing, near-field imaging and energy harvesting devices.

Team grows large graphene crystals that have exceptional electrical properties


When it comes to the growth of graphene—an ultrathin, ultrastrong, all-carbon material—it is survival of the fittest, according to researchers at The University of Texas at Austin.

The team used surface oxygen to grow centimeter-size single-crystal graphene on copper. The crystals were about 10,000 times as large as the largest crystals grown only four years ago. Very large single crystals have exceptional electrical properties.

“The game we play is that we want nucleation (the growth of tiny ‘crystal seeds’) to occur, but we also want to harness and control how many of these tiny nuclei there are, and which will grow larger,” said Rodney S. Ruoff, professor in the Cockrell School of Engineering. “Oxygen at the right surface concentration means only a few nuclei grow, and winners can grow into very large crystals.”

The team—led by postdoctoral fellow Yufeng Hao and Ruoff of the Department of Mechanical Engineering and the Materials Science and Engineering Program, along with Luigi Colombo, a material scientist with Texas Instruments—worked for three years on the graphene growth method. The team’s paper, “The Role of Surface Oxygen in the Growth of Large Single-Crystal Graphene on Copper,” is featured on the cover of the Nov. 8, 2013, issue of Science.

One of the world’s strongest materials, graphene is flexible and has high electrical and thermal conductivity, making it a promising material for flexible electronics, solar cells, batteries and high-speed transistors. The team’s understanding of how graphene growth is influenced by differing amounts of surface oxygen is a major step toward producing high-quality graphene films at industrial scale.

The team’s method “is a fundamental breakthrough, which will lead to growth of high-quality and large area graphene film,” said Sanjay Banerjee, who heads the Cockrell School’s South West Academy of Nanoelectronics (SWAN). “By increasing the single-crystal domain sizes, the electronic transport properties will be dramatically improved and lead to new applications in flexible electronics.”

Graphene has always been grown in a polycrystalline form; that is, it is composed of many crystals joined together with irregular chemical bonding at the boundaries between crystals (grain boundaries), something like a patchwork quilt. Large single-crystal graphene is of great interest because the grain boundaries in polycrystalline material have defects, and eliminating such defects makes for a better material.

By controlling the concentration of surface oxygen, the researchers could increase the crystal size from a millimeter to a centimeter. Rather than the smaller, hexagon-shaped crystals grown without it, the right amount of surface oxygen produced much larger single crystals with multibranched edges, similar to a snowflake.

“In the long run it might be possible to achieve meter-length single crystals,” Ruoff said. “This has been possible with other materials, such as silicon and quartz. Even a centimeter crystal size—if the grain boundaries are not too defective—is extremely significant.”

“We can start to think of this material’s potential use in airplanes and in other structural applications—if it proves to be exceptionally strong at length scales like parts of an airplane wing, and so on,” he said.

Another major finding by the team was that the “carrier mobility” of electrons (how fast the electrons move) in graphene films grown in the presence of surface oxygen is exceptionally high. This is important because the speed at which the charge carriers move is important for many electronic devices—the higher the speed, the faster the device can perform.

https://i0.wp.com/cdn.physorg.com/newman/gfx/news/2013/utaustinrese.jpg

Yufeng Hao says he thinks the knowledge gained in this study could prove useful to industry.

“The high quality of the graphene grown by our method will likely be developed further by industry, and that will eventually allow devices to be faster and more efficient,” Hao said.

Single-crystal films can also be used for the evaluation and development of new types of devices that call for a larger scale than could be achieved before, added Colombo.

“At this time, there are no other reported techniques that can provide high quality transferrable films,” Colombo said. “The material we were able to grow will be much more uniform in its properties than a polycrystalline film.”

New invisibility cloak type designed


A new “broadband” invisibility cloak which hides objects over a wide range of frequencies has been devised.

Despite the hype about Harry Potter-style cloaks, our best current designs can only conceal objects at specific wavelengths of light or microwaves.

At other frequencies, invisibility cloaks actually make things more visible, not less, US physicists found.

Their solution is a new ultrathin, electronic system, which they describe in Physical Review Letters.

“Our active cloak is a completely new concept and design, aimed at beating the limits of [current cloaks] and we show that it indeed does,” said Prof Andrea Alu, from the University of Texas at Austin.

“If you want to make an object transparent at all angles and over broad bandwidths, this is a good solution.

“We are looking into realising this technology at the moment, but we are still at the early stages.”

Passive vs Active

While the popular image of an invisibility cloak is the magical robe worn by Harry Potter, there is another kind which is not so far-fetched.

The first working model – which concealed a small copper cylinder by bending microwaves around it – was demonstrated in 2006.

Left: uncloaked sphere. Right: same sphere covered with a plasmonic cloak
The sphere on the right is “cloaked” but actually scatters more radiation than when bare (left)

It was built with a thin shell of metamaterials – artificial composites whose structures allow properties which do not exist in nature.

Cloaking materials could have applications in the military, microscopy, biomedical sensing, and energy harvesting devices.

The trouble with current designs is they only work at limited bandwidths. Even this “perfect” 3D cloak demonstrated last year could only hide objects from microwaves.

At other frequencies the cloak acts as a beacon – making the hidden object more obvious – as Prof Alu and his team have now demonstrated in a new study in Physical Review X.

They looked at three popular types of “passive” cloaks – which do not require electricity – a plasmonic cloak, a mantle cloak, and a transformation-optics cloak.

Other ways to disappear

Image: optical camouflage demonstration, Keio University
  • Optical camouflage technology: A modified background image is projected onto a cloak of retro-reflective material (the kind used to make projector screens); the wearer becomes invisible to anyone standing at the projection source
  • The “mirage effect”: Electric current is passed through submerged carbon nanotubes to create very high local temperatures, this causes light to bounce off them, hiding objects behind
  • Adaptive heat cloaking: A camera records background temperatures, these are displayed by sheets of hexagonal pixels which change temperature very quickly, camouflaging even moving vehicles from heat-sensitive cameras
  • Calcite crystal prism: Calcite crystals send the two polarisations of light in different directions. By gluing prism-shaped crystals together in a specific geometry, polarised light can be directed around small objects, effectively cloaking them

All three types scattered more waves than the bare object they were trying to hide – when tested over the whole range of the electromagnetic spectrum.

“If you suppress scattering in one range, you need to pay the price, with interest, in some other range,” Prof Alu told BBC News.

“For example, you might make a cloak that makes an object invisible to red light. But if you were illuminated by white light (containing all colours) you would actually look bright blue, and therefore stand out more.”

A cloak that allows complete invisibility is “impossible” with current passive designs, the study concluded.

“When you add material around an object to cloak it, you can’t avoid the fact that you are adding matter, and that this matter still responds to electromagnetic waves,” Prof Alu explained.

Instead, he said, a much more promising avenue is “active” cloaking technology – designs which rely on electrical power to make objects “vanish”.

Active cloaks can be thinner and less conspicuous than passive cloaks.

Alu’s team have proposed a new design which uses amplifiers to coat the surface of the object in an electric current.

This ultrathin cloak would hide an object from detection at a frequency range “orders of magnitude broader” than any available passive cloaking technology, they wrote.

Nothing’s perfect

Prof David Smith of Duke University, one of the team who created the first cloak in 2006, said the new design was one of the most detailed he had yet seen.

“It’s an interesting implementation but as presented is probably a bit limited to certain types of objects,” he told BBC News.

“There are limitations even on active materials. It will be interesting to see if it can be experimentally realised.”

Prof Smith points out that even an “imperfect” invisibility cloak might be perfectly sufficient to build useful devices with real-world applications.

For example, a radio-frequency cloak could improve wireless communications – by helping them bypass obstacles and reducing interference from neighbouring antennas.

“To most people, making an object ‘invisible’ means making it transparent to visible wavelengths. And the visible spectrum is a tiny, tiny sliver of the overall electromagnetic spectrum,” he told BBC News.

“So, this finding does not necessarily preclude the Harry Potter cloak, nor does it preclude any other narrow bandwidth application of cloaking.”

Scorpion venom is a painkiller for the grasshopper mouse | Mo Costandi


Researchers have identified the molecular mechanisms that make the grasshopper mouse resistant to scorpion venom.

Grasshopper mouse

A southern grasshopper mouse approaches and prepares to attack an Arizona bark scorpion. Photo: Matthew and Ashlee Rowe.

The bark scorpion is, according to Wikipedia, the most venomous scorpion in North America, wielding an intensely painful – and potentially lethal – sting that stuns and deters snakes, birds and other predators. People unfortunate enough to have experienced the sting say that it produces an immediate burning sensation, followed by prolonged throbbing pain that can last for hours.

But the grasshopper mouse is completely resistant to the bark scorpion’s venom. In fact, it actively preys upon scorpions and other poisonous creatures. As the film clip below shows, it responds to the bark scorpion’s sting by licking its paw for a second or two, before resuming its attack, then killing and eating the scorpion, starting with the stinger and the bulb containing the venom. Researchers have now established exactly why this is – paradoxically, the venom has an analgesic, or pain-killing, effect on the grasshopper mouse.

The animal’s secret lies in two proteins, the sodium channels Nav1.7 and Nav1.8, which are found in a subset of sensory nerve fibres called nociceptors. These cells express numerous other proteins that are sensitive to damaging chemicals, excessive mechanical pressure, and extremes in temperature, and have fibres that extend from just beneath the skin surface into the spinal cord.

The sensor proteins relay these signals to Nav1.7 and Nav1.8, which then change their structure in response, so that their pores, which span the nerve cell membrane, open up, allowing sodium ions to flood into the cell. This causes the nociceptors to generate nervous impulses, which are transmitted along the fibre into the spinal cord. From there, the signals are relayed to second-order sensory neurons, which then carry the signals up into the brain, where they are interpreted as pain.

Ashlee Rowe of the University of Texas in Austin and her colleagues started off by injecting scorpion venom, formaldehyde and salt water into the hind paws of southern grasshopper mice and common house mice, and compared their behavioural responses.

The house mice licked their paws furiously for several minutes after being injected with venom or formaldehyde, but not when they were injected with salt water. By contrast, the grasshopper mice seemed completely oblivious to the venom, and barely licked their paws at all after being injected with it. They found the formaldehyde to be far more irritating, and the venom actually reduced the amount of time they spent licking their paws when the two were injected together.

Next, the researchers isolated sensory neurons from both types of mice and grew them in Petri dishes. They then added scorpion venom to the dishes and used microelectrodes to measure the electrical activity of the cells. This showed that the venom strongly activated cells from the house mice, making them fire with rapid bursts of nervous impulses, but actually prevented cells from the grasshopper mice from firing. Further investigation revealed that the scorpion venom directly binds to, and potently inhibits, Nav1.8 sodium channels from the grasshopper mice, but not the house mice.

Rowe and her colleagues performed a final series of experiments to determine how this happens at the molecular level. They sequenced the Nav1.8 gene from the grasshopper mouse, and compared it to that of the common mouse, to identify multiple DNA sequence variations that confer insensitivity to scorpion venom. All the mutations encode amino acid residues in or around the pore region of the Nav1.8 protein, replacing neutral residues with acidic ones that are attracted to water.

As a result of these tiny structural changes, scorpion venom binds to Nav1.8 and switches it off, perhaps by plugging the pore or making it impermeable to sodium ions in some other way, thus blocking the transmission of pain signals into the spinal cord.

The researchers confirmed the importance of the pore region by using genetic engineering to replace this segment of the common mouse gene with the corresponding segment from the grasshopper mouse gene. This made the resulting protein resistant to the venom, whereas substituting the pore DNA sequence in the grasshopper gene with that from the common mouse gene rendered it highly sensitive to the venom.

The ability to detect pain is critical for survival, as it alerts organisms to potentially life-threatening injuries. Venomous creatures have capitalised on this by evolving neurotoxins that inflict pain by activating nociceptors in one way or another, thus deterring would-be predators from attacking again. The grasshopper mouse inhabits the deserts of North America and Mexico, and probably evolved resistance to venom as a physiological adaptation, which enabled it to eke out an existence in such an extreme environment by feasting on venomous prey.

Previous work has identified Nav1.7 as a key player in pain signalling, and researchers have identified a number of rare mutations in the gene encoding it, which make people either completely or partially insensitive to pain. Drugs that block Nav1.7 activity could therefore be effective pain-killers, and various research groups have been researching and developing such drugs. The new findings identify Nav1.8 as another potential target, and provide another potential route for the development of new analgesic drugs.

Brain decoding: Reading minds.


By scanning blobs of brain activity, scientists may be able to decode people’s thoughts, their dreams and even their intentions.

Jack Gallant perches on the edge of a swivel chair in his lab at the University of California, Berkeley, fixated on the screen of a computer that is trying to decode someone’s thoughts.

On the left-hand side of the screen is a reel of film clips that Gallant showed to a study participant during a brain scan. And on the right side of the screen, the computer program uses only the details of that scan to guess what the participant was watching at the time.

Anne Hathaway’s face appears in a clip from the film Bride Wars, engaged in heated conversation with Kate Hudson. The algorithm confidently labels them with the words ‘woman’ and ‘talk’, in large type. Another clip appears — an underwater scene from a wildlife documentary. The program struggles, and eventually offers ‘whale’ and ‘swim’ in a small, tentative font.

“This is a manatee, but it doesn’t know what that is,” says Gallant, talking about the program as one might a recalcitrant student. They had trained the program, he explains, by showing it patterns of brain activity elicited by a range of images and film clips. His program had encountered large aquatic mammals before, but never a manatee.

Groups around the world are using techniques like these to try to decode brain scans and decipher what people are seeing, hearing and feeling, as well as what they remember or even dream about.

Media reports have suggested that such techniques bring mind-reading “from the realms of fantasy to fact”, and “could influence the way we do just about everything”. The Economist in London even cautioned its readers to “be afraid”, and speculated on how long it will be until scientists promise telepathy through brain scans.

Although companies are starting to pursue brain decoding for a few applications, such as market research and lie detection, scientists are far more interested in using this process to learn about the brain itself. Gallant’s group and others are trying to find out what underlies those different brain patterns and want to work out the codes and algorithms the brain uses to make sense of the world around it. They hope that these techniques can tell them about the basic principles governing brain organization and how it encodes memories, behaviour and emotion (see ‘Decoding for dummies’).

Applying their techniques beyond the encoding of pictures and movies will require a vast leap in complexity. “I don’t do vision because it’s the most interesting part of the brain,” says Gallant. “I do it because it’s the easiest part of the brain. It’s the part of the brain I have a hope of solving before I’m dead.” But in theory, he says, “you can do basically anything with this”.

Beyond blobology

Brain decoding took off about a decade ago, when neuroscientists realized that there was a lot of untapped information in the brain scans they were producing using functional magnetic resonance imaging (fMRI). That technique measures brain activity by identifying areas that are being fed oxygenated blood, which light up as coloured blobs in the scans. To analyse activity patterns, the brain is segmented into little boxes called voxels — the three-dimensional equivalent of pixels — and researchers typically look to see which voxels respond most strongly to a stimulus, such as seeing a face. By discarding data from the voxels that respond weakly, they conclude which areas are processing faces.
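The conventional "blob" analysis described above amounts to thresholding voxel responses. A toy sketch, with an invented random volume standing in for a real scan and an arbitrary threshold of 0.8:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scan: a 4 x 4 x 4 grid of voxels, each value the response strength
# to some stimulus (e.g. seeing a face). Real volumes are far larger.
response = rng.random((4, 4, 4))

# Conventional analysis keeps only the strongest responders and discards
# the rest, concluding that the surviving voxels process the stimulus.
threshold = 0.8
strong = response > threshold
print(f"{strong.sum()} of {response.size} voxels survive the threshold")
```

Decoding, by contrast, throws nothing away: the weak responses discarded here are exactly where the subtler distributed patterns live.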

Decoding techniques interrogate more of the information in the brain scan. Rather than asking which brain regions respond most strongly to faces, they use both strong and weak responses to identify more subtle patterns of activity. Early studies of this sort proved, for example, that objects are encoded not just by one small very active area, but by a much more distributed array.

These recordings are fed into a ‘pattern classifier’, a computer algorithm that learns the patterns associated with each picture or concept. Once the program has seen enough samples, it can start to deduce what the person is looking at or thinking about. This goes beyond mapping blobs in the brain. Further attention to these patterns can take researchers from asking simple ‘where in the brain’ questions to testing hypotheses about the nature of psychological processes — asking questions about the strength and distribution of memories, for example, that have been wrangled over for years. Russell Poldrack, an fMRI specialist at the University of Texas at Austin, says that decoding allows researchers to test existing theories from psychology that predict how people’s brains perform tasks. “There are lots of ways that go beyond blobology,” he says.
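The pattern-classifier idea can be illustrated with a toy example. Nothing below comes from the studies described here: the voxel patterns are synthetic, and a simple nearest-centroid rule stands in for the more sophisticated classifiers (such as support vector machines) that labs actually use.

```python
import numpy as np

rng = np.random.default_rng(42)
n_voxels = 50

# Synthetic "voxel patterns" for two stimulus categories. A real decoder is
# trained on measured fMRI responses; these templates are invented.
template_face = rng.normal(0.0, 1.0, n_voxels)
template_house = rng.normal(0.0, 1.0, n_voxels)

def sample(template, n, noise=0.8):
    """Simulate n noisy scans of a participant viewing one category."""
    return template + rng.normal(0.0, noise, (n, n_voxels))

# Training phase: learn the mean pattern from labelled example scans.
mean_face = sample(template_face, 20).mean(axis=0)
mean_house = sample(template_house, 20).mean(axis=0)

def classify(scan):
    """Nearest-centroid rule: pick the category whose mean pattern is closer."""
    d_face = np.linalg.norm(scan - mean_face)
    d_house = np.linalg.norm(scan - mean_house)
    return "face" if d_face < d_house else "house"

# Test phase: decode fresh scans the classifier has never seen.
correct = sum(classify(s) == "face" for s in sample(template_face, 20)) + \
          sum(classify(s) == "house" for s in sample(template_house, 20))
print(f"decoding accuracy: {correct / 40:.2f}")
```

The classifier never needs to know where "face" information lives in the brain; it only needs the distributed pattern across all voxels, weak responses included.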

In early studies, scientists were able to show that they could get enough information from these patterns to tell what category of object someone was looking at — scissors, bottles and shoes, for example. “We were quite surprised it worked as well as it did,” says Jim Haxby at Dartmouth College in New Hampshire, who led the first decoding study in 2001.

Soon after, two other teams independently used it to confirm fundamental principles of human brain organization. It was known from studies using electrodes implanted into monkey and cat brains that many visual areas react strongly to the orientation of edges, combining them to build pictures of the world. In the human brain, these edge-loving regions are too small to be seen with conventional fMRI techniques. But by applying decoding methods to fMRI data, John-Dylan Haynes and Geraint Rees, both at the time at University College London, and Yukiyasu Kamitani at ATR Computational Neuroscience Laboratories, in Kyoto, Japan, with Frank Tong, now at Vanderbilt University in Nashville, Tennessee, demonstrated in 2005 that pictures of edges also triggered very specific patterns of activity in humans. The researchers showed volunteers lines in various orientations — and the different voxel mosaics told the team which orientation the person was looking at.

Edges became complex pictures in 2008, when Gallant’s team developed a decoder that could identify which of 120 pictures a subject was viewing — a much bigger challenge than inferring what general category an image belongs to, or deciphering edges. They then went a step further, developing a decoder that could produce primitive-looking movies of what the participant was viewing based on brain activity.

From around 2006, researchers have been developing decoders for various tasks: for visual imagery, in which participants imagine a scene; for working memory, where they hold a fact or figure in mind; and for intention, often tested as the decision whether to add or subtract two numbers. The last is a harder problem than decoding the visual system, says Haynes, now at the Bernstein Centre for Computational Neuroscience in Berlin: “There are so many different intentions — how do we categorize them?” Pictures can be grouped by colour or content, but the rules that govern intentions are not as easy to establish.

Gallant’s lab has preliminary indications of just how difficult it will be. Using a first-person, combat-themed video game called Counterstrike, the researchers tried to see if they could decode an intention to go left or right, chase an enemy or fire a gun. They could just about decode an intention to move around; but everything else in the fMRI data was swamped by the signal from participants’ emotions when they were being fired at or killed in the game. These signals — especially death, says Gallant — overrode any fine-grained information about intention.

The same is true for dreams. Kamitani and his team published their attempts at dream decoding in Science earlier this year. They let participants fall asleep in the scanner and then woke them periodically, asking them to recall what they had seen. The team tried first to reconstruct the actual visual information in dreams, but eventually resorted to word categories. Their program was able to predict with 60% accuracy what categories of objects, such as cars, text, men or women, featured in people’s dreams.

The subjective nature of dreaming makes it a challenge to extract further information, says Kamitani. “When I think of my dream contents, I have the feeling I’m seeing something,” he says. But dreams may engage more than just the brain’s visual realm, and involve areas for which it’s harder to build reliable models.

Reverse engineering

Decoding relies on the fact that correlations can be established between brain activity and the outside world. And simply identifying these correlations is sufficient if all you want to do, for example, is use a signal from the brain to command a robotic hand. But Gallant and others want to do more; they want to work back to find out how the brain organizes and stores information in the first place — to crack the complex codes the brain uses.

That won’t be easy, says Gallant. Each brain area takes information from a network of others and combines it, possibly changing the way it is represented. Neuroscientists must work out post hoc what kind of transformations take place at which points. Unlike other engineering projects, the brain was not put together using principles that necessarily make sense to human minds and mathematical models. “We’re not designing the brain — the brain is given to us and we have to figure out how it works,” says Gallant. “We don’t really have any math for modelling these kinds of systems.” Even if there were enough data available about the contents of each brain area, there probably would not be a ready set of equations to describe them, their relationships, and the ways they change over time.

Computational neuroscientist Nikolaus Kriegeskorte at the MRC Cognition and Brain Sciences Unit in Cambridge, UK, says that even understanding how visual information is encoded is tricky — despite the visual system being the best-understood part of the brain (see Nature 502, 156–158; 2013). “Vision is one of the hard problems of artificial intelligence. We thought it would be easier than playing chess or proving theorems,” he says. But there’s a lot to get to grips with: how bunches of neurons represent something like a face; how that information moves between areas in the visual system; and how the neural code representing a face changes as it does so. Building a model from the bottom up, neuron by neuron, is too complicated — “there’s not enough resources or time to do it this way”, says Kriegeskorte. So his team is comparing existing models of vision to brain data, to see what fits best.

Real world

Devising a decoding model that can generalize across brains, and even for the same brain across time, is a complex problem. Decoders are generally built on individual brains, unless they’re computing something relatively simple such as a binary choice — whether someone was looking at picture A or B. But several groups are now working on building one-size-fits-all models. “Everyone’s brain is a little bit different,” says Haxby, who is leading one such effort. At the moment, he says, “you just can’t line up these patterns of activity well enough”.

Standardization is likely to be necessary for many of the talked-about applications of brain decoding — those that would involve reading someone’s hidden or unconscious thoughts. And although such applications are not yet possible, companies are taking notice. Haynes says that he was recently approached by a representative from the car company Daimler asking whether one could decode hidden consumer preferences of test subjects for market research. In principle it could work, he says, but the current methods cannot work out which of, say, 30 different products someone likes best. Marketers, he says, should stick to what they know for now. “I’m pretty sure that with traditional market research techniques you’re going to be much better off.”

Companies looking to serve law enforcement have also taken notice. No Lie MRI in San Diego, California, for example, claims that it can use decoding-related techniques to distinguish a lie from a truth in a brain scan. Law scholar Hank Greely at Stanford University in California has written in the Oxford Handbook of Neuroethics (Oxford University Press, 2011) that the legal system could benefit from better ways of detecting lies, checking the reliability of memories, or even revealing the biases of jurors and judges. Some ethicists have argued that privacy laws should protect a person’s inner thoughts and desires as private, but Julian Savulescu, a neuroethicist at the University of Oxford, UK, sees no problem in principle with deploying decoding technologies. “People have a fear of it, but if it’s used in the right way it’s enormously liberating.” Brain data, he says, are no different from other types of evidence. “I don’t see why we should privilege people’s thoughts over their words,” he says.

Haynes has been working on a study in which participants tour several virtual-reality houses, and then have their brains scanned while they tour another selection. Preliminary results suggest that the team can identify which houses their subjects had been to before. The implication is that such a technique might reveal whether a suspect had visited the scene of a crime before. The results are not yet published, and Haynes is quick to point out the limitations to using such a technique in law enforcement. What if a person has been in the building, but doesn’t remember? Or what if they visited a week before the crime took place? Suspects may even be able to fool the scanner. “You don’t know how people react with countermeasures,” he says.

Other scientists also dismiss the implication that buried memories could be reliably uncovered through decoding. Apart from anything else, you need a 15-tonne, US$3-million fMRI machine and a person willing to lie very still inside it and actively think secret thoughts. Even then, says Gallant, “just because the information is in someone’s head doesn’t mean it’s accurate”. Right now, psychologists have more reliable, cheaper ways of getting at people’s thoughts. “At the moment, the best way to find out what someone is going to do,” says Haynes, “is to ask them.”

Source: Nature

Discovered: the galaxy that’s so far away we’re seeing it as it was 13 billion years ago


Scientists detected z8-GND-5296 with the help of the Hubble Space Telescope and the Keck Telescope in Hawaii.

Astronomers have detected the furthest known galaxy in the Universe, more than 13 billion light years away at the very edge of observable space.

Because of the time it takes for its light to reach Earth, the galaxy is seen today as it was just 700 million years after the Big Bang – the primordial event that created the Universe some 13.8 billion years ago.

Scientists detected the galaxy – known as z8-GND-5296 – with the help of the Hubble Space Telescope in low Earth orbit and the Keck Telescope on the summit of Mauna Kea in Hawaii.

They searched a library of about 100,000 of the most distant galaxies before finding one whose position in space could be fixed accurately by analysing the infrared light it had emitted.

A spectroscopic analysis of the galaxy’s light showed how far its wavelength had shifted towards the red end of the spectrum. This “redshift”, combined with the known expansion rate of the Universe, was used to measure the galaxy’s precise distance from Earth.
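The arithmetic behind a spectroscopic redshift is simple: compare the observed wavelength of a known emission line with its laboratory value. A minimal sketch in Python, assuming the line in question is hydrogen’s Lyman-alpha – a common choice for confirming very distant galaxies, though the specific line is an illustrative assumption, not stated above:

```python
# Redshift from a spectral line: z = (observed wavelength / rest wavelength) - 1.
# Lyman-alpha's rest wavelength is 1215.67 angstroms (ultraviolet).
LYMAN_ALPHA_REST = 1215.67  # angstroms

def redshift(observed, rest=LYMAN_ALPHA_REST):
    """Redshift implied by an emission line observed at `observed` angstroms."""
    return observed / rest - 1.0

# At z = 7.51, every wavelength is stretched by a factor (1 + z) = 8.51:
z = 7.51
observed = LYMAN_ALPHA_REST * (1 + z)
print(f"observed wavelength: {observed:.0f} A (~{observed / 1e4:.2f} microns, near-infrared)")
print(f"recovered redshift: {redshift(observed):.2f}")
```

Stretching an ultraviolet line at roughly 1216 angstroms by a factor of 8.51 pushes it past 10,000 angstroms, which is why infrared-sensitive instruments were needed to find and confirm such galaxies.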

“What makes this galaxy unique, compared with other such discoveries, is the spectroscopic confirmation of its distance,” said Bahram Mobasher of the University of California, Riverside and a member of the research team.

“By observing a galaxy that far back in time, we can study the earliest formation of galaxies. By comparing properties of galaxies at different distances, we can explore the evolution of galaxies throughout the age of the Universe,” Dr Mobasher said.

At this particular point in its early history, the z8-GND-5296 galaxy was producing new stars at a rate of about 300 a year – about 100 times faster than our own galaxy, the Milky Way.

Only one other known object lies further away in space – a massive star that exploded some 70 million years earlier. The period before this is known as the “cosmic dark ages” because so little is known about it.

Astronomers believe they are close to finding the first galaxies that were probably responsible for the transition from an opaque Universe, when much of its hydrogen was neutral, to a translucent Universe, when the hydrogen became ionised – called the Era of Re-ionisation.

Steven Finkelstein of the University of Texas at Austin, who led the project, said the new galaxy is in the same region of the sky as the previous record holder.

“So we’re learning something about the distant universe. There are way more regions of very high star formation than we previously thought. There must be a decent number of them if we happen to find two in the same area of the sky,” Dr Finkelstein said.

‘Most distant galaxy’ discovered


An international team of astronomers has detected the most distant galaxy yet.

The galaxy is about 30 billion light-years away and is helping scientists shed light on the period that immediately followed the Big Bang.

It was found using the Hubble Space Telescope and its distance was then confirmed with the ground-based Keck Observatory in Hawaii.

The study is published in the journal Nature.

Because it takes light so long to travel from the outer edge of the Universe to us, the galaxy appears as it was 13.1 billion years ago (its present-day distance of 30 billion light-years is greater than 13.1 billion light-years because the Universe has continued to expand while the light was travelling).

Lead researcher Steven Finkelstein, from the University of Texas at Austin, US, said: “This is the most distant galaxy we’ve confirmed. We are seeing this galaxy as it was 700 million years after the Big Bang.”

The far-off galaxy goes by the catchy name of z8_GND_5296.

Astronomers were able to measure how far it was from Earth by analysing its colour.

Because the Universe is expanding and everything is moving away from us, light waves are stretched. This makes objects look redder than they actually are.

Astronomers measure this apparent colour change on a scale called redshift.

They found that this galaxy has a redshift of 7.51, beating the previous record-holder, which had a redshift of 7.21.

This makes it the most distant galaxy ever found.
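The two headline numbers – 13.1 billion years and 30 billion light-years – both follow from that single redshift once a cosmological model is chosen. A rough sketch of the calculation in Python, using an assumed flat universe with Planck-like parameters (H0 = 67.7 km/s/Mpc, matter fraction 0.31; these values are illustrative, not taken from the article):

```python
import math

# Assumed flat-universe parameters (illustrative, Planck-like).
H0 = 67.7                  # Hubble constant, km/s/Mpc
OMEGA_M = 0.31             # matter fraction
OMEGA_L = 1 - OMEGA_M      # dark-energy fraction
C = 299792.458             # speed of light, km/s
HUBBLE_TIME = 977.8 / H0   # 1/H0 in gigayears (977.8 converts km/s/Mpc units)
MPC_TO_GLY = 3.2616e-3     # megaparsecs to billions of light-years

def E(z):
    """Dimensionless expansion rate H(z)/H0 for a flat matter + lambda universe."""
    return math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def trapezoid(f, a, b, n=20000):
    """Simple trapezoidal integration of f over [a, b]."""
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

z = 7.51

# Present-day (comoving) distance: (c/H0) * integral of dz'/E(z') from 0 to z.
distance_gly = (C / H0) * trapezoid(lambda x: 1 / E(x), 0, z) * MPC_TO_GLY

# Age of the Universe at emission: (1/H0) * integral of dz'/((1+z')E(z')) above z
# (z = 3000 is effectively infinity for this integrand).
age_at_emission = HUBBLE_TIME * trapezoid(lambda x: 1 / ((1 + x) * E(x)), z, 3000)
age_today = HUBBLE_TIME * trapezoid(lambda x: 1 / ((1 + x) * E(x)), 0, 3000)
lookback = age_today - age_at_emission

print(f"present-day distance: ~{distance_gly:.0f} billion light-years")
print(f"light emitted ~{age_at_emission * 1000:.0f} million years after the Big Bang")
print(f"lookback time: ~{lookback:.1f} billion years")
```

With these assumptions the model reproduces the article’s figures: light emitted roughly 700 million years after the Big Bang, a lookback time near 13.1 billion years, and a present-day distance close to 30 billion light-years, larger than 13.1 billion because the Universe kept expanding while the light was in transit.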

z8_GND_5296 is churning out stars at a remarkable rate, say astronomers

The system is small – about 1-2% of the mass of the Milky Way – but rich in heavier elements.

But it has a surprising feature: it is turning gas and dust into new stars at a remarkable rate, churning them out hundreds of times faster than our own galaxy can.

It is only the second far-flung galaxy found to have such a high star-production rate.

  • Human eyes can see long distances, but the further away an object gets the harder it is to see in detail
  • Telescopes make a distant object appear larger by collecting its light and focusing it to a point
  • The large reflecting Hubble Telescope creates images from the Universe’s visible light and can also detect infrared and ultraviolet radiation
  • The optical and infrared Keck Telescopes examine young stars and can look into the centre of galaxies

Prof Finkelstein said: “One very interesting way to learn about the Universe is to study these outliers and that tells us something about what sort of physical processes are dominating galaxy formation and galaxy evolution.

“What was great about this galaxy is not only is it so distant, it is also pretty exceptional.”

He added that in the coming years, astronomers are likely to discover even more distant galaxies when Nasa’s James Webb Space Telescope (JWST) is launched and other ground-based telescopes come online.

Commenting on the research, Dr Marek Kukula, Public Astronomer at the Royal Observatory Greenwich, told BBC News: “This, along with some other evidence, shows that there are already quite surprisingly evolved galaxies in the very early Universe.

“This high star-formation rate maybe is a clue as to why these galaxies can form so quickly.”

Prof Alfonso Aragon-Salamanca, from the University of Nottingham, added: “This is an important step forward, but we need to continue looking for more.

“The further away we go, the closer we will get to discovering the very first stars that ever formed in the Universe. The next generation of telescopes will make this possible.”

But Dr Stephen Serjeant from the Open University said: “Chasing ultra-high redshift galaxies is a very exciting but equally very difficult game, and many claims of extremely distant galaxies have since turned out to be more nearby interlopers.”

Chemists Work to Desalt the Ocean for Drinking Water, One Nanoliter at a Time.


By creating a small electrical field that removes salts from seawater, chemists at The University of Texas at Austin and the University of Marburg in Germany have introduced a new method for the desalination of seawater that consumes less energy and is dramatically simpler than conventional techniques. The new method requires so little energy that it can run on a store-bought battery.


The process evades the problems confronting current desalination methods by eliminating the need for a membrane and by separating salt from water at a microscale.

The technique, called electrochemically mediated seawater desalination, was described last week in the journal Angewandte Chemie. The research team was led by Richard Crooks of The University of Texas at Austin and Ulrich Tallarek of the University of Marburg. The technique is patent-pending and in commercial development by the startup company Okeanos Technologies.

“The availability of water for drinking and crop irrigation is one of the most basic requirements for maintaining and improving human health,” said Crooks, the Robert A. Welch Chair in Chemistry in the College of Natural Sciences. “Seawater desalination is one way to address this need, but most current methods for desalinating water rely on expensive and easily contaminated membranes. The membrane-free method we’ve developed still needs to be refined and scaled up, but if we can succeed at that, then one day it might be possible to provide fresh water on a massive scale using a simple, even portable, system.”

This new method holds particular promise for the water-stressed areas in which about a third of the planet’s inhabitants live. Many of these regions have access to abundant seawater but not to the energy infrastructure or money necessary to desalt water using conventional technology. As a result, millions of deaths per year in these regions are attributed to water-related causes.

“People are dying because of a lack of freshwater,” said Tony Frudakis, founder and CEO of Okeanos Technologies. “And they’ll continue to do so until there is some kind of breakthrough, and that is what we are hoping our technology will represent.”

To achieve desalination, the researchers apply a small voltage (3.0 volts) to a plastic chip filled with seawater. The chip contains a microchannel with two branches. At the junction of the channel an embedded electrode neutralizes some of the chloride ions in seawater to create an “ion depletion zone” that increases the local electric field compared with the rest of the channel. This change in the electric field is sufficient to redirect salts into one branch, allowing desalinated water to pass through the other branch.

“The neutralization reaction occurring at the electrode is key to removing the salts in seawater,” said Kyle Knust, a graduate student in Crooks’ lab and first author on the paper.

Like a troll at the foot of the bridge, the ion depletion zone prevents salt from passing through, resulting in the production of freshwater.

Thus far Crooks and his colleagues have achieved 25 percent desalination. Although drinking water requires 99 percent desalination, they are confident that goal can be achieved.

“This was a proof of principle,” said Knust. “We’ve made comparable performance improvements while developing other applications based on the formation of an ion depletion zone. That suggests that 99 percent desalination is not beyond our reach.”

The other major challenge is to scale up the process. Right now the microchannels, about the size of a human hair, produce about 40 nanoliters of desalted water per minute. To make this technique practical for individual or communal use, a device would have to produce liters of water per day. The authors are confident that this can be achieved as well.
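The gap between 40 nanoliters per minute and household quantities is easy to quantify. A back-of-the-envelope sketch in Python, using only the per-channel rate quoted above (the one-liter-per-day target is an illustrative assumption):

```python
# Per-channel output quoted above: 40 nanoliters of desalted water per minute.
NL_PER_MIN = 40

# One channel's daily output, converted from nanoliters to liters.
liters_per_day = NL_PER_MIN * 60 * 24 * 1e-9

# Channels needed, running in parallel, to reach an assumed 1 liter/day target.
channels_for_1_l = 1.0 / liters_per_day

print(f"one channel: {liters_per_day * 1e6:.1f} microliters/day")
print(f"channels for 1 liter/day: ~{channels_for_1_l:,.0f}")
```

A single channel yields under 60 microliters per day, so on the order of 17,000 hair-width channels would have to run in parallel to produce one liter per day – which is why massive parallelization, rather than a faster channel, is the natural scale-up path.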

If these engineering challenges are surmounted, they foresee a future in which the technology is deployed at different scales to meet different needs.

“You could build a disaster relief array or a municipal-scale unit,” said Frudakis. “Okeanos has even contemplated building a small system that would look like a Coke machine and would operate in a standalone fashion to produce enough water for a small village.”

Source: http://www.utexas.edu


Expert calls for more research on endocrine disrupters as risk for child development.


Endocrine-disrupting chemicals have been the topic of numerous studies, but additional studies are needed to determine whether there is a link between the chemicals and child development, according to one researcher.

John D. Meeker, ScD, associate professor and associate chair of environmental health sciences at the University of Michigan School of Public Health, wrote a review published in the Archives of Pediatrics & Adolescent Medicine discussing adverse effects of endocrine-disrupting chemicals (EDCs), calling for more research on the issue.

“Once inside the body, EDCs can affect the endocrine system through a multitude of specific mechanisms that can target different levels of the hypothalamic-pituitary-gonad, thyroid, and adrenal axes, ranging from effects on hormone receptors to effects on hormone synthesis, secretion, or metabolism; therefore, they can have far-reaching health implications throughout the life course,” Meeker wrote.

Bisphenol A has come under the media microscope recently, but Meeker said other EDCs should be studied further, including persistent organic pollutants, phthalates, contemporary-use pesticides, and chemicals such as parabens, triclosan, perchlorate, alternative brominated and chlorinated flame retardants, and fluorinated organic compounds such as perfluorooctanoate and perfluorooctane sulfonate.

Meeker said there is inconsistent evidence linking reduced birth weight with exposure to persistent organic pollutants, organophosphate insecticides and triazine herbicides. However, several studies have been conducted to assess the relationship between the EDCs and fetal growth and gestation duration.

“Because study designs and results have varied across studies, for most EDCs it is currently difficult to conclude whether a relationship exists between exposure and birth weight,” he wrote.

Meeker said male reproductive tract development, pubertal development, neurodevelopment and obesity are all areas of concern when it comes to EDC exposure. He encourages clinicians to consult with other physicians or professionals in the environmental and occupational health field to address potential risks for environment-related health conditions.

“A growing body of evidence shows that exposure to a number of chemicals may adversely affect child development through altered endocrine function. However, many of the potential exposure-response relationships described here have not been adequately explored,” Meeker concluded.

Disclosure: Dr. Meeker reports no relevant financial disclosures.

Perspective

 

Andrea C. Gore

  • The primary reason this review is important is that it provides a balanced summary of the evidence for endocrine disruption in humans. The problems with linking EDCs to disease in humans include the potentially long lag between exposure during critical developmental periods (fetus, infant) and the manifestation of disease. Human exposures vary between populations and in individuals, and we are each exposed throughout our lives to complex mixtures. Nevertheless, the evidence is growing, from a combination of controlled animal experiments, epidemiology and exposure assessment, that there is a very real risk from EDCs, leading to a variety of disease outcomes.
    • Andrea C. Gore, PhD
    • Gustavus and Louise Pfeiffer Professor of Pharmacology and Toxicology
      University of Texas at Austin

Source: Endocrine Today.