Late night food ‘breeds weight gain’


Eating after the sun has gone down might trigger weight gain, say researchers who have been studying the effect in mice.

Even when given the same number of calories overall, mice that ate around the clock put on more fat.

Fasting for at least 12 hours appears to switch on important fat burning pathways in the body.

The US team told the journal Cell Metabolism they now plan human tests to see if the same is true in man.

During the study, around 400 mice were fed diets high in sugar, fat or both, or normal diets, over different time periods.

Overall, mice that were only allowed to feed for nine or 12 hours gained less weight than mice that could eat the same amount of food but at any time they wanted in a 24-hour period.

Overnight fast


Even when the mice on restricted feeding times were allowed a blow-out at weekends and could eat whenever they liked, they still gained less weight, suggesting that the diet can withstand some temporary interruptions, the researchers said.

And when obese mice that had been eating freely were moved to a restricted schedule, they lost 5% of their body weight even though they were eating the same number of calories as before.

The researchers believe a key to controlling weight gain could be sticking to a consistent 12-hour fast every 24 hours.

In the experiments, fasting at night had beneficial effects on blood sugar and cholesterol and reversed the effects of diabetes in the mice.

Study leader Dr Satchidananda Panda, an associate professor at the Salk Institute in California, said that brown fat, which burns energy at a much higher rate, is also activated by this approach.

Additional work in mice by another team showed that limiting eating to half the day also altered the balance of microbes in the gut, which experts say might be important.

Dr Perry Barrett, a senior research fellow at the University of Aberdeen who does research on regulation of appetite, said: “The revelation that there is a circadian rhythm in gut microbes now adds another dimension to this very interesting area of research.”

He said there had not been many human studies in this fairly new area of ‘chrono-nutrition’ but those that had been done had so far concentrated on sleep cycles.

TV box helps colour-blind viewers



The tech works by enhancing certain colours in an image. This picture is unaltered…

A set-top TV box that can help people with colour blindness better differentiate shades has been developed by a Cambridge firm.

Eyeteq, from University of East Anglia-based company Spectral Edge, alters colours frame-by-frame – without spoiling them for the non-colour-blind.

The technology could also be used on video games, the company said.

A colour-blindness awareness group has called for Eyeteq to be part of all televisions as standard.

The condition affects one in 12 men, and one in 200 women, with red-green colour blindness the most common.

According to Spectral Edge’s website: “Eyeteq gently modifies colours in images in such a way that colour-blind observers enjoy both improved visibility as well as the overall appearance.”

“With careful design using mathematical perception models,” it adds, “we are able to remap colours to maximise discrimination for colour-blind people, at the same time as minimising the strength of the effect for non-colour-blind people.”

Colour clash – colour-blind football fans complained about Liverpool and Ludogorets playing in red and green

The company says those who are not colour blind do not mind the colour change as it is slight. It also says there is no noticeable lag, as pictures are remapped in real time, a process that takes milliseconds.

Liverpool woe

The technology had now reached proof-of-concept stage, Spectral Edge’s managing director Christopher Cytera told the BBC.

“The next step is to refine and upgrade that proof of concept,” he said.

“At the moment it’s working at 720p resolution, we want to get it to 1080p.”

Spectral Edge then plans to license the technology to manufacturers to include in new televisions.


Are colour blind gamers left out?


Next time you are playing a video game online and a member of your own team shoots you, spare a thought – they could be colour blind.

Colour Blind Awareness, a group promoting the needs of colour-blind people, said it believed Eyeteq should become a standard feature.

“It has such good feedback,” said founder Kathryn Albany-Ward.

“When I tried it on my son, he gave it 10 out of 10 – it was like opening his eyes up.”

She told the BBC the technology would greatly help when watching certain sporting events.

A recent European football match between Liverpool and Bulgarian side Ludogorets left colour-blind viewers frustrated as the teams played in red and green.

Am I colour blind?

Blobs of colour in a test for vision deficiency

Can you see a number in the image above? If not, you may suffer from colour blindness.

Those with the condition said it was like watching 22 players in the same kit.

Big market

Mr Cytera said he hoped his company’s technology would become a “badge of honour” for manufacturers promoting accessibility credentials.

“There is a big market – 8% of men worldwide are affected, which is a huge number.

“Lots of great work done in audio description, and subtitling, but nothing so far for colour blindness.”

Eyeteq works by presenting the viewer with a slider, allowing adjustments for severity.
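
Spectral Edge has not published the Eyeteq algorithm, but the general idea of a strength-adjustable colour remap can be sketched roughly as follows. This is a toy illustration only; the function name, the choice of channels and the blending rule are assumptions, not the company's method.

```python
# Toy illustration of a strength-adjustable colour remap (NOT Eyeteq's actual
# algorithm): push some of the red-green difference, which protan/deutan viewers
# struggle to see, into the blue channel, scaled by a user-controlled "strength"
# slider so the change stays subtle for viewers with normal colour vision.

import numpy as np

def toy_daltonise(rgb: np.ndarray, strength: float = 0.5) -> np.ndarray:
    """rgb: float array in [0, 1], shape (height, width, 3); strength in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    red_green = r - g                      # the opponent signal that gets confused
    out = rgb.copy()
    out[..., 2] = np.clip(b + 0.5 * strength * red_green, 0.0, 1.0)
    return out

# Example: remap one 720p frame at the slider's halfway setting.
frame = np.random.rand(720, 1280, 3)
adjusted = toy_daltonise(frame, strength=0.5)
```

In a set-top box such a remap would have to run on every frame in real time, which is why the company stresses that the whole operation takes only milliseconds.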

The company has released a free mobile app for people to test out the system.

Orion ‘Mars ship’ set for test flight


A US space capsule that could help get humans to Mars is about to make its maiden flight.

Orion will be launched on a Delta rocket out of Cape Canaveral in Florida on a short journey above the Earth to test key technologies.

The conical vessel is reminiscent of the Apollo command ships that took men to the Moon in the 1960s and 1970s, but bigger and with cutting-edge systems.

Given that this is a first outing, there will be no people aboard.

Nonetheless, the US space agency describes the demonstration as a major event.

Nasa has a launch window for Orion of about two-and-a-half hours, which opened at 07:05 local time (12:05 GMT).

The launch preparations had to be stopped shortly before the opening of the window because a boat strayed into the eastern part of the launch range. After that, the countdown had to be held because of strong winds and a technical issue.

“This is huge; Thursday is a giant day for us,” said Nasa administrator Charlie Bolden.

Flight profile

Orion is being developed alongside a powerful new rocket that will have its own debut in 2017 or 2018.

Together, they will form the core capabilities needed to send humans beyond the International Space Station to destinations such as the Red Planet.

For Thursday’s flight, the Delta IV-Heavy rocket – currently the beefiest launcher in the world – is being used as a stand-in.

It will send Orion twice around the globe, throwing the ship up to an altitude of almost 6,000km (3,600 miles).

This will set up a fast fall back to Earth, with a re-entry speed into the atmosphere close to 30,000km/h (20,000mph) – near what would be expected of a capsule coming back from the Moon.
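
That quoted speed can be sanity-checked with the vis-viva equation, v^2 = mu*(2/r - 1/a). The sketch below is only a back-of-envelope estimate: the apogee is taken from the article, while the near-surface perigee and the 122 km entry altitude are assumptions, not Nasa's trajectory data.

```python
# Back-of-envelope check of the quoted re-entry speed using the vis-viva
# equation v^2 = mu * (2/r - 1/a). The ~5,800 km apogee comes from the article;
# the near-surface perigee and 122 km entry altitude are assumed for illustration.

import math

mu = 398_600.0          # Earth's gravitational parameter, km^3 / s^2
R_earth = 6_371.0       # mean Earth radius, km
apogee_alt = 5_800.0    # km, "almost 6,000 km"
entry_alt = 122.0       # km, a typical atmospheric entry interface

r_apogee = R_earth + apogee_alt
r_perigee = R_earth                      # assume the ellipse dips back to the surface
a = 0.5 * (r_apogee + r_perigee)         # semi-major axis of the transfer ellipse

r_entry = R_earth + entry_alt
v = math.sqrt(mu * (2.0 / r_entry - 1.0 / a))      # speed at entry interface, km/s
print(f"{v:.1f} km/s, about {v * 3600:,.0f} km/h")  # ~8.9 km/s, roughly 32,000 km/h
```

That lands in the same ballpark as the figure quoted above (20,000 mph is about 32,000 km/h), which is the point of flying so high: the fall back from almost 6,000 km produces a lunar-return-class entry speed.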

It should give engineers the opportunity to check the performance of Orion’s critical heat shield, which is likely to experience temperatures in excess of 2,000C (4,000F).

They will also watch how the parachutes deploy as they gently lower the capsule into Pacific waters off Mexico’s Baja California Peninsula.


Although Orion is a Nasa project, the development has been contracted to Lockheed Martin, and the aerospace giant will be running the show on Thursday.

But the US space agency will be there in the background, keen to see that the LM designs meet their specifications.

A good example is the radiation protection built into the capsule. Radiation will be one of the major hazards faced on voyages into deep space, and Orion’s systems must cope with the challenge.

“We’re going to be flying through parts of the Van Allen radiation belts, since we’re 15 times higher than the space station,” explained Mark Geyer, Nasa’s Orion programme manager.

“The ISS would not have to deal with radiation but we will, and so will every vehicle that goes to the Moon. That’s a big issue for the computers. These processors that are now so small – they’re great for speed but they’re more susceptible to radiation. That’s something we have to design for and see how it all behaves.”

Even if today it had a fully functioning Orion, with its dedicated rocket, Nasa would not be able to mount a mission to another planetary body because the technology to carry out surface operations has not been produced yet.

This worries observers like space historian John Logsdon, who doubts the policy as currently envisaged is sustainable.

He told the BBC: “The first launch with a crew aboard is 2020/21, and then nothing very firmly is defined after that, although of course Nasa has plans. That’s too slow-paced to keep the launch teams sharp, to keep everyone engaged. It’s driven by the lack of money, not the technical barriers.”

One solution is to pull in international partners. Europe, for instance, is going to make the “back end” for all future Orion capsules.

This service module is principally the propulsion unit that drives Orion through space. Prof Logsdon wonders if additional partners might want to pick up some of the other technologies needed to help speed the exploration path.

A Mars mission with Orion is perhaps still 20 years away

Did Columbus really bring syphilis to Europe?


A new study is intensifying the debate over whether Christopher Columbus or his crews brought syphilis from the New World to Europe, setting the stage for hundreds of years of illness and death.

Researchers in Bosnia report that an ancient skeleton of a young Croatia-area man shows signs of the disease. That would mean that the disease existed there long before the era of the great explorers, they said.

Two specialists questioned the study’s findings. They said it’s still most likely that the crews of Columbus’ ships are responsible for spreading the sexually transmitted disease across Europe.

“Despite the many efforts to suggest otherwise, there is no Old World evidence of syphilis prior to 1492,” said Dr. Bruce Rothschild, a professor of medicine at the University of Kansas who studies the origins of diseases like syphilis.

But study lead author Ivana Anteric, a researcher at Croatia’s University of Split, insisted that the Columbus theory isn’t proven.

The origin of syphilis has been a big topic of debate in the scientific world, with three major theories emerging. The most common suggests that syphilis existed in the New World and traveled to Europe via Columbus’ crew upon his return.

Another theory “holds that syphilis has been present in Europe before Columbus,” but it’s difficult to find evidence for this because it looked similar to other diseases, Anteric said. Under this theory, the disease became more noticeable after Columbus’ time.

And a third theory “assumes that it existed in the Old World and New World, but four different syndromes developed,” she said.

Whatever the truth, syphilis had a devastating effect in Europe as it spread rapidly shortly after the Columbus voyage, a fact that contributed to the idea that sailors brought it back from the Americas. “In the beginning of the 16th century, about one-third of inhabitants of Paris had syphilis,” Anteric said.

“Syphilis was one of the first global diseases, so it is very important to understand where it came from and how it spread,” Anteric said. Understanding its origins may also “be helpful in combating diseases today.”

In the new study, researchers examined 403 skeletons from Croatia, in southern Europe. The skeletons were from various time periods going back to prehistory.

The researchers report that one skeleton showed signs of syphilis. It’s the skeleton of a man in his 20s thought to have lived in the period from the 2nd to 6th centuries. The study authors said their analysis determined that only syphilis could be the culprit behind the indications of disease in the skeleton. (DNA testing was not an option because, the study said, it can’t be used to confirm syphilis in an ancient body.)

As a result of their research, the study authors wrote, “we believe that the Columbian theory of syphilis origin is not sustainable.”

Not everyone is on board with the study’s findings. Rothschild said the skeleton’s bones actually suggest a different condition, not syphilis.

“Syphilis is clearly a New World product,” he added.

Rob Knell, a senior lecturer at Queen Mary University of London who studies the evolution of disease, is also skeptical.

“One diagnosis in a pre-Columbian skeleton tells us very little about the origins of syphilis,” especially in light of molecular evidence suggesting that syphilis had its beginnings in North America, Knell said.

Why Time Can’t Go Backward: Physicists Explain


“Time is what keeps everything from happening at once,” wrote Ray Cummings in his 1922 science fiction novel “The Girl in the Golden Atom,” which sums up time’s function quite nicely. But how does time stop everything from happening at once? What mechanism drives time forward, but not backward?

In a recent study published in the journal Physical Review Letters, a group of theoretical physicists re-investigate the “Arrow of Time” — a concept that describes the relentless forward march of time — and highlight a different way of looking at how time manifests itself over universal scales.

Traditionally, time is described by the “past hypothesis” that assumes that any given system begins in a low entropy state and then, driven by thermodynamics, its entropy increases. In a nutshell: The past is low entropy and the future is high entropy, a concept known as thermodynamic time asymmetry.

In our everyday experience, we can find many examples of increasing entropy, such as a gas filling a room or an ice cube melting. In these examples, an irreversible increase in entropy (and therefore disorder) is observed.

If this is applied on a universal scale, it is presumed that the Big Bang spawned the Universe in a low entropy state — i.e. a state of minimum entropy. Over the aeons, as the Universe expanded and cooled, the entropy of this large-scale system has increased. Therefore, as the hypothesis goes, time is intrinsically linked with the degree of entropy, or disorder, in our Universe.
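
A standard textbook example (not from the paper) makes the asymmetry concrete: when an ideal gas expands freely to twice its volume, its entropy rises by nR ln 2, and the reverse step is never observed.

```python
# Standard textbook illustration of thermodynamic time asymmetry (not taken from
# the study): the entropy change for free expansion of an ideal gas is
# dS = n * R * ln(V2 / V1). Doubling the volume gives dS > 0; the time-reversed
# process (the gas spontaneously shrinking back) would need dS < 0 and never happens.

import math

R = 8.314            # gas constant, J / (mol K)
n = 1.0              # amount of gas, mol
V1, V2 = 1.0, 2.0    # volumes in arbitrary units; only the ratio matters

delta_S = n * R * math.log(V2 / V1)
print(f"Entropy change: {delta_S:.2f} J/K")   # about +5.76 J/K
```
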
But there are several problems with this idea.

Several lines of observational evidence point to an environment just after the Big Bang that was a hot and extremely disordered mess of primordial particles. As the Universe matured and cooled, gravity took over and made the Universe more ordered and more complex — from the cooling clouds of gas, stars formed and planets evolved from gravitational collapse. Eventually, organic chemistry became possible, giving rise to life and humans that philosophize about time and space. On a Universal scale, therefore, “disorder” has effectively decreased, not increased as the “past hypothesis” presumes.

This, argues co-investigator Flavio Mercati of the Perimeter Institute (PI) for Theoretical Physics in Ontario, Canada, is an issue with how entropy is measured.

As entropy is a physical quantity with dimensions (like energy and temperature), there needs to be an external reference frame against which it can be measured. “This can be done for subsystems of the universe because the rest of the universe sets these references for them, but the whole universe has, by definition, nothing exterior to it with respect to define these things,” Mercati wrote in an email to Discovery News.
So if not entropy, what could be driving universal time forward?

Complexity is a dimensionless quantity that, in its most basic form, describes how complex a system is. So, if one looks at our Universe, complexity is directly linked with time; as time progresses, the Universe becomes increasingly structured.

“The question we seek to answer in our paper is: what set these systems in that very low-entropy state in the first place? Our answer is: gravity, and its tendency to create order and structure (complexity) from chaos,” said Mercati.

To test this idea, Mercati and his colleagues created basic computer models to simulate particles in a toy universe. They found that, no matter how the simulation was run, the universe’s complexity always increased, and never decreased, with time.

From the Big Bang, the Universe started in its lowest-complexity state (the hot ‘soup’ of disordered particles and energy). Then, as the Universe cooled to the point that gravity began to take over, gases clumped together, stars formed and galaxies evolved. The Universe became inexorably more complex, and gravity is the driving force of this increase in complexity.
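
The flavour of that result can be sketched with a much cruder toy than the authors' own model: a handful of gravitating particles started from a nearly uniform cloud, with "complexity" tracked as the ratio of the largest to the smallest pairwise separation. Everything below, including that complexity proxy, is an illustrative assumption, not the paper's actual setup or measure.

```python
# Crude illustrative sketch (not the paper's model): a few gravitating particles
# starting from a near-uniform cloud. A simple dimensionless "complexity" proxy,
# the ratio of the largest to the smallest pairwise separation, tends to grow as
# gravity clumps the particles into structure.

import numpy as np

def complexity(x):
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    pairs = d[np.triu_indices(len(x), k=1)]
    return pairs.max() / pairs.min()

def accelerations(x, m, soft=1e-2):
    diff = x[None, :, :] - x[:, None, :]          # diff[i, j] = x_j - x_i
    r2 = (diff ** 2).sum(axis=-1) + soft ** 2     # softened squared distances
    np.fill_diagonal(r2, np.inf)                  # no self-force
    return (m[None, :, None] * diff / r2[..., None] ** 1.5).sum(axis=1)

rng = np.random.default_rng(0)
n = 20
x = rng.normal(size=(n, 3))       # nearly structureless "particle soup"
v = np.zeros((n, 3))
m = np.ones(n)

dt, history = 1e-3, []
for _ in range(5000):             # leapfrog integration, G = 1 units
    v += 0.5 * dt * accelerations(x, m)
    x += dt * v
    v += 0.5 * dt * accelerations(x, m)
    history.append(complexity(x))

print(f"complexity: {history[0]:.1f} at the start, {history[-1]:.1f} at the end")
```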

“Every solution of the gravitational toy model we studied has this property of having somewhere in the middle a very homogeneous, chaotic and unstructured state, which looks very much like the plasma soup that constituted the universe at the time the Cosmic Microwave Background was created,” said Mercati. “Then in both time directions from that state gravity enhances the inhomogeneities and creates a lot of structure and order, in an irreversible way.”

As the Universe matures, he added, the subsystems become isolated enough so that other forces set up the conditions for the ‘classical’ arrow of time to dominate in low-entropy subsystems. In these subsystems, such as daily life on Earth, entropy can take over, creating a “thermodynamical arrow of time.”

Over Universal scales, our perception of time is driven by the continuous growth of complexity, but in these subsystems, entropy dominates.

“The universe is a structure whose complexity is growing,” said Mercati in a PI press release. “The universe is made up of big galaxies separated by vast voids. In the distant past, they were more clumped together. Our conjecture is that our perception of time is the result of a law that determines an irreversible growth of complexity.”

The next step in this research would be to look for observational evidence, something Mercati and his team are working on. “…we don’t know yet whether there is any (observational) support, but we know what kind of experiments have a chance of testing our idea. These are cosmological observations.”

For now, he hasn’t revealed what kinds of cosmological observations will be investigated, only that they will be detailed in an upcoming, and likely fascinating, paper.

The Salk Polio Vaccine: ‘Greatest Public Health Experiment in History’


A nationwide trial of an experimental vaccine using school children as virtual guinea pigs would be unthinkable in the United States today.

But that’s exactly what happened in 1954 when frantic American parents — looking for anything that could beat back the horror of polio — offered up more than 1.8 million children to serve as test subjects. They included 600,000 kids who would be injected with either a new polio vaccine or a placebo.

Equally remarkable, the Salk polio vaccine trial stands as the largest peacetime mobilization of volunteers in American history, requiring the efforts of 325,000 doctors, nurses, educators and private citizens — with no money from federal grants or pharmaceutical companies. The results were tracked by volunteers using pencils and paper.

And it lasted just one year, with officials hopeful at the outset that they would be able to begin giving the vaccine to children within weeks of the final results.

“I can’t imagine what the disease would be today that could get that many parents to sign up their children for an experimental vaccine trial,” said Daniel Wilson, a history professor at Muhlenberg College in Allentown, Pa., who has written three books on the history of polio in the United States and is himself a polio survivor. “I think it’s a measure of how much people feared polio that mothers and fathers were willing to accept the word of researchers that the vaccine was safe.”

Financing for the trial came from donations made to the National Foundation for Infantile Paralysis — the forerunner of the March of Dimes. The foundation was created in 1938 by President Franklin D. Roosevelt and his law partner, Basil O’Connor.

Roosevelt had a profoundly personal interest in defeating polio — the disease left him crippled in 1921 at age 39, and he spent his entire presidency in leg braces, confined to a wheelchair, unable to even get up by himself.

The National Foundation spent $7.5 million in donations — $66.3 million in today’s dollars — to initiate, organize and run the vaccine trial, with little participation from the federal government.

“That’s what makes it the greatest public health experiment in history,” said David Oshinsky, who wrote the Pulitzer Prize-winning book Polio: An American Story. “It’s not just the success of the trials. It’s the incredible organization involved, with tens of thousands of mothers and families coming together to save their children. And it was all done privately. That’s what makes this so incredible.”

There was enormous pressure to get the field trial under way in advance of the 1954 polio season. Polio epidemics took place during the summer, with the number of cases rising through June and July and peaking in August.

“We realized we wanted to get it accomplished in 1954, early enough that it could possibly have an impact on that year’s polio season,” said David Rose, archivist for the March of Dimes.

A grass-roots movement without precedent

The National Foundation for Infantile Paralysis already had a nationwide network of health officials, medical professionals, elementary educators and volunteers in place to help respond to polio outbreaks. These were the same people who would form the workforce needed for the clinical trial. In addition, the foundation’s annual “Mother’s March” raised millions in dimes and dollars each year, which was used for polio research and aid to communities enduring polio epidemics.

Some of that money had funded Dr. Jonas Salk’s creation in 1952 of an experimental “killed-virus” polio vaccine, and his subsequent experiments that proved the vaccine’s safety in humans.

Basil O’Connor and the National Foundation’s scientific advisors had taken a keen interest in Salk’s vaccine, especially when his early experiments suggested that it increased the level of polio antibodies in a person’s blood without any ill effects. So plans were made for the national trial.

O’Connor announced in November 1953 that the field trial would begin the following spring, and would be based on an “observed-control” design. That meant one group of children would receive the vaccine, and another group of kids in the same age range would be observed but not injected with either the vaccine or a placebo.

There were, of course, major concerns. Some questioned whether the National Foundation could perform an impartial evaluation of a vaccine that it had had a hand in creating. They also expressed doubts about the “observed control” design of the trial.

The problem with the “observed-control” approach was that middle- and upper-class neighborhoods were more likely to suffer a polio outbreak than poorer areas. The reason: better sanitation, which meant less early exposure to the virus and therefore less naturally acquired immunity, said Dr. Peter Salk, Jonas Salk’s son and president of the Jonas Salk Legacy Foundation.

“The concern was that the children who would end up receiving the real vaccine would be from a different social cut from those who would serve as observed controls,” Salk said. “It was the wealthier neighborhoods that had more polio. If you took kids from the wealthier areas, they would have a higher risk of polio, and those kids would be expected to have a higher incidence than controls.”

To counter potential charges of scientific bias, the National Foundation turned the polio vaccine field trial over to Jonas Salk’s mentor, Dr. Thomas Francis, Jr., a virologist at the University of Michigan who had worked with Salk years before on an influenza vaccine.

Francis established the Poliomyelitis Vaccine Evaluation Center at the University of Michigan, which would guide the trial and independently analyze the results.

Soon after taking charge, Francis announced that the trial would be conducted using two separate “arms.” One arm would follow the “observed-control” design originally proposed by the National Foundation. The second arm would utilize a “placebo-control” design, with half the children getting the vaccine and the other half a placebo.

Salk himself, who had only a supporting role in the massive undertaking, initially resisted the idea of a “placebo-control” trial, arguing that doctors shouldn’t be giving kids something that deliberately would not protect them against polio, his son recalled.

“Very fortunately, my father ended up yielding to the forces at work, which was that the only way it would be possible to convince anyone and to understand the effectiveness of the vaccine would be to use a placebo-controlled design,” said Peter Salk.

Legions of proud ‘Polio Pioneers’

Between April 26 and July 10, 1954, volunteers distributed Salk’s series of three polio shots. In all, more than 443,000 children received at least one polio inoculation, while more than 210,000 received a placebo, according to the March of Dimes.

“There were three shots and it was a double-blind study,” Oshinsky said. “Neither the child nor the caregiver knew who was receiving the vaccine or a placebo, so the paperwork was enormous.”

All the kids in the trials became known as the “Polio Pioneers,” and each received what would become a much-treasured Polio Pioneer metal pin and certificate of membership signed by O’Connor himself.

Bonnie Yarry of Maitland, Fla., still had her Polio Pioneer pin and certificate in 2005 when she wrote a personal remembrance for the non-profit group Post-Polio Health International.

Calling herself a “tiny peg in Dr. Salk’s success story,” Yarry recalled how her New York City second grade class at monthly intervals “traipsed down to P.S. 148’s makeshift infirmary, a kindergarten classroom filled with New York Health Department doctors and nurses prepared to inoculate us.”

“With butterflies in my stomach, I stuck out my arm, never looked at the needle, waited for the prick and then the pain,” Yarry wrote. “I heard others cry, but I didn’t.”

The Salk vaccine trial also served as one of the earliest and largest examples of informed consent, the process by which researchers get permission to experiment on human subjects, Oshinsky said.

“Parents actually signed a piece of paper saying, ‘I give my consent to have my child participate in this experiment,’ ” he said.

Researchers spent the rest of 1954 following the health of all the children, and taking blood samples from 40,000 kids in the study to examine their antibody response.

Through three months of winter and the early spring of 1955, the researchers analyzed and evaluated the data gathered on inoculation, blood samples, and resulting cases of polio. Much of the work was done by hand, although some computations were performed using punch cards that were fed into a primitive computer the size of a room, Oshinsky said.
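
The headline number such an analysis produces is vaccine efficacy, the relative reduction in the polio attack rate among vaccinated children compared with the placebo group. The sketch below shows that arithmetic using the overall group sizes reported above for simplicity (the real evaluation analysed the placebo-controlled and observed-control arms separately); the case counts in it are purely hypothetical, chosen only to land in the announced 80 to 90 percent range, and are not the trial's actual results.

```python
# How efficacy falls out of a placebo-controlled trial. Group sizes are the ones
# reported above; the case counts are hypothetical placeholders, NOT the actual
# 1954 results.

def vaccine_efficacy(cases_vaccinated, n_vaccinated, cases_placebo, n_placebo):
    attack_rate_vaccinated = cases_vaccinated / n_vaccinated
    attack_rate_placebo = cases_placebo / n_placebo
    return 1.0 - attack_rate_vaccinated / attack_rate_placebo

n_vaccinated, n_placebo = 443_000, 210_000   # children in each arm (from the article)
cases_vaccinated, cases_placebo = 60, 200    # hypothetical polio cases, for illustration

ve = vaccine_efficacy(cases_vaccinated, n_vaccinated, cases_placebo, n_placebo)
print(f"Estimated efficacy: {ve:.0%}")       # about 86% with these made-up counts
```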

People were on pins and needles waiting for the results of the trial. Even Salk himself knew nothing about how the analysis was proceeding, his son said.

‘An instant hero’

Then, just one year after the trial started, the National Foundation announced the results: The Salk vaccine proved 80 to 90 percent effective in preventing polio.

“The vaccine works. It is safe, effective and potent,” stated the press release issued by the National Foundation on Tuesday, April 12, 1955. It concluded, “There can be no doubt now that children can be inoculated successfully against polio.”

The New York Times blared the news with a banner headline: “SALK POLIO VACCINE PROVES SUCCESS; MILLIONS WILL BE IMMUNIZED SOON; CITY SCHOOLS BEGIN SHOTS APRIL 25.”

“Salk became sort of an instant hero,” said Muhlenberg College’s Wilson. “He appeared on the cover of Time magazine. He really was celebrated. [President Dwight] Eisenhower entertained him at the White House.”

For some children, however, the vaccine came too late. Wilson contracted polio at age 5 in September 1955, months after the vaccine’s success had been announced.

“The vaccine was out and available in the fall of 1955, but it was in short supply at that time in rural Wisconsin,” said Wilson, who lived in Wausau back then. “I was a year short of going to school, and so I didn’t get the vaccine.” Now 64, he has had lifelong health problems due to his childhood polio.

Once Salk’s vaccine became widely available, Oshinsky said, it saved the lives of tens of thousands of children in the United States and Canada.

And by 1961, the rate of polio had dropped by 96 percent in the United States, thanks to the Salk vaccine, according to the March of Dimes.

Salk’s legacy, however, extends far beyond his vaccine. Oshinsky contends that Salk’s true contribution to science was his demonstration that a killed virus vaccine could be as effective as using a live virus. The flu shot people receive every year is a killed virus vaccine, as are modern vaccines that protect against typhoid, cholera and whooping cough, he said.

“Jonas Salk showed that a killed virus vaccine would work and would be damned effective in fighting disease,” Oshinsky said. “This was something that virologists of the day pooh-poohed. And Salk proved them wrong.”

But Salk’s vaccine, still available and the primary polio vaccine for the United States, isn’t as widely used across the globe today as the live virus polio vaccine developed by his rival, Dr. Albert Sabin.

Sabin, a Polish medical researcher who became a naturalized U.S. citizen in 1930, tested the effectiveness of his oral vaccine on at least 100 million people in the USSR and other countries between 1955 and 1961.

His vaccine proved even better at preventing polio, and much easier to deliver.

“You can give it in drops, you can put the drops on sugar cubes,” Wilson said. “You don’t need to have an expert doctor or nurse to give the vaccine. Sabin’s vaccine was the vaccine to bring polio to the edge of eradication.”

However, Sabin’s vaccine doesn’t completely eradicate polio, because a minute number of children given the live virus vaccine will actually contract polio, Oshinsky said.

“When you get the numbers way, way down, you have to come in with the Salk vaccine to finish it off,” Oshinsky noted.

He added, “I don’t think the irony would be lost on Sabin or Salk, two scientific rivals who truly did not like each other. We need both their vaccines to end polio forever. We can’t do it with just one of them.”

Laser sniffs out toxic gases from afar


Scientists have developed a way to sniff out tiny amounts of toxic gases—a whiff of nerve gas, for example, or a hint of a chemical spill—from up to one kilometer away.

The new technology can discriminate one type of gas from another with greater specificity than most remote sensors—even in complex mixtures of similar chemicals—and under normal atmospheric pressure, something that wasn’t thought possible before.

The researchers say the technique could be used to test for radioactive byproducts from nuclear accidents or arms control treaty violations, for example, or for remote monitoring of smokestacks or factories for signs of air pollution or chemical weapons.

“You could imagine setting this up around the perimeter of an area where soldiers are living, as a kind of trip wire for chemical attacks,” said lead author Henry Everitt, an Army scientist and adjunct professor of physics at Duke University.

The technique uses a form of invisible light called terahertz radiation, or T-rays.

Already used to detect tumors and screen airport passengers, T-rays fall between microwaves and infrared radiation on the electromagnetic spectrum.

Zapping a gas molecule with a terahertz beam of just the right energy makes the molecule switch between alternate rotational states, producing a characteristic absorption spectrum “fingerprint,” like the lines of a bar code.

Terahertz sensors have been used for decades to identify trace gases in the dry, low-pressure conditions of interstellar space or in controlled conditions in the lab, where they are capable of unambiguous identification and ultra-sensitive, part-per-trillion detection.

But until now, efforts to use the same technique to detect trace gases under normal atmospheric conditions have failed because the pressure and water vapor in the air smears and weakens the spectral fingerprint.

In a study published in the journal Physical Review Applied, Everitt, Ohio State University physicist Frank De Lucia and colleagues have developed a way around this problem.

Their approach works by blasting a cloud of gas with two beams at once. One is a steady terahertz beam, tuned to the specific rotational transition energy of the molecule they’re looking for.

The second beam comes from a laser, operating in the infrared, which emits light in high-speed pulses.

At the U.S. Army Aviation and Missile Research, Development, and Engineering Center near Huntsville, Alabama, the researchers have installed a one-of-a-kind infrared laser.

Manufactured by a company called STI Optronics, it’s capable of firing dozens of pulses of infrared light a second, each of which is less than a billionth-of-a-second long.

“It’s kind of like whacking a molecule with an infrared sledgehammer,” Everitt said.

Normal atmospheric pressure still blurs the chemical “bar code” produced by the blast of the terahertz beam, but the ultra-short pulses of light from the more powerful infrared laser knock the molecule out of equilibrium, causing the smeared absorption lines to flicker.

“We just have to tune each beam to the wavelengths that match the type of molecule we’re looking for, and if we see a change, we know it has to be that gas and nothing else,” Everitt said.
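
In signal-processing terms, the scheme amounts to watching one terahertz absorption line for a change that appears only when the infrared pump is firing. The sketch below is a minimal stand-in for that logic; the function, thresholds and synthetic readings are assumptions for illustration, not the team's actual processing chain.

```python
# Minimal stand-in for the detection logic described above (not the team's actual
# signal processing): compare terahertz transmission at the tuned absorption line
# with the infrared pump off versus on. A shift well above the measurement noise
# ("flicker") indicates the target molecule is present.

import numpy as np

def gas_detected(thz_pump_off, thz_pump_on, n_sigma=3.0):
    """Each argument is an array of repeated, normalised transmission readings."""
    shift = abs(np.mean(thz_pump_on) - np.mean(thz_pump_off))
    noise = np.sqrt(np.var(thz_pump_off) / len(thz_pump_off)
                    + np.var(thz_pump_on) / len(thz_pump_on))
    return shift > n_sigma * noise

# Synthetic example: with the gas present, pumping shifts the line slightly.
rng = np.random.default_rng(1)
pump_off = rng.normal(1.00, 0.01, size=500)
pump_on = rng.normal(0.97, 0.01, size=500)
print(gas_detected(pump_off, pump_on))    # True
```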

The researchers directed the two beams onto samples of methyl fluoride, methyl chloride and methyl bromide gases in the lab to determine what combination of laser settings would be required to detect trace amounts of these gases under different weather conditions.

“Terahertz waves will only propagate so far before water vapor in the air absorbs them, which means the approach works a lot better on, say, a cold winter day than a hot summer day,” Everitt said.

The researchers say they are able to detect trace gases from up to one kilometer away. But even under ideal conditions, the technology isn’t ready to be deployed in the field just yet.

For one, converting an eight-foot, one-ton laser into something closer in size to a briefcase will take some time.

Having demonstrated that the technique can work, their next step is to figure out how to tune the beams to detect additional gases.

Initially, they plan to focus on toxic industrial chemicals such as ammonia, carbon disulfide, nitric acid and sulfuric acid.

Eventually, the researchers say their technique could also be useful for law enforcement in detecting toxic gases generated by meth labs, and in other situations where detection at the gas’s source isn’t feasible.

“Point sensing at close range is always better than remote sensing if you can do it, but it’s not always possible. These methods let us collect chemical intelligence that tells us what’s going on before we get somewhere,” Everitt said.

Scientists detect brain network that gives humans superior reasoning skills


When it comes to getting out of a tricky situation, we humans have an evolutionary edge over other primates. Take, as a dramatic example, the Apollo 13 voyage in which engineers, against all odds, improvised a chemical filter on a lunar module to prevent carbon dioxide buildup from killing the crew.

UC Berkeley scientists have found mounting brain evidence that helps explain how humans have excelled at “relational reasoning,” a cognitive skill in which we discern patterns and relationships to make sense of seemingly unrelated information, such as solving problems in unfamiliar circumstances.

Their findings, reported in the Dec. 3 issue of the journal Neuron, suggest that subtle shifts in the frontal and parietal lobes of the brain are linked to superior cognition. Among other things, the frontoparietal network plays a key role in analysis, memory retrieval, abstract thinking and problem-solving, and has the fluidity to adapt according to the task at hand.

“This research has led us to take seriously the possibility that tweaks to this network over an evolutionary timescale could help to explain differences in the way that humans and other primates solve problems,” said UC Berkeley neuroscientist Silvia Bunge, the study’s principal investigator. “It’s not just that we humans have language at our disposal. We also have the capacity to compare and integrate several pieces of information in a way that other primates don’t.”

In reviewing dozens of studies – including their own – that use neuroimaging, neuropsychology, developmental cognitive and other investigative methods, Bunge and fellow researchers concluded that anatomical changes in the lateral frontoparietal network over millennia have served to boost human reasoning skills.

“Given the supporting evidence across species, we posit that connections between these frontal and parietal regions have provided the necessary support for our unique ability to reason using abstract relations,” said Michael Vendetti, co-author of the study and a postdoctoral researcher in neuroscience at UC Berkeley.

Relational reasoning is a high-level cognitive process in which we make comparisons and find equivalencies, as one does in algebra, for example. First-order comparisons identify the relationship between two items or activities in the following ways: semantic (hammer is used to hit a nail); numeric (four is greater than two); temporal (we get out of bed before we go to work) or visuospatial (the bird is on top of the house). Second-order or higher-order comparisons take this a step further by equating two or more sets of first-order relations (a chain is to a link as a bouquet is to a flower).
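
The distinction can be made concrete with a toy encoding (a hypothetical illustration, not a model from the study): first-order relations link two items, and a second-order comparison asks whether two such relations are of the same kind.

```python
# Toy encoding of the first-order / second-order distinction described above
# (a hypothetical illustration, not taken from the study).

FIRST_ORDER = {
    ("hammer", "nail"): "used_to_hit",      # semantic relation
    ("four", "two"): "greater_than",        # numeric relation
    ("chain", "link"): "whole_to_part",
    ("bouquet", "flower"): "whole_to_part",
}

def second_order_match(pair_a, pair_b):
    """True when both pairs instantiate the same first-order relation."""
    rel_a, rel_b = FIRST_ORDER.get(pair_a), FIRST_ORDER.get(pair_b)
    return rel_a is not None and rel_a == rel_b

print(second_order_match(("chain", "link"), ("bouquet", "flower")))   # True
print(second_order_match(("chain", "link"), ("hammer", "nail")))      # False
```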

To test their hypothesis that the human gift for relational reasoning can be traced to developmental and evolutionary changes in the brain’s lateral frontoparietal network, the researchers examined studies that track anatomical changes in the developing human brain; compare neural patterns in human and non-human primates, and compare how human and non-human primates tackle various reasoning tasks.

Their exhaustive meta-analysis identified three parts of the brain that play key roles in relational reasoning, including the rostrolateral prefrontal cortex and the inferior parietal lobule, with the rostrolateral region more actively engaged in second-order relational reasoning.

In looking at brain development, they found that “synaptic pruning,” which usually takes place in adolescence when white matter replaces gray matter and signals between neurons speed up, was more evident in the inferior parietal regions of the brain.

Also crucial to their finding was a study led by Oxford University neuroscientist Matthew Rushworth that compared neural patterns in humans and macaque monkeys. While human and non-human primates were found to share similarities in the frontal and parietal brain regions, activity in the human rostrolateral prefrontal cortex differed significantly from that of the macaque monkey’s frontal cortex, the study found.

“We had hypothesized that there could have been changes to this region to support our reasoning ability, so we were really excited when Rushworth and his colleagues came out with these findings,” Vendetti said.

Meanwhile, in the behavioral studies they analyzed, humans were found to use higher-order strategies to guide their judgment while non-human primates relied more heavily on perceptual similarities and were slower at reasoning and problem-solving.

“These results do not necessarily prove that non-human primates are unable to reason using higher-order thinking, but if it is possible to train non-humans to produce human-like performance on tasks associated with higher-order relational thinking, it is certainly not something that comes naturally to them,” the study concluded.

Overall, Bunge said, “The findings allow us to gain insights into human intelligence by examining how we got to where we are by examining our changes across both evolution and development.”

Low-Pressure Epidural Gravity Saline Technique May Reduce Cesarean Complications


In patients undergoing cesarean delivery who are receiving a combined spinal epidural (CSE), a study has found that delivering epidural saline using a low-pressure gravity technique leads to fewer complications than relying solely on an epidural catheter.

Investigators Shaul Cohen, MD, and Antonio Chiricolo, MD, and their colleagues from the Rutgers-Robert Wood Johnson University Hospital, in New Brunswick, N.J., tested whether giving cesarean delivery patients 10 mL of epidural saline via a low-pressure method, after the spinal dose and before the epidural catheter is inserted, would result in fewer epidural blood vessel punctures and paresthesias and better-quality sedation than traditional CSE.

For this prospective, randomized, double-blind study, 229 women having elective cesarean delivery were placed in one of two groups: those receiving epidural saline by gravity and CSE and those receiving only CSE.

The study was presented at the 2014 annual meeting of the Society for Ambulatory Anesthesia (abstract 30).

In CSE, the physician injects spinal solution through the spinal needle, which is threaded through a larger epidural needle. The spinal needle is then withdrawn while the epidural needle remains in place so a catheter can be advanced into the epidural space, a step that carries a risk of epidural blood vessel punctures and paresthesias. In rare cases, catheters can enter the subarachnoid space and lead to a total spinal, with loss of consciousness and respiratory arrest.

“We think the saline helps by lubricating the structures in the epidural space,” said Dr. Chiricolo, thus creating a pathway that allows the catheter to advance.

Patients were randomly assigned to two groups. In the first group of 115 patients, a catheter was inserted immediately after the spinal solution, which was delivered with a Pencan needle (B. Braun Medical) through the epidural needle. In the second group of 114 patients, each patient was given 10 mL of saline by gravity after the spinal solution was injected and before a closed-end catheter (B. Braun Medical) was inserted epidurally.

The investigators used a CSE kit, which contained an Espocan epidural needle (B. Braun Medical) along with a Pencan spinal needle. The Tuohy needle in these kits has a hole at the tip that allows the spinal needle to slide through to pierce the dura.

While lying on their sides, all patients were given injections of 10 mg ropivacaine, a local anesthetic, along with 100 mcg of epinephrine and 25 mcg of fentanyl at roughly the same location in the spine. Additional doses of local anesthesia (5-20 mL of 0.75% ropivacaine—an isobaric solution—with 5 mcg/mL epinephrine and 5 mcg fentanyl) were injected into the epidural catheter if spinal anesthesia was deemed unsatisfactory.

After the procedure, the investigators collected patient data, asked the anesthesiologists how many attempts were required to pass the catheter, and observed whether there was frank blood in the cerebrospinal fluid or catheter.

The data collectors tested patients’ motor abilities using a 5-point Bromage test, with 1 meaning the patient lacked foot movement and 5 meaning the patient could achieve total hip and knee flexion. The highest level of motor block from either leg was recorded and used in later analyses.

Epidural catheters caused blood vessel punctures in 19 patients in the first group, those who received CSE alone, and in six patients in the second group, those given epidural saline and CSE. In the first group, 74 patients had paresthesia, compared with 46 patients in the second group.

After their procedures, the patients were given lidocaine with epinephrine via the epidural catheter, which was connected to a portable Abbott Pain Management Provider to relieve postsurgical pain. Seventeen patients in the first group required additional nitrous oxide or fentanyl for pain relief, whereas three patients in the second group needed medication for additional pain relief.

Physicians were unable to pierce the dura with a spinal needle and failed to achieve spinal blocks in 17 patients in the first group and 23 patients in the second group—a failure rate of 14.8% and 20.2%, respectively.
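
For readers who want to see where those percentages come from, the counts reported above can be tallied directly; the short sketch below just redoes that arithmetic (the group labels are descriptive, not taken from the abstract).

```python
# Recomputing the per-group rates from the counts reported in the article.

groups = {
    "CSE alone (n=115)":    {"n": 115, "vessel_puncture": 19, "paresthesia": 74, "failed_spinal": 17},
    "saline + CSE (n=114)": {"n": 114, "vessel_puncture": 6,  "paresthesia": 46, "failed_spinal": 23},
}

for name, counts in groups.items():
    print(name)
    for outcome in ("vessel_puncture", "paresthesia", "failed_spinal"):
        print(f"  {outcome}: {counts[outcome]}/{counts['n']} = {counts[outcome] / counts['n']:.1%}")

# The failed_spinal lines reproduce the 14.8% and 20.2% failure rates quoted above.
```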

Art Saus, MD, assistant professor of anesthesiology at Louisiana State University Health Center, in Shreveport, commented that he found the failure rate to be “awfully high” for physicians using a CSE kit. “Failure to put the needle into the dura would be very uncommon.”

Dr. Saus felt that although the authors reduced complications in patients undergoing cesarean delivery using the gravity technique, their methodology raised too many questions to be able to determine the study’s value.

A disproportionate number of patients may have been heavy—setting a spinal block in an obese patient is typically more difficult—or perhaps less experienced residents may have been handling the epidurals, Dr. Saus said. Knowing the body weight of each patient, and whether there were equal numbers of heavy and light women in each group, would have made the data stronger, he said.

Whatever the causes, the study authors lost 15% to 20% of their subjects in each group because of this high failure rate, a fact that Dr. Saus felt left the study underpowered.

The patient groups showed no differences in levels of itching, sedation, hypotension, time to pain or overall satisfaction. The time to incision was nearly equal, at 32.7 minutes for the first group and 32.6 minutes for the second group.

In a later phone survey, the investigators found that all of the epidurals were successful for postoperative pain management for two to three days. They found no differences in shortness of breath or muscle weakness between the groups.

The authors believed their results could be easily replicated. Dr. Chiricolo said future studies might evaluate the same patient population using a hyperbaric solution for the spinal block instead of an isobaric solution, to see how results might differ.

System Tracks Drug-Resistant Bacteria in the Body


Positron emission tomography (PET) combined with sorbitol, an ingredient commonly used in sugar-free foods, can be used to detect and monitor gram-negative bacterial infections in real time, report researchers from Johns Hopkins University School of Medicine (Weinstein EA et al. Sci Transl Med. 2014;6[259]:259ra146).


The team converted a commercially available PET imaging tracer into radiolabeled sorbitol to selectively tag and illuminate clusters of Enterobacteriaceae within the body. This bacterial family, which includes Escherichia coli, metabolizes sorbitol and is the most common cause of gram-negative bacterial infections in humans.