Selfies linked to narcissism, addiction and mental illness


The growing trend of taking smartphone selfies has been linked to mental health conditions that centre on an obsession with one’s looks.

According to psychiatrist Dr David Veal: “Two out of three of all the patients who come to see me with Body Dysmorphic Disorder since the rise of camera phones have a compulsion to repeatedly take selfies.

“Cognitive behavioural therapy is used to help a patient to recognise the reasons for his or her compulsive behaviour and then to learn how to moderate it,” he told the Sunday Mirror.

Nineteen-year-old Danny Bowman’s selfie addiction spiralled out of control: he spent up to ten hours a day taking as many as 200 snaps of himself on his iPhone.

The teenager is believed to be the UK’s first selfie addict and has had therapy to treat his technology addiction as well as OCD and Body Dysmorphic Disorder.

Part of his treatment at the Maudsley Hospital in London included taking away his iPhone for intervals of 10 minutes, which increased to 30 minutes and then an hour.

“It was excruciating to begin with but I knew I had to do it if I wanted to go on living,” he told the Sunday Mirror.

Public health officials in the UK have announced that addiction to social media such as Facebook and Twitter is an illness, with more than 100 patients seeking treatment every year.

“Selfies frequently trigger perceptions of self-indulgence or attention-seeking social dependence that raises the damned-if-you-do and damned-if-you-don’t spectre of either narcissism or low self-esteem,” said Pamela Rutledge in Psychology Today.

The addiction to selfies has also alarmed health professionals in Thailand. “To pay close attention to published photos, controlling who sees or who likes or comments on them, hoping to reach the greatest number of likes is a symptom that ‘selfies’ are causing problems,” said Panpimol Wipulakorn, of the Thai Mental Health Department.

The doctor believes such behaviours could generate mental health problems in the future, especially those related to a lack of confidence.

The word “selfie” was named “Word of the Year 2013” by Oxford Dictionaries. It is defined as “a photograph that one has taken of oneself, typically with a smartphone or webcam and uploaded to a social media website”.

Why is Google doing a doodle on Nellie Bly?


Search engine giant Google is celebrating veteran American journalist Nellie Bly’s 151st birthday with a musical doodle.

Elizabeth Jane Cochran, popularly known by her pen name Nellie Bly, was a pioneer in the field of investigative journalism.

She began her career with The Pittsburgh Dispatch. Angered by a “regressive” editorial on women in the newspaper, she wrote a rebuttal piece. Impressed by it, the editor gave her a job at the paper.

“At the time women who worked at newspapers almost always wrote articles on gardening, fashion or society. Nellie Bly eschewed these topics for hard pressing stories on the poor and oppressed,” says the bio on her official website.

She authored “Around The World In Seventy-Two Days,” based on an expedition she took that covered many countries, including England, France, Egypt, Ceylon, Singapore, Hong Kong and Japan.

She was also America’s first female war correspondent, covering World War I from Austria.

Nellie Bly passed away due to pneumonia on January 27, 1922.

“When creating the Doodle, we took inspiration from Karen O’s lyrics and Nellie’s journey around the globe,” said Liat Ben-Rafael, program manager for Google Doodles, in a blog post.

Early diagnosis: Revolutionary study says most cancers can now be predicted years in advance


Following a 13-year study, US scientists have identified a specific pattern of change in the length of a biomarker present in cells, which occurs many years before a patient begins suffering symptoms of cancer.

A joint team from Harvard and Northwestern monitored 792 initially cancer-free people over that period, 135 of whom were eventually diagnosed with various forms of the illness. Throughout those years, the researchers constantly monitored the participants’ telomeres.

Telomeres serve as protective caps on the ends of chromosomes. As people age and their cells replicate more and more times, the telomeres grow shorter, until eventually the cell can no longer divide and simply dies. They can be both an indicator of ageing – a sort of internal body clock – and a cause of it, as cells with worn-down telomeres can malfunction, causing a range of age-related diseases.

 

Those who were eventually diagnosed with cancer saw their telomeres shorten alarmingly many years before the disease actually developed – with some future patients having telomeres typical of a person 15 years older.

Some of this catastrophic shortening could be predicted and diagnosed externally – for example, simply by looking at patients already suffering from inflammation, oxidative stress and other conditions that age their cells at a rapid rate. But in other people, who appeared to have few external risk factors for cancer, measuring this rate could produce a revelatory diagnosis.

But this is not the most surprising part of the study.

Suddenly, three to four years before the diagnosis, the shortening would stop, and stabilize – but this was not good news.

“We found cancer has hijacked the telomere shortening in order to flourish in the body,” said Dr. Lifang Hou, a professor of preventive medicine at Northwestern University Feinberg School of Medicine and the lead author of the study, which has just been published in the journal EBioMedicine.

Normally, a cell with a shortening telomere might be “aged,” but it would at least self-destruct, preventing any cell abnormalities from spreading through the body. But in the future cancer patients, these cells kept multiplying, likely with the help of the enzyme telomerase, eventually contributing to the cancer.

The study authors say that previous “snapshot” studies failed to capture the complex long-term interaction between telomeres and cancer – but that the relationship is clear and consistent.

“Understanding this pattern of telomere growth may mean it can be a predictive biomarker for cancer. Because we saw a strong relationship in the pattern across a wide variety of cancers, with the right testing these procedures could be used to eventually diagnose a wide variety of cancers,” said Hou.

To go beyond merely early testing, scientists now face two separate medical battles.

One is to slow down the telomere shortening, or whatever underlying conditions may cause it. The other, according to Hou, is to develop a potential cancer treatment that would force the afflicted cells to self-destruct instead of making copies, but which would not merely age and kill off swathes of healthy cells.

According to the latest available World Health Organisation statistics, there were over 14 million new cancer diagnoses in 2012, with more than 8 million people dying of the disease.

Zombie bacteria may help heal wounds


A zombie cell of Pseudomonas aeruginosa PA 01. The white glowing spots are silver nanoparticles embedded within the cell (Racheli Wakshlak)

Bacteria killed with silver can destroy living bacterial strains through a novel mechanism dubbed ‘the zombie effect’, a new study has found.

In the study, strains of Pseudomonas aeruginosa were killed with silver nitrate solution, a common antibacterial agent.

The bacterial cadavers were isolated, thoroughly washed and exposed to fresh, live Pseudomonas strains.

Exposure to the silver nitrate solution didn’t simply kill the bacteria; it actually turned them into long-lasting, silver-releasing ‘zombie’ bacterial killers.

“If an antibacterial agent remains chemically active after the killing action, then this is not the end of the story, but only the beginning of it”, says the study’s lead author Dr David Avnir of the Hebrew University of Jerusalem, in Israel.

“In principle, if not washed away, the same amount of agent can kill generation after generation,” says Avnir.

The study, published in the journal Scientific Reports, is the first to report this novel antibacterial mechanism and has potential implications for the way wounds are treated, says Avnir.

“[The new mechanism] offers an explanation to the observation that in many cases the activity of antibacterial agents is prolonged, much beyond what one would expect from the administered dose.”

He says the findings may lead to reduced doses of antibacterial medicines and reduce the toxicity effects in a range of applications that use metallic and non-biodegradable antibacterial agents such as the treatment of wounds and cleaning of circulated water.

Silver lining

A key aspect of antimicrobial agents concerns their long-term effectiveness, which is crucial in preventing bacterial re-colonisation.

In this sense, a major focus of recent research has been the development of an effective and long-term delivery system of antibacterial agents.

Some metals, like copper and silver, have been extensively studied for the antibacterial properties of their cations. The slow release of these positively charged ions has been used to successfully prevent contamination of wounds, biomedical devices and textiles.

Avnir and his team set out to answer the long-standing question of what happens to silver after it kills the bacteria.

“From that question, and using some basic rules of chemistry, it became obvious to us that the silver which is contained in the dead bacteria, must be available to be released from the dead bacteria, and kill a new population of viable bacteria; it worked … beyond our expectations,” he says.

Now Avnir seeks to apply his findings to other antibacterial agents and microorganisms.

“We intend to test how many cycles a single dose can kill by forming generations of zombies,” he says.

Yale scientists use gene editing to correct mutation in cystic fibrosis


Left to right, cystic fibrosis cells treated with gene-correcting PNA/DNA show increasing levels of uptake, or use to correct the mutation. (Images by Rachel Fields)

Yale researchers successfully corrected the most common mutation in the gene that causes cystic fibrosis, a lethal genetic disorder.

The study was published April 27 in Nature Communications.

Cystic fibrosis is an inherited, life-threatening disorder that damages the lungs and digestive system. It is most commonly caused by a mutation, known as F508del, in the cystic fibrosis gene. The disorder has no cure, and treatment typically consists of symptom management. Previous attempts to treat the disease through gene therapy have been unsuccessful.

To correct the mutation, a multidisciplinary team of Yale researchers developed a novel approach. Led by Dr. Peter Glazer, chair of therapeutic radiology, Mark Saltzman, chair of biomedical engineering, and Dr. Marie Egan, professor of pediatrics and of cellular and molecular physiology, the collaborative team used synthetic molecules similar to DNA — called peptide nucleic acids, or PNAs — as well as donor DNA, to edit the genetic defect.

“What the PNA does is clamp to the DNA close to the mutation, triggering DNA repair and recombination pathways in cells,” Egan explained.

The researchers also developed a method of delivering the PNA/DNA via microscopic nanoparticles. These tiny particles, which are billionths of a meter in diameter, are specifically designed to penetrate targeted cells.

In both human airway cells and mouse nasal cells, the researchers observed corrections in the targeted genes. “The percentage of cells in humans and in mice that we were able to edit was higher than has been previously reported in gene editing technology,” said Egan. They also observed that the therapy had minimal off target, or unintended, effects on treated cells.

While the study findings are significant, much more research is needed to refine the genetic engineering strategy, said Egan. “This is step one in a long process. The technology could be used as a way to fix the basic genetic defect in cystic fibrosis.”

The 10 Biggest Dangers Posed By Future Technology


It’s not easy predicting the future of technology. In the fifties it seemed a pretty much foregone conclusion that by 2015 we would all be commuting to the moon in our flying cars.

Yet here we are, still making a living on earth and still driving around on four wheels.

Today, predicting the direction in which technology is heading is a lot easier than it was 65 years ago.

Unfortunately, these advancements don’t always have the potential to benefit mankind; sometimes the opposite is true.

Is our way of life, or indeed all of humanity, at risk from these dangers?

Or are these predictions just as outlandish as those made in 1950?

The Total Loss Of Privacy

There are plenty of people who would argue that this isn’t a problem posed by future technology, but one which is already here. After the recent revelations that Samsung’s range of new smart TVs may be recording users’ private conversations and sharing the details with third parties, the fear that our home appliances may be spying on us is becoming less sci-fi paranoia and more a real concern.

So where will this end? Will the rush to embrace the digital era mean us waving goodbye to our privacy forever? The worry is that a day may come where every single aspect of our lives is monitored and recorded by governments who want to know what we’re up to, conglomerates investigating our spending habits, or even banks making sure we aren’t living above our means.

The often repeated line from those wishing to take our privacy away from us is that it is for our own protection. That if we’re so concerned about our privacy, then we must have something to hide; so stop closing the bathroom door when you go to the toilet, it makes you look suspicious.

A Permanent Digital Connection To The Workplace

Those of a certain age may remember a time when standard working hours consisted of Monday through Friday, 9 to 5. When you got home, work was mostly forgotten about until the next day or after the weekend; it’s now pretty rare to find any occupation where this is still the case. As society as a whole becomes even more connected, we can expect work to intrude ever more into our home and family lives.

How would this be a danger? For a start, there’s the possibility of companies ruling every aspect of an employee’s life, knowing where they are and what they’re doing at any given time. There’s also the potential for a workaholic lifestyle to stifle human creativity. Even today most Americans spend an extra day a week working after hours; this is only likely to increase in the future, leading to a multitude of health and social problems (death from overworking isn’t uncommon).

All Data Becoming Digital

There are people alive today who have never seen a VHS tape; all they’ve ever known is the digital era of data storage. Even what were once the most popular ways of recording information, on paper and in printed photographs, are becoming less common, thanks to the speed and simplicity their digital alternatives offer.

What possible dangers could arise from continuing this trend of data storage? Well, there’s the risk of increasingly advanced technology becoming incompatible with today’s hardware. This is something we’re seeing now with data that was stored on cassettes and floppy disks in the eighties. There’s also the danger of all the hard drives being wiped – it can be a lot easier to delete digital data than to destroy the physical version. And what if, far into the future, kids are taught to type instead of write? Will pencils and pens disappear completely? Could the next da Vinci’s tool be a mouse instead of a paintbrush?

Having A Totally Machine Based Workforce

Advancements in technology have always posed the risk of creating machines that do the jobs of humans. Manufacturing jobs in particular are among the most at-risk areas.

With new advances in the field of robotics being made every day, it has been estimated that nearly half of all US jobs will be vulnerable to computerization within the next 20 years.

Employment areas which were once considered ‘human only’ are now being taken over by machines. Twenty years ago a self-checkout machine was thought impossible; now there are almost half a million worldwide, enabling one member of staff to run up to six checkout lanes at a time.

With AI becoming ever more advanced, there may come a time when computers are able to do jobs which require human decision making. With self-driving cars also just around the corner, will the sole job for humans in the future be the repair and maintenance of the machines which took their jobs? At least, that is, until a device is invented that can do that as well.

The Death Of Human Interaction

Here is another future danger which many claim is already a problem today. Has the advent of social media made society anything but social? Would you prefer to purchase something online or via a machine rather than talk to a fellow human being? Is texting always better than speaking over the phone? The accusation that technology is destroying the art of conversation may hold some truth, but what of the future? How will the next generation of digital-age children interact with others?

If we do stop all non-technological social interaction, it could result in some unforeseen problems. There are even biological effects that arise from a lack of human interaction. An online-only world may result in the death of the offline one.

The Over-Reliance On Technology

While technology undoubtedly has the ability to advance us as humans, becoming over-reliant on it may actually reduce our intelligence. There are numerous examples today of drivers blindly following GPS instructions into rivers and ditches, people who cannot spell without the use of a word processor, and those who cannot perform simple math problems without the use of a calculator.

So what will the future give us? Top scientist Professor Stephen Hawking has said, “The development of full artificial intelligence could spell the end of the human race.” Part of this risk could come from handing more military control over to artificial intelligence, in theory reducing the chance of human error. While it’s highly unlikely full military decision making would be handed over to an AI, areas such as targeting and drone control could be put in the hands of computers.

Eventually the human race may lose the skills it has gained over the years, with almost every aspect of our lives totally reliant on technology. The problem will come if that technology ever fails, or turns against us.

The Increase In Technology-Related Illnesses

There have been few issues as controversial as the link between human illness – specifically cancer – and modern technology. While the overwhelming amount of evidence suggests that the use of cell phones and wi-fi does not increase the risk of cancer, there have been some studies suggesting the opposite is true.

While future versions of wi-fi and cellphones may pose only a minimal threat to users’ health, it is the environmental dangers which may prove the biggest risk to humans: the technotrash produced by constantly renewing our gadgets, the depletion of natural resources as demand increases and prices drop, and the pollution that comes from the manufacture and use of these items. Unless more effort is put into developing ‘green’ tech, this could become a large, if initially unseen, danger in the future.

The Gray Goo Scenario

Nanotechnology is an area of science and engineering that involves the study and manipulation of particles 1-100 nanometers in size. It has the potential to bring huge benefits in fields such as science, engineering, computing and, especially, medicine.

The Gray Goo scenario is the possibility that tiny, self-replicating nanomachines may get out of control and start converting all organic matter on Earth into more nanobots, leaving behind a gray goo and a lifeless planet.

While this may seem to some like nothing more than a science fiction fantasy, the scenario has been taken seriously enough for a top nanotechnology researcher to suggest some public policy recommendations to prevent this from becoming a horrific reality.

The Singularity

The singularity is the term used to describe the hypothesised moment when technology becomes so advanced that it radically changes civilization as we know it. This could come from an artificial intelligence reaching a level of super intelligence we cannot imagine, from human biology and technology becoming so intertwined that we become almost part machine, or even from our gaining the ability to upload our consciousness so that we essentially live forever.

The danger of the singularity comes from the risk that we may stop being human altogether. Could there come a point in the far-flung future where technology really does advance to such a point that we cannot imagine the consequences? Or is it just something in the fevered dreams of technophobes?

Artificial Intelligence

Professor Stephen Hawking, Bill Gates, Elon Musk, Clive Sinclair – many top minds are now warning of the future dangers which uncontrolled advances in AI could bring, with Musk claiming it could be “potentially more dangerous than nukes”.

Part of the reason people don’t take the threat from AI seriously is the number of times it has been showcased in fiction. Movies such as Terminator have made this threat appear as worrisome as aliens invading.

Yet every year huge advancements in the field are made, and without careful controls in place, the chance of creating a super-intelligent AI will increase as time progresses. If this happens, not only may it consider humans a lower form of life and therefore expendable, but it may even create more AIs, making the human race obsolete.

Adjustable gastric band feasible for treatment of type 2 diabetes, obesity


Patients with type 2 diabetes and obesity achieved similar 1-year benefits on diabetes control, cardiometabolic risk and patient satisfaction when undergoing laparoscopic adjustable gastric band or an intensive diabetes medical and weight management program, according to recent research.

Allison B. Goldfine, MD, head of the section of clinical research at Joslin Diabetes Center and associate professor of medicine at Harvard Medical School, and colleagues conducted a 12-month prospective, randomized clinical trial to evaluate the effects of laparoscopic adjustable gastric band (LAGB; n = 18) and an intensive diabetes medical and weight management program (IMWM; n = 22). Participants were aged 21 to 65 years, had a BMI of 30 to 45 kg/m2, a diagnosis of type 2 diabetes more than 1 year before the study period and an HbA1c of 6.5% or more on anti-hyperglycemic medications.

A similar proportion of participants in the LAGB group achieved HbA1c less than 6.5% and fasting plasma glucose level below 7 mmol/L (33%) at 12 months compared with the IMWM group (23%; P = .457). The LAGB group revealed greater weight loss over 12 months (-13.5 kg) compared with the IMWM group (-8.5 kg). However, changes in fat free mass (P = .617) and decreases in waist circumference (P = .856) were similar between the groups.

Over the 12-month period, reductions in HbA1c were similar between the groups at -1.23% for the LAGB group and -0.95% for the IMWM group. The groups also had similar reductions in fasting glucose (P = .975).

The IMWM group had greater reductions in systolic blood pressure from baseline compared with the LAGB group (P = .038). However, no difference was found for change in diastolic BP, total cholesterol, triglycerides, HDL-cholesterol or LDL-cholesterol at 12 months. More participants in the LAGB group achieved LDL-cholesterol levels below 2.59 mmol/L following LAGB (83%) compared with the IMWM group (45%; P = .019).

Researchers used the United Kingdom Prospective Diabetes Study risk engine to determine changes in cardiometabolic risk scores for coronary heart disease, fatal CHD, stroke and fatal stroke, and the scores were similar between the groups 12 months following LAGB and IMWM.

“We can anticipate long-term health benefits from both of these approaches, but they do require different investment of time and energy by the patient and over the short-term our studies show metabolic outcomes are similar in patients comparable to those in our study,” Goldfine told Endocrine Today. “It is important to have different therapeutic options available for patients with a complex disease like type 2 diabetes.” – by Amber Cox

Does Artificial Food Coloring Contribute to ADHD in Children?


Kraft Macaroni & Cheese—that favorite food of kids, packaged in the nostalgic blue box—will soon be free of yellow dye. Kraft announced Monday that it will remove artificial food coloring, notably Yellow No. 5 and Yellow No. 6 dyes, from its iconic product by January 2016. Instead, the pasta will maintain its bright yellow color by using natural ingredients: paprika, turmeric and annatto (the latter of which is derived from achiote tree seeds).

A gooey bowl of bright yellow Kraft Macaroni & Cheese

The company said it decided to pull the dyes in response to growing consumer pressure for more natural foods. But claims that the dyes may be linked to attention-deficit hyperactivity disorder (ADHD) in children have also risen recently, as they did years ago, putting food dyes under sharp focus once again. On its Web site Kraft says synthetic colors are not harmful, and that its motivation for removing them is that consumers want more foods with no artificial colors.

The U.S. Food and Drug Administration maintains artificial food dyes are safe but some research studies have found the dyes can contribute to hyperactive behavior in children. Food dyes have been controversial since pediatrician Benjamin Feingold published findings in the 1970s that suggested a link between artificial colors and hyperactive behavior, but scientists, consumers and the government have not yet reached a consensus on the extent of this risk or the correct path to address it.

After a 2007 study in the U.K. showed that artificial colors and/or the common preservative sodium benzoate increased hyperactivity in children, the European Union started requiring food labels indicating that a product contains any one of six dyes that had been investigated. The label states the product “may have an adverse effect on activity and attention in children.” The FDA convened a Food Advisory Committee meeting in 2011 to review the existing research, and concluded that there was not sufficient evidence proving that foods with artificial colors caused hyperactivity in the general population. The FDA also decided that further research was needed, and that a label disclosing a possible link between dyes and hyperactivity was unnecessary.

But Joel Nigg, professor of psychiatry, pediatrics and behavioral neuroscience at Oregon Health & Science University, says the studies support the link between dyes and hyperactivity. “The literature here is so sparse that on the one hand you can sympathize with those who want to take a wait-and-see attitude. But on the other hand, when we do look at the literature we have, it’s surprising that we do see effects that seem to be real,” he says. “Do you want to take a chance that these initial studies are wrong and put kids at risk or do you want to take a chance that they’re right? We have to work on the data we have.”

A 2012 meta-analysis of studies co-authored by Nigg concluded that color additives have an effect on hyperactive behavior in children, with a small subset showing more extreme behavior than others. He also concluded that further research was needed because so many of the studies looked at only small numbers of people or could not draw conclusions about the general population. Studies have also shown that removing foods containing artificial dyes via restriction diets can successfully decrease hyperactivity, but Nigg says this is likely because removing processed foods in general is healthier and results in better behavioral outcomes for children with ADHD.

Companies typically add artificial colors to make their products look more appetizing. The chemicals Yellow Nos. 5 and 6 have been in use since the early 1900s, and the FDA approved them for use in 1969 and 1986, respectively. They are two of the nine certified colors that food manufacturers must list on ingredient labels. According to the FDA, Yellow No. 5 can cause an allergic reaction for one out of every 10,000 people. The amount of dye the FDA has deemed acceptable for daily intake, or ADI, is five milligrams per kilogram of body weight per day (mg/kg bw/day) for Yellow No. 5 and 3.75 mg/kg bw/day for Yellow No. 6. An April 2015 study looked at how much dye was in recommended servings of processed foods; it found Kraft Macaroni & Cheese contained 17.6 milligrams of Yellow Nos. 5 or 6 per one-cup serving. Because the chemicals are so similar in color, and thus difficult to tell apart in measurements, the researchers chose the dye that allowed the highest concentration. For a child weighing 30 kilograms (about 65 pounds), this translates to 0.59 mg/kg bw per serving.
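
As a quick sanity check on those figures, the sketch below simply re-derives the per-serving dose from the numbers quoted in this article (the FDA acceptable daily intakes, the 17.6 milligrams per serving reported in the April 2015 study, and the 30-kilogram example child). The servings-per-day values are derived here purely for illustration and do not come from the study itself.

```python
# Back-of-the-envelope check of the per-serving dose figures quoted above.
# All inputs come from the article; nothing here is new data.
adi_mg_per_kg = {"Yellow No. 5": 5.0, "Yellow No. 6": 3.75}  # FDA acceptable daily intake (mg/kg bw/day)

dye_per_serving_mg = 17.6   # mg of Yellow No. 5 or 6 in a one-cup serving (April 2015 study)
child_weight_kg = 30.0      # example child, about 65 pounds

dose_per_serving = dye_per_serving_mg / child_weight_kg
print(f"Dose per serving: {dose_per_serving:.2f} mg/kg bw")  # ~0.59 mg/kg bw, as stated above

# How many servings per day would it take that child to reach each ADI?
for dye, adi in adi_mg_per_kg.items():
    print(f"{dye}: ADI reached after about {adi / dose_per_serving:.1f} servings per day")
```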

Bernard Weiss, professor emeritus of the Department of Environmental Medicine at the University of Rochester Medical Center, who has researched this issue for decades, says he is frustrated that the FDA has not acted on the research showing the connection between artificial dyes and hyperactivity. “All the evidence we have has showed that it has some capacity to harm,” he says. “In Europe that’s enough to get it banned because a manufacturer has to show lack of toxic effects. In this country it’s up to the government to find out whether or not there are harmful effects.” Weiss supports banning artificial colors until companies have evidence that they cause no harm. Like most other scientists in this field, he thinks more research, particularly investigating dyes’ effects on the developing brain, is imperative.

Nigg says the FDA should require manufacturers to include a label saying artificial colors could affect hyperactivity in some children, like the E.U. does. “I think the most important thing we’ve seen in our research is that there’s a subgroup of kids that seems to respond much more to these types of things, and that group is what I worry about.” The only way to protect that subgroup, he says, “is to protect everybody. We don’t have to alarm the public to inform the public.”

Turning to Bacteria to Fight the Effects of Climate Change


Recently the United Nations warned that the world could suffer a 40 percent shortfall in water by 2030 unless countries dramatically cut consumption. Since 70 percent of the world’s fresh water goes to agriculture, this means changing the way people farm. The need is ubiquitous. In California’s Central Valley, farmers drilling for water are now tapping stores 30,000 years old. In Kenya, which is facing the worst drought since 2000, farmers are hand-digging wells to reach the receding water table, even as one in ten Kenyans goes hungry.

But in both regions, a game-changing solution could come from an overlooked resource: billions of beneficial bacteria that teem in the soil near the roots of plants. Such bacteria are found in soil everywhere: from the hard-hit Kenyan coast, where my family grows tomato, peppers and watermelon, to the experimental greenhouses in Alabama where I now work to unearth the secrets of these soil microbiomes.

Indeed, scientists across five continents are digging in to generate evidence of the beneficial associations among microbes and crops such as corn, cotton, tomato and peppers. Plants normally exude a carbon-rich liquid that feeds the microbes. They also exude various chemicals in response to a range of stressors, including insect attacks and water stress. Soil bacteria sense these messages, and secrete chemicals of their own that can activate complex plant defenses.

For example, studies have shown that a combination of beneficial microbes applied directly to seeds is as effective as commercial pesticides in combatting the rice leaf-folder, which wraps itself in and then eats the leaves of young plants. Other studies demonstrate that some soil microbes significantly increase the growth and yield of important crops. In Germany, a 10-year field study showed that beneficial microbes increase maize plant growth and the availability of phosphorus—an essential plant nutrient—in the soil. In Colombia, microbiologists have mass-produced bacteria that colonize cassava plants and increase yield by 20 percent.

For farmers struggling to adapt to climate change, especially small-scale farmers with limited resources, an increase in yield can open fresh opportunities for the simple reason that crop sales generate cash, including money that can be invested in a range of “climate-smart” farming techniques that further conserve water and soil, and sustainably increase production on small plots of land.

Most recently, studies point to a direct role for soil bacteria in shielding crops from drought; improving their growth and ability to absorb nutrients; and enhancing their tolerance of flooding, high temperatures, low temperatures and many other challenges of a changing global climate.

In one study, scientists reported that peppers cultivated in arid desert-like conditions act as “resource islands” attracting bacteria that sustain plant development when water is scarce. Another study identified soil bacteria that prompt plants to temporarily close the pores on their leaves. This not only prevents disease-causing bacteria from entering the plant, but also prevents the escape of moisture, preserving the plant’s water.

I can see this in my research labs, where several ongoing experiments dramatically illustrate the role of soil microbes in protecting against water stress. After water has been withheld for just five days, cotton, corn and tomato plants grown in soil infused with certain bacteria have roots triple the size of those of plants grown in untreated soil. The treated plants stand tall and robust; the untreated wilt and wither. The difference is tremendous.

Although companies such as Novozymes, Monsanto and Bayer CropScience are exploring the potential commercialization of soil bacteria, and several start-up companies are working around the clock to commercialize microbial cocktails, overall, research into this area has barely begun.

The United Nations designated 2015 as the International Year of Soil, and governments, funders and researchers are taking a hard look at the role of healthy soil in achieving food security as population grows and climate change lowers yields of important food crops. But rarely do their initiatives consider the potential of the communities of beneficial bacteria, billions strong, and adapted through millennia to aid plants in their battle for survival.

Of course, the use of soil microbes is just one part of the complex and interlocking changes needed to ensure the sustainability of our natural resources and the productivity of our food systems. But they could provide novel solutions that are of central significance in contemporary plant science as it addresses the challenges of climate change.

We must invest in understanding and harnessing this resource, which works with nature, not against it. As concerns about food security rise along with global temperatures, soil bacteria could become the next key tool for feeding the world, helping farmers conserve water, increase yields and improve nutrition under the changing climate.

Can Astronomical Tidal Forces Trigger Earthquakes?


Recent studies have suggested a link between oceanic tides and some earthquake activity, but proof the gravitational tug of the moon and sun can set off temblors remains elusive.

The motion of the ocean is rocking our world, or at least helping to give it a vigorous shake in some locations when the conditions are right, a team of seismologists says.

The idea that celestial bodies can cause earthquakes is one of the oldest theories in science. In 1687 Newton’s universal law of gravitation revealed ocean tides are caused by the attraction of the sun and moon. And in the 1700s scientists started to wonder if these same distant bodies might also affect geologic faults. This idea flourished in the 19th century. The eminent French seismologist Alexis Perrey spent decades searching for a link between earthquakes and the phases of the moon. Scientific American published an 1855 article on his work. Even Charles Darwin mused on the subject.

At the end of the 20th century the notion the heavens could have a hand in earthquakes seemed to have been discounted. Despite many attempts, researchers had repeatedly failed to find hard evidence tides and temblors were connected. But in the past 20 years some studies have suggested that this long-suspected phenomenon might actually be real. The most recent of these was published in February by researchers at the Aristotle University of Thessaloniki in Greece who analyzed records from more than 17,000 earthquakes that struck the south of that country between 1964 and 2012. Rather than occurring at random intervals, the quakes seemed to be related to oceanic tidal effects, the team found.

The results reveal that the number of earthquakes with magnitudes between 2.5 and 6.0 on the Richter scale was strongly correlated with two of the four gravitational factors that cause Earth’s tides—one due to the gravitational attraction of the sun, called S2, and the other caused by the combined attraction of the sun and moon, or K1. (These, along with two other effects of lunar gravity, O1 and M2, are largely responsible for ocean tides.)

The team found that earthquakes were around 15 percent more likely to strike at times of the day when the pull of the sun (S2) was strongest, compared with when it was at its weakest. For the combined influence of the sun and moon (K1), the opposite trend was observed; when at its weakest, earthquakes were around 16 percent more likely to strike than when K1 was at its strongest. Such a correlation does not prove that tides are triggering temblors in Greece. Nevertheless, the observational results are intriguing and could represent one of the largest tidal effects on earthquakes ever measured.
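
To make the phase analysis concrete, here is a rough, illustrative sketch of how earthquake origin times can be folded onto the S2 cycle, whose period is 12 solar hours. This is not the Thessaloniki team’s actual method; the catalogue times and the choice of phase zero below are made-up placeholders.

```python
import numpy as np

S2_PERIOD_HOURS = 12.0  # solar semidiurnal tidal constituent (half a solar day)

# Hypothetical earthquake origin times, in hours after an arbitrary reference epoch
# chosen, for illustration only, so that phase 0 corresponds to peak S2 forcing.
quake_times_h = np.array([3.1, 7.8, 14.9, 26.2, 38.5, 50.1, 61.7, 74.0, 85.3, 97.9])

# Fold each event onto the S2 cycle and express its position as a phase in degrees.
phases_deg = (quake_times_h % S2_PERIOD_HOURS) / S2_PERIOD_HOURS * 360.0

# Crude comparison: events within a quarter-cycle of the peak versus of the trough.
near_peak = np.sum((phases_deg < 90.0) | (phases_deg >= 270.0))
near_trough = len(phases_deg) - near_peak

print(f"Events near peak S2 forcing:   {near_peak}")
print(f"Events near trough S2 forcing: {near_trough}")
```

A real analysis would compute the actual tidal forcing at each epicentre from a full tidal model and apply a statistical test such as Schuster’s test, rather than this crude two-bin count.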

Whatever is going on beneath Greece, enough evidence has accumulated since the late 1990s so that some scientists have begun to accept the concept of tidal triggers for temblors. “Logically there must be a connection between tides and earthquakes,” says John Vidale, a seismologist at the University of Washington who wasn’t involved in the Greek study. “Tides stress faults and earthquakes [occur] when the stress is sufficient.” The contribution of this effect, however, is probably vanishingly small and only occurs on certain parts of the planet.

One of the places that tides have the strongest influence on seismicity is in deep ocean basins, says Elizabeth Cochran, a geophysicist with the U.S. Geological Survey who also was not involved in the study. “The largest impact of tides on earthquakes is in oceanic regions,” she says, “where the ocean tides can in some locations impart a large force on shallow faults.”

The gravitational pull of the sun and moon is far too weak to trigger an earthquake on its own. When seawater accumulates above submarine faults that are already close to rupture, however, the increased pressure can reduce friction on the fault and thereby hasten a quake’s onset. This idea suggests that as an undersea fault builds toward an earthquake, it may become more sensitive to the small nudges of tidal forces.

Some scientists think this sensitivity could someday be used to forecast dangerous earthquakes. In 2012 a seismologist discovered that in the decade leading up to the devastating Tohoku earthquake, which hit Japan in 2011, smaller quakes started to follow the pattern of the tides. As soon as the larger temblor struck, however, the correlation disappeared. The correlation could derive from the fact that the Tohoku event’s epicenter was located in the Pacific Ocean. The movement of overlying seawater might have magnified the minute tidal forces, enabling them to trigger small tremors as the fault became stressed in the years leading up to the larger quake. The Tohoku study offers promise that large submarine earthquakes could someday be forecast by studying the tides. This approach, however, is unlikely to prove useful for quakes which strike far inland, such as the devastating magnitude 7.8 event that struck Nepal April 25. Such tidal forces are weaker in the absence of water.

Tides may not aid in forecasting all earthquakes globally, although in some regions they could become an important tool for seismologists. After centuries of scientific interest, the effect of the sun and moon on the Earth’s deep geology could prove more than a curiosity—it could save lives.