Yes, Robots Are Coming for Our Jobs—Now What?



M.I.T. economist Erik Brynjolfsson explains how technology has affected economic growth and productivity, and how human workers can adapt.

Fifteen years ago Deep Blue beat Garry Kasparov in a game of chess, marking the beginning of what Massachusetts Institute of Technology economist Erik Brynjolfsson calls the new machine age—an era driven by exponential growth in computing power. Lately, though, people have been feeling uneasy about the machine age. Pundits and experts seem to agree that the robots are definitely taking our jobs. At last week’s TED conference, Brynjolfsson argued that the new machine age is great for economic growth, but we still have to find a way to coexist with the machines. We asked him to expand on a few points.

[An edited transcript of the interview follows.]

You’ve written a lot about what you call the new machine age, which you argue is fundamentally different from previous industrial eras. What’s so different about it?
The first and second industrial revolutions were defined by these general-purpose technologies, the steam engine and, later, electricity. The new machine age is defined by digital technologies, and those have a lot of unusual characteristics. First, you can reproduce things at close to zero marginal cost with perfect quality and almost instant delivery—that’s something you can’t do with atoms. Second, computers are getting better faster than anything else—ever. That’s something we’re not used to dealing with, and it’s happening year over year, relentlessly. Third, you can remix or combine technologies in a way that doesn’t use them up, but that instead allows for even more combinations. That means we’re in no danger of running out of ideas. Put those all together, and I’m very optimistic about the future of economic growth and productivity growth. And the data bear that out.
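To make the recombination arithmetic concrete (a toy illustration of ours, not Brynjolfsson's), consider how fast the number of possible technology combinations grows with the number of building blocks:

```python
from math import comb

# Toy illustration of the "recombination" point: with n building-block
# technologies, the number of distinct k-technology combinations
# (n choose k) grows far faster than n itself.
for n in [10, 100, 1000]:
    pairs = comb(n, 2)      # two-way combinations
    triples = comb(n, 3)    # three-way combinations
    print(f"n={n:>5}: pairs={pairs:>12,}  triples={triples:>15,}")
```

Each new building block multiplies the number of untried combinations, which is the sense in which ideas are not being used up.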

You’ve also said that productivity has become “decoupled” from employment. Can you explain?
Throughout most of modern history, productivity and employment have grown side by side. But starting about 15 years ago they started becoming decoupled. Productivity continued to grow, even accelerate, but employment stagnated and even fell, as did median wages for the people who were still working. This was an important milestone, because most economists, including me, used to be of the mind-set that if you just keep increasing productivity, everything else kind of takes care of itself.

But there’s no economic law that says everyone has to benefit equally from increased productivity. It’s entirely possible that some people benefit a lot more than others or that some people are made worse off. And as it turns out, for the past 10 to 15 years it’s gone that way. The pie has gotten bigger but most of the increase in income has gone to less than 1 percent of the population. Those at the 50th percentile or lower have actually done worse in absolute terms.

There are a lot of causes for this—some of it has to do with offshoring, tax policy and so on—but those are minor players compared with the big story, which is the nature of technology. It’s simultaneously allowing us to grow faster and leading us to a very different allocation of those benefits.

Exactly how is technology shifting the landscape of jobs and wealth—who wins and who loses?
In our book Race against the Machine [written with Andrew McAfee, principal research scientist at the Center for Digital Business in the M.I.T. Sloan School of Management], we describe three sets of winners and losers. The first is skilled versus less skilled workers, as a result of what’s called skill-biased technical change. As technology advances, educated workers tend to benefit more, and workers with less education tend to have their jobs automated. It’s not a perfect correlation, but there is a correlation.

The second is called capital-biased technical change. The share of income going to capital owners has grown relative to the share of income going to labor providers. It makes some intuitive sense that when you replace human workers in a factory with robots, the capital owners will earn a bigger share of the income from that factory. That’s been happening at an accelerating pace in recent years. You may be surprised to hear that for most of the 20th century it did not happen. In fact, it didn’t really happen until about 15 years ago.

The third change might be the most important one: It’s called superstar-biased technical change, or talent-biased technical change. If somebody has talent or luck in creating something that people want, that thing can be replicated with digital technology much more easily than in the past. Think of someone who writes software. You can take that talent or luck and replicate it a million times. And while the person who created it does very, very well, the people who previously did that job are less important or maybe not even necessary. The example that I gave in my TED talk was TurboTax. You’ve got a human tax preparer being replaced by a $39 piece of software. It makes the pie bigger in the sense that you create more value with less effort, but most of that value goes to a very small group of people.

I do find it surprising that what you call capital-biased technical change didn’t take off until 15 years ago. Computers aren’t that new, nor are factory robots. What has changed?
I should be clear: Technology has always been creating and destroying jobs. Automatic threshers replaced 30 percent of the labor force in agriculture in the 19th century. But it happened over a long period of time, and people could find new kinds of work to do. This time it’s happening faster. And technological change is affecting more industries simultaneously. Threshing changed agriculture, but digital technology is affecting pretty much every industry. Finally, these technologies are just qualitatively different. The ability to digitize makes it more valuable to be a creator and less valuable to be someone who carries out instructions or replicates stuff. You don’t need to pay people to handcraft each copy of TurboTax. That’s different from, say, an automobile—at least for the time being.

In a way, we were just kind of lucky during the 19th and 20th centuries that technology evolved in a way that helped people at the middle of the income distribution. There’s no economic law that says technology has to work that way. And as it happens, a set of technologies that don’t work that way are becoming very important right now.

As you know, many people are underwhelmed by the benefits those technologies have brought us. Here’s something you hear a lot: In the 20th century we put people on the moon. In the 21st century we got Facebook. What do you make of that sentiment?
Even if you go by industrial-age metrics—amount of stuff produced—productivity has been doing great. But if anything, I think that’s an underestimate, because both psychologically and in the data we tend to underweight things that are made of bits versus things that are made of atoms. A rocket blasting off looks really big and impressive. Facebook, which you use to connect with your grandmother, maybe doesn’t look as impressive.

Yet in terms of utility, which is what economists care about, you could make the case that Facebook has made more people happier. People seem to be voting with their hours. They’re spending time communicating with friends and family—showing pictures of their babies or dogs. And I’m not in a position to say, no, that’s an unworthy type of happiness. I’m going to go by what people choose to do. In fact, when we do research on where people are spending their time and what they’re doing, we find that there’s about $300 billion of unmeasured value in all these free goods on the Internet—Wikipedia, Facebook, Google, free music—that don’t get counted in GDP statistics.

But what do we do about the decoupling of productivity and employment? If technology creates a class of permanently underemployed people, the social effects could be awful, and a lot of people are very worried about it.
The first step is to diagnose it correctly—to understand why the economy is changing and why people aren’t doing as well as they used to. We also need to think about inventing new kinds of organizations that work in this new culture. There are a few examples, most of them relatively small, like oDesk or Etsy or Foldit. Foldit is a game you can play on the Web. Humans have very good visual cortexes and are able to identify ways that proteins fold that computers can’t. One of the things you need to do in biomedicine is understand how a particular sequence of amino acids codes for a particular protein shape, and it turns out that computers can’t do that—but humans are very good at it. It’s a more practical version of humans and computers playing chess together to beat other computers.

We need to unleash entrepreneurs to find more places where humans have capabilities that machines don’t have, and where the two of them working together can create more value than the machines alone could—what we call racing with the machine. Just as happened a century ago, when people were no longer needed on the farm, people came up with whole new industries. We’re not doing that as well as we could be, and we have to try to jump-start it.

 

Source: scientificamerican.com

Waxing Innovative: Researchers Pump Up Artificial Muscles Using Paraffin.



When Scientific American heard from chemist Ray Baughman a year ago, he and his international team of nanotechnologists had taken artificial-muscle technology to the next level. Their innovation relied on spinning lengths of carbon nanotubes into buff yarns whose twisting and untwisting mimicked natural muscles found in an elephant’s trunk or a squid’s tentacles.

Now the researchers are reporting a new artificial muscle–building technique that makes their carbon nanotube yarns several times faster and more powerful. These qualities could help deliver on the technology’s promise of developing compact, lightweight actuators for robots, exoskeletons and other mechanical devices, although several challenges remain.

The latest breakthrough comes from infusing the carbon nanotube yarns with paraffin wax that expands when heated, enabling the artificial muscles to lift more than 100,000 times their own weight and generate 85 times more mechanical power during contraction than mammalian skeletal muscles of comparable size, according to the researchers, whose latest work is published in the November 16 issue of Science.

The previous-generation artificial muscles were electrochemical and functioned like a supercapacitor. When a charge was injected into the carbon nanotube yarn, ions from a liquid electrolyte diffused into the yarn, causing it to expand in volume and contract in length, says Baughman, director of the University of Texas at Dallas's Alan G. MacDiarmid NanoTech Institute. Unfortunately, using an electrolyte limited the temperature range in which the muscle could function. At colder temperatures the electrolyte would solidify, slowing down the muscle; if too hot, the electrolyte would degrade. It also needed a container, which added weight to the artificial-muscle system.

The wax eliminates the need for an electrolyte, making the artificial muscle lighter, stronger and more responsive. When heat or a light pulse is applied to a wax-impregnated yarn about 200 microns in diameter (roughly twice that of a human hair), the wax melts and expands. In about 25 milliseconds this expansion creates pressure causing the yarn’s individual nanotube threads to twist and the yarn’s length to contract. Any weightlifter will tell you that the success of any muscle—artificial or natural—depends in part on the degree of this contraction. Depending on the force exerted, the Baughman team’s muscle strands could contract by up to 10 percent.

Muscles are also judged by the weight they can lift relative to their size. “Our muscles can lift about 200 times the weight of a similar-size natural muscle,” Baughman says, adding that the wax-infused artificial muscles can also generate 30 times the maximum power of their electrolyte-powered predecessors.

The researchers’ latest artificial muscles move the technology closer to commercialized products such as environmental sensors, aerospace materials and even textiles that can take advantage of nanoscale actuators, University of Cincinnati mechanical engineering professor Mark Schulz wrote in a related Science Perspectives article. This new artificial muscle outperforms existing ones, allowing possible applications such as linear and rotary motors; it also might replace biological muscle tissue if biocompatibility can be established, he adds.

However, Schulz points out—and Baughman is quick to acknowledge—that even this new crop of artificial muscles faces many challenges before they can be a practical alternative to mini–electric motors in many of the products we buy. Despite their improvements, the latest artificial muscles are for the most part inefficient and limited in the combinations of force, motion and speed they can generate, according to Schulz.

Indeed, these new artificial muscles operate at about 1 percent efficiency, a number Baughman and his colleagues want to increase at least 10-fold. An option for improving efficiency is to use a chemical fuel rather than electricity to power the muscles. “One way to compensate for a lack of efficiency is to use fuel like methanol instead of a battery,” he says. “You could store more than 20 percent more energy in a fuel like methanol than you can in a battery.”
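A rough sketch of the trade-off Baughman describes, using round specific-energy figures we supply for illustration (roughly 20 MJ/kg for methanol, under 1 MJ/kg for a lithium-ion battery), shows how an energy-dense fuel could offset a low conversion efficiency:

```python
# Illustrative figures, not from the article: specific energy in MJ/kg.
METHANOL_MJ_PER_KG = 20.0    # chemical energy of methanol fuel (approx.)
BATTERY_MJ_PER_KG = 0.7      # typical lithium-ion battery (approx.)

def delivered_mj_per_kg(stored_mj_per_kg: float, efficiency: float) -> float:
    """Mechanical energy per kilogram of energy store after conversion losses."""
    return stored_mj_per_kg * efficiency

# Even at the muscles' current ~1% efficiency, methanol's energy density
# puts delivered work per kilogram in the same ballpark as a battery
# driving a far more efficient electric actuator.
print(delivered_mj_per_kg(METHANOL_MJ_PER_KG, 0.01))  # ~0.20 MJ/kg
print(delivered_mj_per_kg(BATTERY_MJ_PER_KG, 0.30))   # ~0.21 MJ/kg
```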

Another challenge is that the artificial muscles must be heated and cooled to contract and release, respectively. Short lengths of yarn can cool on their own in a matter of seconds, but longer pieces would need to be actively cooled using water or air, otherwise the muscle would not relax. “Or you’d need [to use a] material that doesn’t require thermal actuation,” Baughman says. “If you keep making the [carbon nanotube] yarn longer and longer, your cooling rate increases.”

This issue of scale poses perhaps the greatest challenge. A one-millimeter length of artificial muscle can lift about 50 grams, according to Baughman. That means lifting several tons would require a greater length of carbon nanotube yarn than is practical. “We’d like our artificial muscles to be used in exoskeletons that help workers or soldiers lift objects weighing tons,” he says. But the researchers are still working out ways to pack enough yarn to perform such tasks into the length of an exoskeletal limb.
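A back-of-envelope sketch makes the packing problem vivid. It uses only the 50-gram figure quoted above plus our own naive assumption that lifting capacity scales with the number of yarn strands acting in parallel:

```python
# Back-of-envelope estimate: one strand lifts ~50 g, per the article.
# Assuming (naively) that capacity scales with strands in parallel,
# estimate how many strands an exoskeleton needs for heavy loads.
CAPACITY_PER_STRAND_KG = 0.050   # ~50 g per strand

for load_tonnes in [0.1, 1, 5]:
    load_kg = load_tonnes * 1000
    strands = load_kg / CAPACITY_PER_STRAND_KG
    print(f"{load_tonnes:>4} t -> ~{strands:,.0f} parallel strands")
```

Lifting even a single metric ton on these assumptions takes some 20,000 parallel strands, which is why packing enough yarn into a limb-size actuator remains the sticking point.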

Carbon nanotube artificial muscles are more likely to first appear in products requiring only short lengths. Baughman envisions artificial muscles used in a catheter for minimally invasive surgery, “where you want to have lots of functionality on the end of the catheter to do surgical manipulations.” Another application with flex appeal—”smart” fabrics that can automatically react to their environments, becoming more or less porous when they detect heat or harmful chemicals in the air.

 

Source: scientificamerican.com

 

Did Asteroid Impacts Spark Life’s “Left-Handed” Molecules?



The mysterious bias of life on Earth toward molecules that skew one way and not the other could be due to how light shines in star- and planet-forming clouds, researchers say

If correct, the researchers’ findings suggest the molecules of life on Earth may initially have come from elsewhere in the cosmos.

The organic molecules that form the basis of life on Earth are often chiral, meaning they come in two forms that are mirror images, much as right and left hands appear identical but are reversed versions of each other.

Strangely, the amino acids that make up proteins on Earth are virtually all “left-handed,” even though it should be as easy to make the right-handed kind. Solving the mystery of why life came to prefer one kind of handedness over the other could shed light on the origins of life, scientists say.

One possible cause for this bias might be the light shining on these molecules in space. Light waves can twist like corkscrews either one way or the other, a property known as circular polarization. Light circularly polarized one way can preferentially destroy molecules with one kind of handedness, while light circularly polarized the other way might suppress the other handedness.

To see how much light is circularly polarized in outer space, astronomers used a telescope at the South African Astronomical Observatory to detect how light is polarized over a wide field of view, one spanning about a quarter of the moon’s apparent diameter.

The scientists focused on the Cat’s Paw Nebula about 5,500 light-years from Earth in the constellation Scorpius. The nebula is one of the most active star-forming regions known in the Milky Way.

The researchers discovered that as much as 22 percent of light from the nebula was circularly polarized. This is the greatest degree of circular polarization yet seen in a star-forming region, and suggests circular polarization may be a universal feature of star- and planet-forming regions.
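For the quantitatively minded: the degree of circular polarization is conventionally defined through the Stokes parameters as |V|/I, the circular component over the total intensity. The values in the sketch below are invented for illustration; only the 22 percent peak comes from the study.

```python
import numpy as np

# Illustrative only: degree of circular polarization is |V| / I, where
# I is total intensity and V the circular Stokes component. These
# "regions" are made-up numbers, not the study's measurements.
I = np.array([1.00, 0.80, 0.65])   # hypothetical total intensities
V = np.array([0.22, 0.05, -0.10])  # hypothetical circular components

p_circ = np.abs(V) / I
for i, p in enumerate(p_circ):
    print(f"region {i}: {100 * p:.1f}% circularly polarized")
```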

“Our findings show circular polarization is common in space,” study lead author Jungmi Kwon, an astronomer at the National Astronomical Observatory of Japan, told SPACE.com.

Computer simulations the astronomers developed suggest this large amount of circular polarization is due to grains of dust around stars. Magnetic fields in the nebula align these dust grains, and light that scatters off these aligned grains ends up circularly polarized — dust on one side of the magnetic field gives light scattering off it one kind of circular polarization, while grains on the other side have the opposite effect.

“Until now, the origin of circular polarization was unclear and circular polarization was basically considered a rare feature,” Kwon said.

Chemical reactions inside nebulas can manufacture amino acids. These molecules end up possessing a certain handedness depending on the light shining on them. The researchers suggest left-handed amino acids may then have rained down on Earth by piggybacking on space rocks, resulting in one handedness dominating the other.

“Left-handed amino acids produced by circular polarization in space can be delivered by meteorites,” Kwon said.

Source: Scientific American

 

Meteor Shower and Eclipses of Sun and Moon to Grace May Sky.



A meteor shower and a cosmic “ring of fire” will dominate the night sky this month

The annual Eta Aquarid meteor shower and an annular solar eclipse both occur at the beginning of May, while a less impressive lunar eclipse is set to take place at the end of the month. And that isn’t all: constellations, planets and other celestial bodies can also be spotted in various parts of the sky throughout the month, weather permitting.

“As night falls, look for Jupiter shining in the west,” Nancy Calo from the Space Telescope Science Institute said during her narration of a video highlighting May stargazing. “The best views of Jupiter will come early in the month, when it is highest in the sky. In the closing days of May, Mercury and brilliant Venus will join Jupiter low in the west. A telescope will provide better views of the planets.”

A telescope isn’t required to see the Eta Aquarid meteor shower this weekend, however. The shower is expected to peak after midnight on May 5, and viewers can expect to see about 10 meteors per hour coming from the eastern part of the sky, Calo said. The Eta Aquarids are one of two meteor showers created by dusty debris left over from the famed Halley’s Comet. The Orionid meteor shower in October is the other.

On May 10, a few days after the meteor shower’s peak, stargazers in certain parts of the world can see an annular solar eclipse that should make the sun look like a shining ring in the sky. Annular solar eclipses are also known as “ring of fire” eclipses because they occur when the moon does not completely block the sun, leaving a bright ring visible around the moon.

Weather permitting, some parts of the world will get a partial view of the eclipse — created when the moon passes between the sun and the Earth, obscuring the star — but Australia and the southern part of the Pacific Ocean will get the best showing. It will not be visible from North America.
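The geometry behind an annular eclipse is easy to check with rounded textbook values (our numbers, not the article's): near apogee the moon subtends a slightly smaller angle than the sun, which is exactly what leaves the ring visible.

```python
from math import atan, degrees

# Sketch of why an eclipse can be annular: when the Moon is near apogee
# its angular diameter is smaller than the Sun's, so a ring of the solar
# disk stays visible. Radii and distances are rounded textbook values.
def angular_diameter_deg(radius_km: float, distance_km: float) -> float:
    """Full angular diameter of a sphere seen from a given distance."""
    return degrees(2 * atan(radius_km / distance_km))

sun = angular_diameter_deg(696_000, 149_600_000)   # Sun from Earth
moon = angular_diameter_deg(1_737, 405_500)        # Moon near apogee

print(f"Sun:  {sun:.3f} deg")    # ~0.533 deg
print(f"Moon: {moon:.3f} deg")   # ~0.491 deg -> smaller, hence annular
```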

A minor lunar eclipse will take place on May 24, but viewers might not be able to tell.

Although the eclipse will be visible to stargazers in South America, western Europe and western Africa, the full moon will only pass through the Earth’s penumbral shadow. This part of the planet’s shadow still receives some direct sunlight, making it difficult to see a difference between regular moonlight and the dimmer light of the eclipse.

May’s lunar eclipse will most likely be less impressive than the partial lunar eclipse in April. During that eclipse, the moon dipped into the umbra — the part of Earth’s shadow that doesn’t receive any direct sunlight — which obscured part of the rocky face of the satellite in darkness.

A smattering of deep sky objects will also be visible throughout the month.

Jupiter can be seen in the western part of the sky in early May and Saturn appears in the southeastern sky after sunset.

“Looking toward the south, we’ve turned away from the crowded center of our Milky Way Galaxy,” Calo said. “Thus we see farther into the universe.”

By using a pair of binoculars, stargazers can peer into the Virgo cluster of galaxies, spot the Whirlpool Galaxy and see the M64 spiral galaxy, Calo added.

Source: Scientific American

Atomic Toolbox: Manufacturing at the Nanoscale.



Scientists are building the next generation of atomic-scale devices

For decades industrial manufacturing has meant long assembly lines. This is how scores of workers—human or robot—have built really big things, such as automobiles and aircraft, or have brought to life smaller, more complex items, such as pharmaceuticals, computers and smartphones.

Now envision a future in which the assembly of digital processors and memory, energy generators, artificial tissue and medical devices takes place on a scale too small to be seen by the naked eye and under a new set of rules. The next few years begin an important era that will take us from manufactured products that simply contain nanotechnology—sunscreen with UV-blocking bits of titanium dioxide, as well as particles for enhancing medical imaging, to name two—to products that are nanotechnology.

Source: Scientific American


Astronomers Discover New Neighbor Galaxy to the Milky Way.



In recent years astronomers have extended their view almost to the very edge of the observable universe. With the venerable Hubble Space Telescope researchers have spotted a handful of galaxies so far away that we see them as they appeared just 400 million years or so after the big bang.

But even as astronomers peer ever deeper into the universe to explore the cosmic frontier, others are finding new realms to explore in our own backyard. Such is the case with Leo P, a dwarf galaxy that astronomers have just discovered in the Milky Way’s vicinity. At a distance of some five million or six million light-years from the Milky Way, Leo P is not quite a next-door neighbor, but on the vast scales of the universe it counts as a neighbor nonetheless.

Intriguingly, Leo P seems to have kept to itself, rarely if ever interacting with other galaxies. So the discovery, detailed in a series of studies in The Astronomical Journal, offers astronomers a rare glimpse at a cosmic object unsullied by disruptive galactic encounters. It also suggests the presence of other small galaxies that await discovery in our corner of the cosmos.

Leo P is one of just a few dozen local galaxies that do not swarm around the Milky Way or its massive sibling Andromeda, each of which has been extensively scanned for companion galaxies in recent years. “There has been a massive increase in the number of these nearby galaxies” around the Milky Way and Andromeda, says astronomer Alan McConnachie of the National Research Council Canada’s Herzberg Institute of Astrophysics, who did not contribute to the new research. “There have really been very, very few discoveries of dwarfs that are sort of sitting out in the middle of nowhere.” Those lonely dwarf galaxies, such as Leo P, are hard to spot because they are faint, distant, and could be found anywhere on the sky.

In its cosmic isolation the newfound galaxy appears to have led a relatively serene life, undisturbed by the tugs and twists imparted by the gravitational pull of a larger galaxy. “It is a product of a sedate environment, away from major galaxies,” says Riccardo Giovanelli of Cornell University, one of the astronomers who discovered Leo P. He and his colleagues first spotted it as a cloud of hydrogen gas with the Arecibo Observatory radio telescope in Puerto Rico, then confirmed the discovery with optical telescopes at Kitt Peak National Observatory in Arizona, which identified individual stars within the galaxy.

Compared with the Milky Way, Leo P is a true pip-squeak. Its stars may number in the hundreds of thousands whereas the Milky Way has hundreds of billions. Nevertheless, Leo P is actively making new stars—it contains a number of bright, blue, newly formed stars as well as a region of ionized gas that indicates the presence of a luminous young star. Its large gas reservoir and current star formation are unusual for such a small galaxy—many of its ilk have had their star-making gas stripped away during encounters with bigger galaxies.

By definition, dwarf galaxies are tiny, McConnachie notes. “So they’re very sensitive to the things that are going on around them. They get harassed, they get pulled apart, they get stripped of their gas,” he says. “Chances are, when we look at a galaxy like Leo P, we’re seeing how a dwarf galaxy should look if left to its own devices.” Indeed, the “P” in the galaxy’s name stands for “pristine”; the rest refers to the galaxy’s location in the constellation Leo as viewed from Earth.

Large galaxies such as the Milky Way grow by pulling in and cannibalizing dwarf galaxies that draw too close, so the study of small galaxies can shed light on how the giants of the cosmos came to be. “The small galaxies and the big galaxies have kind of a shared history, if you like,” McConnachie says. “But all the [dwarfs] that we see have sort of been too messed up to tell us much about the intrinsic properties of the small galaxies.”

The discovery of the pristine Leo P could be a bit of a happy accident—its current bout of star formation made the galaxy stand out. “If there hadn’t been some of these bright young blue stars—and they only have lifetimes in the millions of years, not the billions of years—it would have been much harder to pick up this thing,” says astronomer Katherine Rhode of Indiana University Bloomington, who led the optical observations of the galaxy.

Astronomers may soon know whether other, similar objects lurk nearby. In a new study in The Astrophysical Journal, Giovanelli and two colleagues catalogued 59 additional clouds of gas that were spotted in the same sky survey that unearthed Leo P. On further inspection some of those clouds may also prove to be low-mass galaxies that are faint enough to have so far escaped notice. “We have many dozens of these objects now,” Giovanelli says. “We’re going to see which we can pull out of the muck.”

Source: Scientific American

 

Moon Landing Faked!!!—Why People Believe in Conspiracy Theories.


New psychological research helps explain why some see intricate government conspiracies behind events like 9/11 or the Boston bombing

Did NASA fake the moon landing? Is the government hiding Martians in Area 51? Is global warming a hoax? And what about the Boston Marathon bombing…an “inside job” perhaps?

In the book “The Empire of Conspiracy,” Timothy Melley explains that conspiracy theories have traditionally been regarded by many social scientists as “the implausible visions of a lunatic fringe,” often inspired by what the late historian Richard Hofstadter described as “the paranoid style of American politics.” Influenced by this view, many scholars have come to think of conspiracy theories as paranoid and delusional, and for a long time psychologists have had little to contribute other than to affirm the psychopathological nature of conspiracy thinking, given that conspiracist delusions are commonly associated with (schizotypal) paranoia.

Yet such pathological explanations have proven to be widely insufficient because conspiracy theories are not just the implausible visions of a paranoid minority. For example, a national poll released just this month reports that 37 percent of Americans believe that global warming is a hoax, 21 percent think that the U.S. government is covering up evidence of alien existence and 28 percent believe a secret elite power with a globalist agenda is conspiring to rule the world. Only hours after the recent Boston Marathon bombing, numerous conspiracy theories were floated, ranging from a possible “inside job” to YouTube videos claiming that the entire event was a hoax.

So why is it that so many people come to believe in conspiracy theories? They can’t all be paranoid schizophrenics. New studies are providing some eye-opening insights and potential explanations.

For example, while it has been known for some time that people who believe in one conspiracy theory are also likely to believe in other conspiracy theories, we would expect contradictory conspiracy theories to be negatively correlated. Yet this is not what psychologists Michael Wood, Karen Douglas and Robbie Sutton found in a recent study. Instead, the research team, based at the University of Kent in England, found that many participants believed in contradictory conspiracy theories. For example, the belief that Osama bin Laden is still alive was positively correlated with the belief that he was already dead before the military raid took place. This makes little sense, logically: bin Laden cannot be both dead and alive at the same time. An important conclusion that the authors draw from their analysis is that people don’t tend to believe in a conspiracy theory because of its specifics, but rather because of higher-order beliefs that support conspiracy-like thinking more generally. A popular example of such a higher-order belief is a severe “distrust of authority.” The authors go on to suggest that conspiracism is therefore not just about belief in an individual theory, but rather an ideological lens through which we view the world.

A good case in point is Alex Jones’s recent commentary on the Boston bombings. Jones (one of the country’s preeminent conspiracy theorists) reminded his audience that two of the hijacked planes on 9/11 flew out of Boston (relating one conspiracy theory to another) and, moreover, that the Boston Marathon bombing could be a response to the sudden drop in the price of gold or part of a secret government plot to expand the Transportation Security Administration’s reach to sporting events. Others have pointed their fingers at a “mystery man” spotted on a nearby roof shortly after the explosions. While it remains unclear whether credence is given to only some or all of these (note: contradictory) conspiracy theories, there clearly is a larger underlying preference for conspiracy-type explanations more generally.
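A toy simulation (entirely invented data, ours rather than the study's) illustrates the statistical point: if a single higher-order belief drives endorsement of many claims, even mutually contradictory claims will correlate positively across respondents.

```python
import numpy as np

# Hypothetical illustration of the Wood et al. result: if a general
# "distrust of authority" pulls agreement with many conspiracy claims
# upward, even contradictory claims can correlate positively.
rng = np.random.default_rng(0)
n = 500
distrust = rng.normal(size=n)   # latent higher-order belief per respondent

# Ratings of two contradictory claims, each driven by the same latent factor.
bin_laden_alive = distrust + rng.normal(scale=1.0, size=n)
bin_laden_dead_before_raid = distrust + rng.normal(scale=1.0, size=n)

r = np.corrcoef(bin_laden_alive, bin_laden_dead_before_raid)[0, 1]
print(f"correlation between contradictory beliefs: r = {r:.2f}")  # positive
```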

Interestingly, belief in conspiracy theories has recently been linked to the rejection of science. In a paper published in Psychological Science, Stephan Lewandowsky and colleagues investigated the relation between acceptance of science and conspiracist thinking patterns. While the authors’ survey was not representative of the general population, results suggest that (controlling for other important factors) belief in multiple conspiracy theories significantly predicted the rejection of important scientific conclusions, such as climate science or the fact that smoking causes lung cancer. Yet rejection of scientific principles is not the only possible consequence of widespread belief in conspiracy theories. Another recent study indicates that receiving positive information about or even being merely exposed to conspiracy theories can lead people to become disengaged from important political and societal topics. For example, in their study, Daniel Jolley and Karen Douglas clearly show that participants who received information supporting the idea that global warming is a hoax were less willing to engage politically and also less willing to implement individual behavioral changes such as reducing their carbon footprint.

These findings are alarming because they show that conspiracy theories sow public mistrust and undermine democratic debate by diverting attention away from important scientific, political and societal issues. The public should, of course, actively demand truthful and transparent information from its governments, and proposed explanations should be met with a healthy amount of skepticism; yet this is not what conspiracy theories offer. A conspiracy theory is usually defined as an attempt to explain the ultimate cause of an important societal event as part of some sinister plot conjured up by a secret alliance of powerful individuals and organizations. The great philosopher Karl Popper argued that the fallacy of conspiracy theories lies in their tendency to describe every event as “intentional” and “planned,” thereby seriously underestimating the random nature and unintended consequences of many political and social actions. In fact, Popper was describing a cognitive bias that psychologists now commonly refer to as the “fundamental attribution error”: the tendency to overestimate the extent to which the actions of others are intentional rather than the product of (random) situational circumstances.

Since a number of studies have shown that belief in conspiracy theories is associated with feelings of powerlessness, uncertainty and a general lack of agency and control, a likely purpose of this bias is to help people “make sense of the world” by providing simple explanations for complex societal events — restoring a sense of control and predictability. A good example is that of climate change: while the most recent international scientific assessment report (receiving input from more than 2,500 independent scientists from more than 100 countries) concluded with 90 percent certainty that human-induced global warming is occurring, the severe consequences and implications of climate change are often too distressing and overwhelming for people to deal with, both cognitively and emotionally. Resorting to easier explanations that simply discount global warming as a hoax is then of course much more comforting and convenient psychologically. Yet, as Al Gore famously pointed out, the truth is not always convenient.

Source: Scientific American

 

Are Doctors Diagnosing Too Many Kids with ADHD?



 

A German children’s book from 1845 by Heinrich Hoffmann featured “Fidgety Philip,” a boy who was so restless he would writhe and tilt wildly in his chair at the dinner table. Once, using the tablecloth as an anchor, he dragged all the dishes onto the floor. Yet it was not until 1902 that a British pediatrician, George Frederic Still, described what we now recognize as attention-deficit hyperactivity disorder (ADHD). Since Still’s day, the disorder has gone by a host of names, including organic drivenness, hyperkinetic syndrome, attention-deficit disorder and now ADHD.

Despite this lengthy history, the diagnosis and treatment of ADHD in today’s children could hardly be more controversial. On his television show in 2004, Phil McGraw (“Dr. Phil”) opined that ADHD is “so overdiagnosed,” and a survey in 2005 by psychologists Jill Norvilitis of the University at Buffalo, S.U.N.Y., and Ping Fang of Capital Normal University in Beijing revealed that in the U.S., 82 percent of teachers and 68 percent of undergraduates agreed that “ADHD is overdiagnosed today.” According to many critics, such overdiagnosis raises the specter of medicalizing largely normal behavior and relying too heavily on pills rather than skills—such as teaching children better ways of coping with stress.

Yet although data point to at least some overdiagnosis, particularly in boys, the extent of this problem is unclear. In fact, the evidence, with notable exceptions, appears to be stronger for the undertreatment than the overtreatment of ADHD.

Medicalizing Normality

The American Psychiatric Association’s diagnostic manual of the past 19 years, the DSM-IV, outlines three sets of indicators for ADHD: inattention (a child is easily distracted), hyperactivity (he or she may fidget a lot, for example), and impulsivity (the child may blurt out answers too quickly). A child must display at least six of the nine listed symptoms across these categories for at least half a year. In addition, at least some problems must be present before the age of seven and produce impairment in at least two different settings, such as school or home. Studies suggest that about 5 percent of school-age children have ADHD; the disorder is diagnosed in about three times as many boys as girls.
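Schematically, and only as a restatement of the thresholds just listed (not a clinical instrument), the DSM-IV screen amounts to a conjunction of simple cutoffs:

```python
# Sketch of the DSM-IV screening logic described above: at least six of
# the nine listed symptoms for at least six months, some problems before
# age seven, and impairment in at least two settings.
def meets_dsm_iv_adhd_criteria(symptom_count: int,
                               months_present: int,
                               onset_age: int,
                               impaired_settings: int) -> bool:
    return (symptom_count >= 6
            and months_present >= 6
            and onset_age < 7
            and impaired_settings >= 2)

# Example: 7 symptoms for 8 months, first seen at age 5, at school and home.
print(meets_dsm_iv_adhd_criteria(7, 8, 5, 2))  # True
```

In practice, of course, each input is itself a judgment call, which is where the diagnostic controversy described below comes in.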

Many scholars have alleged that ADHD is massively overdiagnosed, reflecting a “medicalization” of largely normative childhood difficulties, such as jitteriness, boredom and impatience. Nevertheless, it makes little sense to refer to the overdiagnosis of ADHD unless there is an objective cutoff score for its presence. Data suggest, however, that a bright dividing line does not exist. In a study published in 2011 psychologists David Marcus, now at Washington State University, and Tammy Barry of the University of Southern Mississippi measured ADHD symptoms in a large sample of third graders. Their analyses demonstrated that ADHD differs in degree, not in kind, from normality.

Yet many well-recognized medical conditions, such as hypertension and type 2 diabetes, are also extremes on a continuum that stretches across the population. Hence, the more relevant question is whether doctors are routinely diagnosing kids with ADHD who do not meet the levels of symptoms specified by the DSM-IV.

Some studies hint that such misdiagnosis does occur, although its magnitude is unclear. In 1993 Albert Cotugno, a practicing psychologist in Massachusetts, reported that only 22 percent of 92 children referred to an ADHD clinic actually met criteria for ADHD following an evaluation, indicating that many children referred for treatment do not have the disorder as formally defined. Nevertheless, these results are not conclusive, because it is unknown how many of the youth received an official diagnosis, and the sample came from only one clinic.

Clearer, but less dramatic, evidence for overdiagnosis comes from a 2012 study in which psychologist Katrin Bruchmüller of the University of Basel and her colleagues found that when given hypothetical vignettes of children who fell short of the DSM-IV diagnosis, about 17 percent of the 1,000 mental health professionals surveyed mistakenly diagnosed the kids with ADHD. These errors were especially frequent for boys, perhaps because boys more often fit clinicians’ stereotypes of ADHD children. (In contrast, some researchers conjecture that ADHD is underdiagnosed in girls, who often have subtler symptoms, such as daydreaming and spaciness.)

Pill Pushers?

Published reports of using stimulants for ADHD date to 1938. But in 1944 chemist Leandro Panizzon, working for Ciba, the predecessor of Novartis, synthesized a stimulant drug that he named in honor of his wife, Marguerite, whose nickname was Rita. Ritalin (methylphenidate) and other stimulants, such as Adderall, Concerta and Vyvanse, are now standard treatments; Strattera, a nonstimulant, is also widely used. About 80 percent of children diagnosed with ADHD display improvements in attention and impulse control while on the drugs but not after their effects wear off. Still, stimulants sometimes have side effects, such as insomnia, mild weight loss and a slight stunting of height. Behavioral treatments, which reward children for remaining seated, maintaining attention or engaging in other appropriate activities, are also effective in many cases.

Many media sources report that stimulants have been widely prescribed for children without ADHD. As Dutch pharmacologist Willemijn Meijer of PHARMO Institute in Utrecht and his colleagues observed in a 2009 review, stimulant prescriptions for children in the U.S. rose from 2.8 to 4.4 percent between 2000 and 2005. Yet most data suggest that ADHD is undertreated, at least if one assumes that children with this diagnosis should receive stimulants. Psychiatrist Peter Jensen, then at Columbia University, noted in a 2000 article that data from the mid-1990s demonstrated that although about three million children in the U.S. met criteria for ADHD, only two million received a stimulant prescription from a doctor.

The perception that stimulants are overprescribed and overused probably has a kernel of truth, however. Data collected in 1999 by psychologist Gretchen LeFever, then at Eastern Virginia Medical School, point to geographical pockets of overprescription. In southern Virginia, 8 to 10 percent of children in the second through fifth grades received stimulant treatment compared with the 5 percent of children in that region who would be expected to meet criteria for ADHD. Moreover, increasing numbers of individuals with few or no attentional problems—such as college students trying to stay awake and alert to study—are using stimulants, according to ongoing studies. Although the long-term harms of such stimulants among students are unclear, they carry a risk of addiction.

A Peek at the Future

The new edition of the diagnostic manual, DSM-5 (due out in May), is expected to specify a lower proportion of total symptoms for an ADHD diagnosis than its predecessor and to increase the age of onset to 12 years. In a commentary in 2012 psychologist Laura Batstra of the University of Groningen in the Netherlands and psychiatrist Allen Frances of Duke University expressed concerns that these modifications will result in erroneous increases in ADHD diagnoses. Whether or not their forecast is correct, this next chapter of ADHD diagnosis will almost surely usher in a new flurry of controversy regarding the classification and treatment of the disorder.

 

Source: Scientific American

 

Risk factor analysis of the development of new neurological deficits following supplementary motor area resection.


Clinical article

Abstract

OBJECT

Supplementary motor area (SMA) resection often induces postoperative contralateral hemiparesis or speech disturbance. This study was performed to assess the neurological impairments that often follow SMA resection and to assess the risk factors associated with these postoperative deficits.

METHODS

The records for patients who had undergone SMA resection for pharmacologically intractable epilepsy between 1994 and 2010 were gleaned from an epilepsy surgery database and retrospectively reviewed in this study.

RESULTS

Forty-three patients with pharmacologically intractable epilepsy underwent SMA resection with intraoperative cortical stimulation and mapping while under awake anesthesia. The mean patient age was 31.7 years (range 15–63 years), and the mean duration and frequency of seizures were 10.4 years (range 0.1–30 years) and 14.6 per month (range 0.1–150 per month), respectively. Pathological examination of the brain revealed cortical dysplasia in 18 patients (41.9%), tumors in 16 patients (37.2%), and other lesions in 9 patients (20.9%). The mean duration of the follow-up period was 84.0 months (range 24–169 months). After SMA resection, 23 patients (53.5%) experienced neurological deficits. Three patients (7.0%) experienced permanent deficits, and 20 (46.5%) experienced symptoms that were transient. All permanent deficits involved contralateral weakness, whereas the transient symptoms patients experienced were varied, including contralateral weaknesses in 15, apraxia in 1, sensory disturbances in 1, and dysphasia in 6. Thirteen patients recovered completely within 1 month. Univariate analysis revealed that resection of the SMA proper, a shorter lifetime seizure history (< 10 years), and resection of the cingulate gyrus in addition to the SMA were associated with the development of neurological deficits (p = 0.078, 0.069, and 0.023, respectively). Cingulate gyrus resection was the only risk factor identified on multivariate analysis (p = 0.027, OR 6.530, 95% CI 1.234–34.562).
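For readers unfamiliar with the statistics, an odds ratio and 95% confidence interval of the kind reported above are conventionally derived from a two-by-two table. The counts below are hypothetical, chosen only to show the arithmetic; they are not the study's data.

```python
from math import log, exp, sqrt

# Illustrative only: odds ratio and 95% CI from a hypothetical 2x2 table.
#                      deficit   no deficit
# cingulate resected     a=9        b=3
# not resected          c=14       d=17
a, b, c, d = 9, 3, 14, 17

odds_ratio = (a * d) / (b * c)
se_log_or = sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log odds ratio
lo = exp(log(odds_ratio) - 1.96 * se_log_or)
hi = exp(log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # OR = 3.64
```

A confidence interval whose lower bound stays above 1, as in the study's reported 1.234 to 34.562, is what marks the association as statistically significant.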

CONCLUSIONS

Resection of the cingulate gyrus in addition to the SMA was significantly associated with the development of postoperative neurological impairment.

Source: JNS

 

Nonsurgical treatment of chronic subdural hematoma with tranexamic acid.


Clinical article

OBJECT

Chronic subdural hematoma (CSDH) is a common condition after head trauma. It can often be successfully treated surgically by inserting a bur hole and draining the liquefied hematoma. However, to the best of the authors’ knowledge, for nonemergency cases not requiring surgery, no reports have indicated the best approach for preventing hematoma enlargement or resolving it completely. The authors hypothesized that hyperfibrinolysis plays a major role in liquefaction of the hematoma. Therefore, they evaluated the ability of an antifibrinolytic drug, tranexamic acid, to completely resolve CSDH compared with bur hole surgery alone.

METHODS

From 2007 to 2011, a total of 21 patients with CSDH seen consecutively at Kuki General Hospital, Japan, were given 750 mg of tranexamic acid orally every day. Patients were identified by a retrospective records review, which collected data on the volume of the hematoma (based on radiographic measurements) and any complications. Follow-up for each patient consisted of CT or MRI every 21 days from diagnosis to resolution of the CSDH.
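The abstract does not specify how hematoma volumes were estimated from the images; one widely used bedside approximation for hematoma volume on CT is the ABC/2 formula, sketched here with a hypothetical lesion.

```python
# ABC/2 estimate, a common bedside approximation (not necessarily the
# method used in this study): volume ~ (A * B * C) / 2, where A, B and C
# are the largest lesion diameters (in cm) in three orthogonal planes,
# giving a volume in milliliters.
def abc_over_2(a_cm: float, b_cm: float, c_cm: float) -> float:
    return (a_cm * b_cm * c_cm) / 2.0

# Hypothetical lesion measuring 9 x 6 x 2 cm:
print(f"{abc_over_2(9, 6, 2):.1f} ml")  # 54.0 ml, near the study's median
```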

RESULTS

Of the 21 patients, 3 with early stages of CSDH were treated by bur hole surgery before receiving medical therapy. The median duration of clinical and radiographic follow-up was 58 days (range 28–137 days). Before tranexamic acid therapy was initiated, the median hematoma volume for the 21 patients was 58.5 ml (range 7.5–223.2 ml); for the 18 patients who had not undergone surgery, the median hematoma volume was 55.6 ml (range 7.5–140.5 ml). After therapy, the median volume for all 21 patients was 3.7 ml (range 0–22.1 ml). No hematomas recurred or progressed.

CONCLUSIONS

Chronic subdural hematoma can be treated with tranexamic acid without concomitant surgery. Tranexamic acid might simultaneously inhibit the fibrinolytic and inflammatory (kinin-kallikrein) systems, which might consequently resolve CSDH. This medical therapy could prevent the early stages of CSDH that can occur after head trauma and the recurrence of CSDH after surgery.

Source: JNS