

Dr. Jeff Bradstreet helped families whose children were believed to have been damaged by immunizations

Mysterious Death: Body of Doctor Who Linked Vaccines To Autism Found Floating in River
A prominent autism researcher and vaccine opponent was found dead floating in a North Carolina river last week under what many are calling suspicious circumstances.

A fisherman found the body of Dr. James Jeffery Bradstreet in the Rocky Broad River in Chimney Rock, North Carolina, last Friday afternoon.

“Bradstreet had a gunshot wound to the chest, which appeared to be self-inflicted, according to deputies,” reported WHNS.

In a press release, the Rutherford County Sheriff’s Office announced, “Divers from the Henderson County Rescue Squad responded to the scene and recovered a handgun from the river.”
An investigation into the death is ongoing, and the results of an autopsy are also reportedly forthcoming.

Dr. Bradstreet ran a private practice in Buford, Georgia, which focused on “treating children with Autism Spectrum Disorder, PDD, and related neurological and developmental disorders.”

Among various remedies, Dr. Bradstreet’s Wellness Center reportedly carried out “mercury toxicity” treatments, believing the heavy metal to be a leading factor in the development of childhood autism.

Dr. Bradstreet undertook the effort to pinpoint the cause of the condition after his own child developed autism following routine vaccination.

“Autism taught me more about medicine than medical school did,” the doctor once stated at a conference, according to the Epoch Times’ Jake Crosby.

In addition to treating patients, Bradstreet also offered expert testimony in federal court on behalf of vaccine-injured families and was the founder and president of the International Child Development Resource Center, which at one time employed the much-scorned autism expert Dr. Andrew Wakefield as “research director.”

The circumstances surrounding Bradstreet’s death are made all the more curious by a recent multi-agency raid led by the FDA on his offices.

“The FDA has yet to reveal why agents searched the office of the doctor, reportedly a former pastor who has been controversial for well over a decade,” reported the Gwinnett Daily Post.

Social media pages dedicated to Bradstreet’s memory are filled with comments from families who say the deceased doctor impacted their lives for the better.

“Dr. Bradstreet was my son’s doctor after my son was diagnosed with autism. He worked miracles,” one Facebook user states. “At 16, my son is now looking at a normal life thanks to him. I thank him every day.”

“I will forever be grateful and thankful for Dr. Bradstreet recovering my son… from autism,” another person writes. “Treatments have changed my son’s life so that he can grow up and live a normal healthy life. Dr. Bradstreet will be missed greatly!”

A GoFundMe page has also been set up by one of Bradstreet’s family members seeking “To find the answers to the many questions leading up to the death of Dr Bradstreet, including an exhaustive investigation into the possibility of foul play.”

Despite his family requesting that the public refrain from speculation, many have nevertheless concluded that the doctor’s death was part of a conspiracy.

“Self-inflicted? In the chest? I’m not buying this,” one person in the WHNS comments thread states. “This was a doctor who had access to pharmaceuticals of all kinds. This was a religious man with a thriving medical practice. Sorry, but this stinks of murder and cover-up.”

Another commenter had a more definitive conjecture:

“He did NOT kill himself! He was murdered for who he was speaking against, what he knew, and what he was doing about it. He was brilliant kind compassionate doctor with amazing abilities to heal. He was taken. Stopped. Silenced. Why would a doctor who had access to pharmaceuticals and could die peacefully shoot himself in the chest???? And throw himself in a river?? THIS IS OBVIOUS! MURDER!!”

Hospital prints first 3D heart using multiple imaging techniques


Congenital heart experts from Spectrum Health Helen DeVos Children’s Hospital have successfully integrated two common imaging techniques to produce a three-dimensional anatomic model of a patient’s heart.

The 3D model printing of patients’ hearts has become more common in recent years as part of an emerging, experimental field devoted to enhanced visualization of individual cardiac structures and characteristics. But this is the first time the integration of computed tomography (CT) and three-dimensional transesophageal echocardiography (3DTEE) has successfully been used for printing a hybrid 3D model of a patient’s heart. A proof-of-concept study authored by the Spectrum Health experts also opens the way for these techniques to be used in combination with a third tool, magnetic resonance imaging (MRI).

“Hybrid 3D printing integrates the best aspects of two or more imaging modalities, which can potentially enhance diagnosis, as well as interventional and surgical planning,” said Jordan Gosnell, Helen DeVos Children’s Hospital cardiac sonographer, and lead author of the study. “Previous methods of 3D printing utilize only one imaging modality, which may not be as accurate as merging two or more datasets.”

The team used specialized software to register images from the two imaging modalities and selectively integrate the datasets into an accurate anatomic model of the heart. The result is more detailed and anatomically accurate 3D renderings and printed models, which may enable physicians to better diagnose and treat heart disease.
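
As a rough illustration of this kind of workflow (not the team's actual Materialise Mimics pipeline), the sketch below uses the open-source SimpleITK library to rigidly register a hypothetical 3DTEE volume to a CT volume, resample it onto the CT grid, and fuse simple threshold-based segmentations into a single printable mask. The file names and threshold values are placeholders.

```python
import SimpleITK as sitk

# Load the two volumes (hypothetical file names).
ct = sitk.ReadImage("ct_volume.nii.gz", sitk.sitkFloat32)
tee = sitk.ReadImage("3dtee_volume.nii.gz", sitk.sitkFloat32)

# Rigid registration: align the echo (3DTEE) volume to the CT frame of reference.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                             minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    ct, tee, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)
transform = reg.Execute(ct, tee)

# Resample the echo volume onto the CT grid so both datasets share one space.
tee_in_ct_space = sitk.Resample(tee, ct, transform, sitk.sitkLinear, 0.0)

# Segment each modality where it is strongest (threshold values are placeholders),
# then fuse the label maps into a single mask for mesh/STL generation.
outer_anatomy = sitk.BinaryThreshold(ct, lowerThreshold=200, upperThreshold=3000)
valve_anatomy = sitk.BinaryThreshold(tee_in_ct_space, lowerThreshold=80, upperThreshold=255)
fused = sitk.Or(outer_anatomy, valve_anatomy)

sitk.WriteImage(fused, "hybrid_heart_mask.nii.gz")
```

A real clinical pipeline would replace the crude thresholds with careful, modality-specific segmentation and add surface-mesh generation before anything is sent to a printer.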

Another 3D heart model. Credit: Courtesy of Materialise

Computed tomography (CT) and magnetic resonance imaging (MRI) are established imaging tools for producing 3D printable models. Three-dimensional transesophageal echocardiography (3DTEE) was recently reported by Joseph Vettukattil, M.D., and his Helen DeVos Children’s Hospital colleagues to be a feasible imaging technique for generating 3D-printed models. Vettukattil is co-director of the Helen DeVos Children’s Hospital Congenital Heart Center, division chief, pediatric cardiology, and senior author of the study.

According to Vettukattil and his colleagues, each imaging tool has different strengths, which can improve and enhance 3D printing:

  • CT enhances visualization of the outside anatomy of the heart.
  • MRI is superior to the other modalities for measuring the interior of the heart, including the right and left ventricles or main chambers of the heart, as well as the heart’s muscular tissue.
  • 3DTEE provides the best visualization of valve anatomy.

“This is a huge leap for individualized medicine in cardiology and congenital heart disease,” said Vettukattil. “The technology could be beneficial to cardiologists and surgeons. The model will promote better diagnostic capability and improved interventional and surgical planning, which will help determine whether a condition can be treated via transcatheter route or if it requires surgery.”

Vettukattil is known internationally for his work and research with three- and four-dimensional echocardiography. Most notably, Vettukattil developed the advanced technique of multiplanar reformatting in echocardiography, a method used to slice heart structures in infinite planes through the three dimensions in a virtual environment similar to a cardiac pathologist dissecting the heart to reveal underlying pathology. Commonly used with other diagnostic technologies, such as CTs, Vettukattil pioneered its use in echocardiography to evaluate complex heart defects.

Vettukattil is presenting the findings of the proof-of-concept study June 24-27 at the CSI 2015—Catheter Interventions in Congenital, Structural and Valvular Heart Diseases Congress in Frankfurt, Germany to demonstrate the feasibility of printing 3D cardiovascular models derived from multiple imaging modalities.

The Helen DeVos Children’s Hospital team worked with the Mimics Innovation Suite software from Materialise, a leading provider of 3D printing software and services based in Belgium, which printed the model using its HeartPrint Flex technology. Gosnell worked on integration of the imaging modalities, collaborating with Materialise’s U.S. Headquarters in Plymouth, Mich., to produce the final 3D rendering. Vettukattil devised the concept of integrating two or more imaging modalities for 3D printing.

Further research is required to evaluate the efficacy of hybrid 3D models in decision-making for transcatheter or surgical interventions.

 

Cytokines, Which Contribute To Inflammation, Found At High Levels In Brains Of Suicidal Patients


Cytokines, known to promote inflammation, are increased in the bodies and brains of people who are contemplating or have attempted suicide.

How do we predict those at risk for suicide? A published analysis supports the notion that levels of cytokines — which are known to promote inflammation — are increased in the bodies and brains of people who are contemplating or have attempted suicide. This is true even when compared to people with the same psychiatric disorders who are not suicidal.

Cytokine levels, the researchers say, may help distinguish patients who are suicidal from patients who are not.

“Immune system dysfunction, including inflammation, may be involved in the pathophysiology of major psychiatric disorders,” Dr. Brian Miller of Georgia Regents University stated in a press release.

Cytokines are chemical messengers, essentially. Within the immune system, cells communicate with one another by releasing and responding to cytokines. A category of small proteins, cytokines include an assortment of interleukins, interferons, and growth factors that help coordinate immune responses.

Inflammation, an immune response that involves, in part, cytokines, affects every organ and system in the body. High levels of cytokines contribute to arthritis, atherosclerosis, and asthma. Many studies suggest these immune system messengers are released under conditions of psychological stress, and that the resulting inflammation in the brain may contribute to depression.

Miller and his co-researcher, Dr. Carmen Black, collected and pooled data from 18 published studies. Altogether, they examined information on 583 psychiatric patients who were suicidal, 315 psychiatric patients who were non-suicidal, and 845 healthy control participants. Analyzing the pooled data, they found suicidal patients had significantly increased levels of two cytokines, interleukin (IL)-1β and IL-6, both in their blood and in postmortem brain tissue.

By identifying biological markers generally associated with suicide, the researchers believe, someday a simple blood test may be developed to help doctors predict long-term risk for self-destruction, just as today, high blood pressure helps to forecast physical problems years down the road.

“Given that suicide is a major area of public health concern, it is critical to investigate potential markers of suicidality that could be used to… advance suicide prevention efforts,” said Miller.

He and Black say it is necessary to conduct studies of large and diverse patient samples in order to confirm the presence of alterations in cytokine levels in people who are suicidal. Plus, scientists must evaluate whether controlling inflammation early in life will have a long-term protective effect.

Source: Black C, Miller BJ. Meta-analysis of Cytokines and Chemokines in Suicidality: Distinguishing Suicidal Versus Nonsuicidal Patients. Biological Psychiatry. 2015.

Common Antidepressants Linked to Higher Fracture Odds in Menopausal Women


Women prescribed a common class of antidepressants to ease menopausal symptoms may face a long-term rise in their risk for bone fracture, a new study suggests.

The antidepressants in question are selective serotonin reuptake inhibitor (SSRI) medications such as Celexa, Paxil, Prozac and Zoloft.

Besides being used to treat depression, these drugs are often prescribed as an alternative to hormone replacement therapy (HRT) to tackle hot flashes, night sweats and other problems that can accompany menopause.

However, “SSRIs appear to increase fracture risk among middle-aged women without psychiatric disorders,” wrote a team led by Dr. Matthew Miller of Northeastern University in Boston.

The team added that the effect seems to be “sustained over time, suggesting that shorter duration of treatment may decrease [this effect].”

The study authors acknowledged that their work did not establish a direct cause-and-effect link between SSRIs and a boost in fracture risk. However, they point out that prior research has highlighted bone-thinning as a possible side effect of antidepressants.

Findings from the study were published June 25 in the journal Injury Prevention.

For the study, researchers sifted through data from the PharMetrics Claims Database, which collects information on drug treatments involving roughly 61 million patients nationally.

In this case, investigators specifically focused on more than 137,000 women between the ages of 40 and 64, all of whom began SSRI treatment at some point between 1998 and 2010.

The SSRIs in question included citalopram (Celexa), escitalopram (Lexapro), fluoxetine (Sarafem, Prozac), fluvoxamine (Luvox), paroxetine (Paxil) and sertraline (Zoloft).

The SSRI group was compared with more than 236,000 other women who had been prescribed indigestion medications instead of an SSRI.

They found that women in the SSRI group faced a 76 percent higher risk for fracture after a single year of SSRI use, compared with the non-SSRI group. That figure fell slightly, to 73 percent after two years and 67 percent after five years, the study said.

One expert in bone health said a relationship between SSRIs and bone weakening does have some basis in biology.

“The authors speculate that the mechanism of action involves the activation of osteoclasts, cells which break down bone, by the SSRIs,” explained Dr. Caroline Messer, an endocrinologist at Lenox Hill Hospital in New York City.

She said that, “While more studies are needed, the trial does suggest that women might want to limit the duration of treatment with SSRIs and perhaps consider taking the lowest effective dose to minimize bone loss.”

Scientists are joining forces with cigarette companies so you can vape safely


Scientists and big tobacco make an unlikely team, but is it legit?

Huge numbers of health-focused researchers are joining the ranks of some of the world’s biggest tobacco companies in a coordinated effort to develop and market the next generation of e-cigarettes.

According to a report by Toni Clarke for Reuters, tobacco giants including Philip Morris and Altria Group, the makers of Marlboro, have been on a recruitment spree to bring on board swathes of scientists with expertise in fighting cancer and other chronic conditions, so as to bolster research and development on new kinds of theoretically ‘healthier’ and risk-free e-cigarettes and vaping products.

Philip Morris in particular is heavily investing in the campaign, and is said to have hired some 400 scientists including toxicologists, chemists, biologists and biostatisticians. In addition to lab workers, the companies are also seeking to attract regulatory affairs specialists in a bid to help them navigate future red tape with the US Food and Drug Administration (FDA) and get new products onto the market with as little friction as possible.

According to Philip Gorham, an analyst at Morningstar: “If tobacco companies can prove there is reduced risk, e-cigs are likely to remain less regulated and taxed than cigarettes. If they can’t, they will likely be subject to the same restrictions.”

However, opinions elsewhere in the scientific community are decidedly mixed when it comes to collaborating with the tobacco industry.

“The whole set-up is schizophrenic,” said Lars Erik Rutqvist of Sweden’s Karolinska Institute. “I wouldn’t want to be part of that because they still make most of their money from cigarettes.”

The new research is the latest development in the controversial e-cigarettes saga. While some advocates of vaping argue that e-cigarettes are an effective aid for traditional tobacco smokers seeking to quit the habit, numerous studies have suggested that e-cigarettes pose their own problems.

A controversial study published in 2014 suggested that e-cigarettes can possess even more carcinogens than traditional cigarettes, a particularly concerning finding, while studies involving animal testing have demonstrated that e-cigarettes may be responsible for other kinds of potential health complications.

Clearly the jury is still out on the long-term societal risks posed by e-cigarettes, a relatively recent drug phenomenon and one that’s evolving quickly. But we’re inclined to think that any moves in the tobacco industry that genuinely look towards the health of consumers at large are a step in the right direction – provided they are indeed genuine – even if they’re solely motivated by profit. Let’s just hope they lead to innovations that will benefit both smokers and non-smokers worldwide.

Can Quantum Computing Reveal the True Meaning of Quantum Mechanics? – The Nature of Reality


Quantum mechanics says not merely that the world is probabilistic, but that it uses rules of probability that no science fiction writer would have had the imagination to invent. These rules involve complex numbers, called “amplitudes,” rather than just probabilities (which are real numbers between 0 and 1). As long as a physical object isn’t interacting with anything else, its state is a huge wave of these amplitudes, one for every configuration that the system could be found in upon measuring it. Left to itself, the wave of amplitudes evolves in a linear, deterministic way. But when you measure the object, you see some definite configuration, with a probability equal to the squared absolute value of its amplitude. The interaction with the measuring device “collapses” the object to whichever configuration you saw.
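
As a minimal numerical sketch of those rules (independent of any interpretation), the toy snippet below stores a two-configuration state as complex amplitudes, turns them into probabilities by squaring their absolute values, samples a "measurement," and then "collapses" the state onto the observed configuration.

```python
import numpy as np

# A toy two-configuration state: one complex amplitude per configuration.
# The amplitudes must satisfy sum(|amplitude|^2) == 1.
state = np.array([1 / np.sqrt(2), 1j / np.sqrt(2)])  # equal weights, different phases

probs = np.abs(state) ** 2                       # Born rule: probability = |amplitude|^2
outcome = np.random.choice(len(state), p=probs)  # the "measurement"

# "Collapse": after seeing `outcome`, the post-measurement state is simply
# the basis vector for that configuration.
collapsed = np.zeros_like(state)
collapsed[outcome] = 1.0

print(probs, outcome, collapsed)
```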

Those, more or less, are the alien laws that explain everything from hydrogen atoms to lasers and transistors, and from which no hint of an experimental deviation has ever been found, from the 1920s until today. But could this really be how the universe operates? Is the “bedrock layer of reality” a giant wave of complex numbers encoding potentialities—until someone looks? And what do we mean by “looking,” anyway?


Could quantum computing help reveal what the laws of quantum mechanics really mean? Adapted from an image by Flickr user Politropix under a Creative Commons license.

There are different interpretive camps within quantum mechanics, which have squabbled with each other for generations, even though, by design, they all lead to the same predictions for any experiment that anyone can imagine doing. One interpretation is Many Worlds, which says that the different possible configurations of a system (when far enough apart) are literally parallel universes, with the “weight” of each universe given by its amplitude. In this view, the whole concept of measurement—and of the amplitude waves collapsing on measurement—is a sort of illusion, playing no fundamental role in physics. All that ever happens is linear evolution of the entire universe’s amplitude wave—including a part that describes the atoms of your body, which (the math then demands) “splits” into parallel copies whenever you think you’re making a measurement. Each copy would perceive only itself and not the others. While this might surprise people, Many Worlds is seen by many (certainly by its proponents, who are growing in number) as the conservative option: the one that adds the least to the bare math.

A second interpretation is Bohmian mechanics, which agrees with Many Worlds about the reality of the giant amplitude wave, but supplements it with a “true” configuration that a physical system is “really” in, regardless of whether or not anyone measures it. The amplitude wave pushes around the “true” configuration in a way that precisely matches the predictions of quantum mechanics. A third option is Niels Bohr’s original “Copenhagen Interpretation,” which says—but in many more words!—that the amplitude wave is just something in your head, a tool you use to make predictions. In this view, “reality” doesn’t even exist prior to your making a measurement of it—and if you don’t understand that, well, that just proves how mired you are in outdated classical ways of thinking, and how stubbornly you insist on asking illegitimate questions.

But wait: if these interpretations (and others that I omitted) all lead to the same predictions, then how could we ever decide which one is right? More pointedly, does it even mean anything for one to be right and the others wrong, or are these just different flavors of optional verbal seasoning on the same mathematical meat? In his recent quantum mechanics textbook, the great physicist Steven Weinberg reviews the interpretive options, ultimately finding all of them wanting. He ends with the hope that new developments in physics will give us better options. But what could those new developments be?

In the last few decades, the biggest new thing in quantum mechanics has been the field of quantum computing and information. The goal here, you might say, is to “put the giant amplitude wave to work”: rather than obsessing over its true nature, simply exploit it to do calculations faster than is possible classically, or to help with other information-processing tasks (like communication and encryption). The key insight behind quantum computing was articulated by Richard Feynman in 1982: to write down the state of n interacting particles each of which could be in either of two states, quantum mechanics says you need 2^n amplitudes, one for every possible configuration of all n of the particles. Chemists and physicists have known for decades that this can make quantum systems prohibitively difficult to simulate on a classical computer, since 2^n grows so rapidly as a function of n.
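
A quick back-of-the-envelope calculation makes Feynman's point concrete: storing one complex double-precision amplitude (16 bytes) per configuration, the memory needed for a brute-force state vector grows as 2^n.

```python
# Memory needed to store all 2**n amplitudes of an n-qubit state,
# at 16 bytes per complex128 amplitude.
for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n
    print(f"n = {n:2d} qubits -> {amplitudes:>16,d} amplitudes "
          f"(~{amplitudes * 16 / 1e9:,.1f} GB)")

# Already at n = 50 this is roughly 18 million GB (about 18 petabytes),
# which is why brute-force classical simulation quickly becomes hopeless.
```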

But if so, then why not build computers that would themselves take advantage of giant amplitude waves? If nothing else, such computers could be useful for simulating quantum physics! What’s more, in 1994, Peter Shor discovered that such a machine would be useful for more than physical simulations: it could also be used to factor large numbers efficiently, and thereby break most of the cryptography currently used on the Internet. Genuinely useful quantum computers are still a ways away, but experimentalists have made dramatic progress, and have already demonstrated many of the basic building blocks.

I should add that, for my money, the biggest application of quantum computers will be neither simulation nor codebreaking, but simply proving that this is possible at all! If you like, a useful quantum computer would be the most dramatic demonstration imaginable that our world really does need to be described by a gigantic amplitude wave, that there’s no way around that, no simpler classical reality behind the scenes. It would be the final nail in the coffin of the idea—which many of my colleagues still defend—that quantum mechanics, as currently understood, must be merely an approximation that works for a few particles at a time; and when systems get larger, some new principle must take over to stop the exponential explosion.

But if quantum computers provide a new regime in which to probe quantum mechanics, that raises an even broader question: could the field of quantum computing somehow clear up the generations-old debate about the interpretation of quantum mechanics? Indeed, could it do that even before useful quantum computers are built?

At one level, the answer seems like an obvious “no.” Quantum computing could be seen as “merely” a proposed application of quantum mechanics as that theory has existed in physics books for generations. So, to whatever extent all the interpretations make the same predictions, they also agree with each other about what a quantum computer would do. In particular, if quantum computers are built, you shouldn’t expect any of the interpretive camps I listed before to concede that its ideas were wrong. (More likely that each camp will claim its ideas were vindicated!)

At another level, however, quantum computing makes certain aspects of quantum mechanics more salient—for example, the fact that it takes 2^n amplitudes to describe n particles—and so might make some interpretations seem more natural than others. Indeed that prospect, more than any application, is why quantum computing was invented in the first place. David Deutsch, who’s considered one of the two founders of quantum computing (along with Feynman), is a diehard proponent of the Many Worlds interpretation, and saw quantum computing as a way to convince the world (at least, this world!) of the truth of Many Worlds. Here’s how Deutsch put it in his 1997 book “The Fabric of Reality”:

Logically, the possibility of complex quantum computations adds nothing to a case [for the Many Worlds Interpretation] that is already unanswerable. But it does add psychological impact. With Shor’s algorithm, the argument has been writ very large. To those who still cling to a single-universe world-view, I issue this challenge: explain how Shor’s algorithm works. I do not merely mean predict that it will work, which is merely a matter of solving a few uncontroversial equations. I mean provide an explanation. When Shor’s algorithm has factorized a number, using 10^500 or so times the computational resources that can be seen to be present, where was the number factorized? There are only about 10^80 atoms in the entire visible universe, an utterly minuscule number compared with 10^500. So if the visible universe were the extent of physical reality, physical reality would not even remotely contain the resources required to factorize such a large number. Who did factorize it, then? How, and where, was the computation performed?

As you might imagine, not all researchers agree that a quantum computer would be “psychological evidence” for Many Worlds, or even that the two things have much to do with each other. Yes, some researchers reply, a quantum computer would take exponential resources to simulate classically (using any known algorithm), but all the interpretations agree about that. And more pointedly: thinking of the branches of a quantum computation as parallel universes might lead you to imagine that a quantum computer could solve hard problems in an instant, by simply “trying each possible solution in a different universe.” That is, indeed, how most popular articles explain quantum computing, but it’s also wrong!

The issue is this: suppose you’re facing some arbitrary problem—like, say, the Traveling Salesman problem, of finding the shortest path that visits a collection of cities—that’s hard because of a combinatorial explosion of possible solutions. It’s easy to program your quantum computer to assign every possible solution an equal amplitude. At some point, however, you need to make a measurement, which returns a single answer. And if you haven’t done anything to boost the amplitude of the answer you want, then you’ll see merely a random answer—which, of course, you could’ve picked for yourself, with no quantum computer needed!

For this reason, the only hope for a quantum-computing advantage comes from interference: the key aspect of amplitudes that has no classical counterpart, and indeed, that taught physicists that the world has to be described with amplitudes in the first place. Interference is customarily illustrated by the double-slit experiment, in which we shoot a photon at a screen with two slits in it, and then observe where the photon lands on a second screen behind it. What we find is that there are certain “dark patches” on the second screen where the photon never appears—and yet, if we close one of the slits, then the photon can appear in those patches. In other words, decreasing the number of ways for the photon to get somewhere can increase the probability that it gets there! According to quantum mechanics, the reason is that the amplitude for the photon to land somewhere can receive a positive contribution from the first slit, and a negative contribution from the second. In that case, if both slits are open, then the two contributions cancel each other out, and the photon never appears there at all. (Because the probability is the amplitude squared, both negative and positive amplitudes correspond to positive probabilities.)
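
The arithmetic behind that cancellation can be shown directly. In this toy calculation the two slits are assumed to contribute amplitudes of +0.5 and -0.5 at a "dark patch" on the screen.

```python
# Toy double-slit arithmetic: each slit contributes an amplitude at a point on
# the screen, and probabilities come from the squared magnitude of the *sum*.
a_slit1 = +0.5   # contribution from slit 1 at a "dark patch"
a_slit2 = -0.5   # contribution from slit 2 (opposite sign)

both_open = abs(a_slit1 + a_slit2) ** 2   # amplitudes cancel -> probability 0.0
slit2_closed = abs(a_slit1) ** 2          # no cancellation   -> probability 0.25

print(both_open, slit2_closed)  # fewer ways to arrive, yet higher probability
```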

Likewise, when designing algorithms for quantum computers, the goal is always to choreograph things so that, for each wrong answer, some of the contributions to its amplitude are positive and others are negative, so on average they cancel out, leaving an amplitude close to zero. Meanwhile, the contributions to the right answer’s amplitude should reinforce each other (being, say, all positive, or all negative). If you can arrange this, then when you measure, you’ll see the right answer with high probability.

It was precisely by orchestrating such a clever interference pattern that Peter Shor managed to devise his quantum algorithm for factoring large numbers. To do so, Shor had to exploit extremely specific properties of the factoring problem: it was not just a matter of “trying each possible divisor in a different parallel universe.” In fact, an important 1994 theorem of Bennett, Bernstein, Brassard, and Vazirani shows that what you might call the “naïve parallel-universe approach” never yields an exponential speed improvement. The naïve approach can reveal solutions in only the square root of the number of steps that a classical computer would need, an important phenomenon called the Grover speedup. But that square-root advantage turns out to be the limit: if you want to do better, then like Shor, you need to find something special about your problem that lets interference reveal its answer.
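
For a sense of what that square-root limit means in practice, here is the rough query-count comparison for a hypothetical unstructured search over N possibilities.

```python
import math

# Unstructured search over N possibilities: a classical scan needs on the order
# of N checks, while the square-root (Grover-type) speedup needs ~sqrt(N) steps.
for N in (10**6, 10**12):
    print(f"N = {N:>14,d}   classical ~{N:,d} steps   quantum ~{math.isqrt(N):,d} steps")
```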

What are the implications of these facts for Deutsch’s argument that only Many Worlds can explain how a quantum computer works? At the least, we should say that the “exponential cornucopia of parallel universes” almost always hides from us, revealing itself only in very special interference experiments where all the “universes” collaborate, rather than any one of them shouting above the rest. But one could go even further. One could say: To whatever extent the parallel universes do collaborate in a huge interference pattern to reveal (say) the factors of a number, to that extent they never had separate identities as “parallel universes” at all—even according to the Many Worlds interpretation! Rather, they were just one interfering, quantum-mechanical mush. And from a certain perspective, all the quantum computer did was to linearly transform the way in which we measured that mush, as if we were rotating it to see it from a more revealing angle. Conversely, whenever the branches do act like parallel universes, Many Worlds itself tells us that we only observe one of them—so from a strict empirical standpoint, we could treat the others (if we liked) as unrealized hypotheticals. That, at least, is the sort of reply a modern Copenhagenist might give, if she wanted to answer Deutsch’s argument on its own terms.

There are other aspects of quantum information that seem more “Copenhagen-like” than “Many-Worlds-like”—or at least, for which thinking about “parallel universes” too naïvely could lead us astray. So for example, suppose Alice sends n quantum-mechanical bits (or qubits) to Bob, and Bob then measures the qubits in any way he likes. How many classical bits can Alice transmit to Bob that way? If you remember that n qubits require 2^n amplitudes to describe, you might conjecture that Alice could achieve an incredible information compression—“storing one bit in each parallel universe.” But alas, an important result called Holevo’s Theorem says that, because of the severe limitations on what Bob learns when he measures the qubits, such compression is impossible. In fact, by sending n qubits to Bob, Alice can reliably communicate only n bits (or 2n bits, if Alice and Bob shared quantum correlations in advance), essentially no better than if she’d sent the bits classically. So for this task, you might say, the amplitude wave acts more like “something in our heads” (as the Copenhagenists always said) than like “something out there in reality” (as the Many-Worlders say).

But the Many-Worlders don’t need to take this lying down. They could respond, for example, by pointing to other, more specialized communication problems that Alice and Bob can provably solve using exponentially fewer qubits than classical bits. Here’s one example of such a problem, drawing on a 1999 theorem of Ran Raz and a 2010 theorem of Boaz Klartag and Oded Regev: Alice knows a vector in a high-dimensional space, while Bob knows two orthogonal subspaces. Promised that the vector lies in one of the two subspaces, can Bob figure out which one holds the vector? Quantumly, Alice can encode the components of her vector as amplitudes—in effect, squeezing n numbers into exponentially fewer qubits. And crucially, after receiving those qubits, Bob can measure them in a way that doesn’t reveal everything about Alice’s vector, but does reveal which subspace it lies in, which is the one thing Bob wanted to know.

So, do the Many Worlds become “real” for these special problems, but retreat back to being artifacts of the math for ordinary information transmission?

To my mind, one of the wisest replies came from the mathematician and quantum information theorist Boris Tsirelson, who said: “a quantum possibility is more real than a classical possibility, but less real than a classical reality.” In other words, this is a new ontological category, one that our pre-quantum intuitions simply don’t have a good slot for. From this perspective, the contribution of quantum computing is to delineate for which tasks the giant amplitude wave acts “real and Many-Worldish,” and for which other tasks it acts “formal and Copenhagenish.” Quantum computing can give both sides plenty of fresh ammunition, without handing an obvious victory to either.

So then, is there any interpretation that flat-out doesn’t fare well under the lens of quantum computing? While some of my colleagues will strongly disagree, I’d put forward Bohmian mechanics as a candidate. Recall that David Bohm’s vision was of real particles, occupying definite positions in ordinary three-dimensional space, but which are jostled around by a giant amplitude wave in a way that perfectly reproduces the predictions of quantum mechanics. A key selling point of Bohm’s interpretation is that it restores the determinism of classical physics: all the uncertainty of measurement, we can say in his picture, arises from lack of knowledge of the initial conditions. I’d describe Bohm’s picture as striking and elegant—as long as we’re only talking about one or two particles at a time.

But what happens if we try to apply Bohmian mechanics to a quantum computer—say, one that’s running Shor’s algorithm to factor a 10,000-digit number, using hundreds of thousands of particles? We can do that, but if we do, talking about the particles’ “real locations” will add spectacularly little insight. The amplitude wave, you might say, will be “doing all the real work,” with the “true” particle positions bouncing around like comically-irrelevant fluff. Nor, for that matter, will the bouncing be completely deterministic. The reason for this is technical: it has to do with the fact that, while particles’ positions in space are continuous, the 0’s and 1’s in a computer memory (which we might encode, for example, by the spins of the particles) are discrete. And one can prove that, if we want to reproduce the predictions of quantum mechanics for discrete systems, then we need to inject randomness at many times, rather than only at the beginning of the universe.

But it gets worse. In 2005, I proved a theorem that says that, in any theory like Bohmian mechanics, if you wanted to calculate the entire trajectory of the “real” particles, you’d need to solve problems that are thought to be intractable even for quantum computers. One such problem is the so-called collision problem, where you’re given a cryptographic hash function (a function that maps a long message to a short “hash value”) and asked to find any two messages with the same hash. In 2002, I proved that, at least if you use the “naïve parallel-universe” approach, any quantum algorithm for the collision problem requires at least ~H^(1/5) steps, where H is the number of possible hash values. (This lower bound was subsequently improved to ~H^(1/3) by Yaoyun Shi, exactly matching an upper bound of Brassard, Høyer, and Tapp.) By contrast, if (with godlike superpower) you could somehow see the whole histories of Bohmian particles, you could solve the collision problem almost instantly.

What makes this interesting is that, if you ask to see the locations of Bohmian particles at any one time, you won’t find anything that you couldn’t have easily calculated with a standard, garden-variety quantum computer. It’s only when you ask for the particles’ locations at multiple times—a question that Bohmian mechanics answers, but that ordinary quantum mechanics rejects as meaningless—that you’re able to see multiple messages with the same hash, and thereby solve the collision problem.

My conclusion is that, if you believe in the reality of Bohmian trajectories, you believe that Nature does even more computational work than a quantum computer could efficiently simulate—but then it hides the fruits of its labor where no one can ever observe it. Now, this sits uneasily with a principle that we might call “Occam’s Razor with Computational Aftershave.” Namely: In choosing a picture of physical reality, we should be loath to posit computational effort on Nature’s part that vastly exceeds what could ever in principle be observed. (Admittedly, some people would probably argue that the Many Worlds interpretation violates my “aftershave principle” even more flagrantly than Bohmian mechanics does! But that depends, in part, on what we count as “observation”: just our observations, or also the observations of any parallel-universe doppelgängers?)

Could future discoveries in quantum computing theory settle once and for all, to every competent physicist’s satisfaction, “which interpretation is the true one”? To me, it seems much more likely that future insights will continue to do what the previous ones did: broaden our language, strip away irrelevancies, clarify the central issues, while still leaving plenty to argue about for people who like arguing. In the end, asking how quantum computing affects the interpretation of quantum mechanics is sort of like asking how classical computing affects the debate about whether the mind is a machine. In both cases, there was a range of philosophical positions that people defended before a technology came along, and most of those positions still have articulate defenders after the technology. So, by that standard, the technology can’t be said to have “resolved” much! Yet the technology is so striking that even the idea of it—let alone the thing itself—can shift the terms of the debate, which analogies people use in thinking about it, which possibilities they find natural and which contrived. This might, more generally, be the main way technology affects philosophy.

Long Mechanical Ventilation May Mean Loss of Independence


Older patients can have functional and cognitive difficulties.

Critically ill patients who have been mechanically ventilated for more than a week are at high risk for functional impairment and death by 1 year after hospital discharge, particularly the oldest patients and those with the longest hospital stays, according to a study presented here.

Among patients who were older than 66 and were treated in the intensive care unit (ICU) for 2 weeks or more, 40% died within the first year, 29% were readmitted to the ICU, and activities such as dressing, bathing, and climbing stairs remained severely restricted, according to Claudia dos Santos, MD, of the University of Toronto.

“Patients who survive mechanical ventilation in the ICU have a long road ahead and can have functional and cognitive difficulties as well as mood disorders, and also have a high mortality risk,” she said during a presentation on high impact clinical trials in critical care at the American Thoracic Society annual meeting.

The Canadian RECOVER program, which began in 2007, is a prospective cohort study that is evaluating outcomes after ICU discharge to help clinicians risk-stratify patients so that rehabilitation can be tailored to their needs.

As part of this program, the researchers, led by Margaret Herridge, MD, of the Toronto General Research Institute, followed 391 patients who were mechanically ventilated for at least a week in the ICU and survived for 7 days after discharge.

Mean age was 58, 69% were men, and mean APACHE II score was 25. Median time spent on the mechanical ventilator was 16 days, mean ICU stay was 22 days, and mean hospital stay was 29 days.

Disability was assessed using the Functional Independence Measure (FIM), which is widely used in rehabilitation settings to document improvement after an intervention, Dos Santos explained. Scores below 40 indicate total dependence and complete inability with self-care activities, 50 suggests significant dependence, and 90 indicates partial independence. A score of 126 is normal.

Recursive partitioning analysis classified patients into four disability risk groups, with the lowest-risk group being young patients with a short length of stay (younger than 42 and hospitalized for less than 2 weeks), and the highest-risk group being patients older than 66 with a length of stay of 14 days or more.

At day 7, the mean FIM score was 54. Among the lowest-risk group, mean score was 107, but in the highest-risk group it was 44, rising only to 66 by 2 weeks. “These patients were very close to being completely dependent,” she said.

FIM scores overall had improved by 1 year, averaging 110, but they still had not normalized, she noted.

Also at day 7, 60% of patients were unable to walk. By 1 year, walk distances increased from 24% to 75% of predicted.

Symptoms of depression rated on the Beck Depression Inventory, where a score of 21 indicates clinically relevant depression, remained at 17 at 6 and 12 months. “This didn’t get better,” she said.

 While several of the outcomes — including mortality and ICU readmission — were worse for the older, long-hospital-stay group, hospital readmissions in general were high for all groups, ranging from 36% to 43%.

An additional analysis indicated that FIM score, Charlson comorbidity score, and age were independent predictors of death by 1 year.

“Stratification of patients into these risk groups can help guide decisions by clinicians and family members on rehabilitation after discharge,” she concluded.

‘Psychologists should lead the way on male mental health issues’.


A group of Britain’s most senior psychologists are so concerned about the unique – and increasingly fatal – problems facing modern men, they are urgently calling for a dedicated Male Psychology Section of the British Psychological Society.

Although there has been a women’s section of the BPS since 1988, there is no male equivalent, even though “vast public health issues” face men, including the fact they are three to four times more likely to commit suicide.

Today, eminent psychologists and keynote speakers will gather at the second annual Male Psychology Conference at University College London to address this pressing matter.

To meet the criteria, just one per cent of BPS members – around 500 signatures – must vote for it. As 300 have already done so, only 200 further signatures are needed to make their dream a reality.

Ahead of the conference, I caught up with its organiser, Martin Seager, member of the 12-strong Male Psychology Network and branch consultant for Central London Samaritans.

There are rules about masculinity that need to be honoured, not belittled

Men are suffering more severe mental health problems than women, so why isn’t there more of a focus on male psychology, asks Martin Daubney

Suicide is now the single biggest cause of death in men aged 20–49 in England and Wales

“I’ve been a psychologist for over 30 years, and historically it was always a very male environment,” he says. “But that started to change. In the early 2000s I was the head of psychology for north east London and 29 of my 30 students were female. My lone male student was studying postnatal depression in men and I realised there was a need for services for men.

“So I started running a men’s psychotherapy group for patients, guys who’d been through the system who hadn’t fit.

“Nobody had ever thought that gender might block the effectiveness of a group – that men don’t open up around women, because they feel guilty or weak.

“But in single-sex groups men can be very blokey one minute, then talk about something incredibly painful the next. It really worked. Men are very worried about shame and embarrassment, and there are rules about masculinity that need to be honoured, not belittled.

Suicide rates (per 100,000) among men, by age group, 2001-2013. Source: ONS

“If men are alone in a room they are tremendously good at supporting each other; they’re like soldiers in combat that really care for each other. So we realised that a men’s group is a really powerful space.”

When Seager attended the Mind Your Head men’s mental health conference in 2006 and met like-minded psychotherapists who’d had similar clinical experiences, they decided to start pushing for a dedicated men’s section.

“There’s a massive resistance to this, despite there being a very real need,” he says. “If you talk about the needs of men, you’re made to feel like an unreconstructed Neanderthal.

“It’s about tailoring a service to the needs of the demographic: you wouldn’t think twice about doing that for women, children, or for ethnicity, but when it comes to masculinity it’s assumed it’s politically incorrect. It’s like if you put out a flag on St George’s Day, people think you’re a fascist.

“There are vast public health issues to do with men. Men are killing themselves three to four times more than women, they’re more homeless, they’re more addicted, prisons are full of men who’ve got mental health problems, they’re underachieving in education and men are dying and having accidents at work much more often, and they die younger.

“As psychologists we should be leading the way, but we’ve had to push against the system for recognition.”

The biggest headline issue is the suicide rate, and men make up 80 per cent of cases

Extensive work with men-only groups has allowed Seager and the Male Psychology Network to form their “three rules of masculinity” that have become the cornerstones of their clinical work.

“The three rules are: one, be a fighter and a winner; two, be a provider and a protector, and three, retain mastery and control,” says Martin.

“If you break any of those, you don’t feel like a man. So if you don’t have a job, for a woman that’s awful, but if a man doesn’t have a job he doesn’t feel he can provide or protect – so he’s lost his masculinity. That’s why the suicide rate for the unemployed is greater for men.

“This isn’t genetic: we are biologically evolved as male.

“The biggest headline issue is the suicide rate, and men make up 80 per cent of cases. If it was for women, or any other group in society, it would be red-flagged and there would be a massive strategy in place.

“I work closely with the Samaritans, and in London our ‘Man Talk’ project saw call-takers listen differently to men, right down to increasing the length of call. That’s because men don’t talk like women; they need longer to open up.

“Men don’t need to ‘man up’ and they don’t need to ‘woman up’ – be more like women. We need to allow men to be men and honour that, on their terms – and that comes from 30 years’ experience in clinical practice.

“There’s a universal pressure on men to not show weakness and to be strong. We’re trying to paint a picture that if you seek help you are being a man; you’re taking control and being strong. We need to help men not feel ashamed and to keep their dignity.”

Is the universe ringing like a crystal glass?


Figure 1: The standard view of the expanding universe.

Many know the phrase “the big bang theory.” There’s even a top television comedy series with that as its title. According to scientists, the universe began with the “big bang” and expanded to the size it is today. Yet, the gravity of all of this matter, stars, gas, galaxies, and mysterious dark matter, tries to pull the universe back together, slowing down the expansion.

Now, two physicists at The University of Southern Mississippi, Lawrence Mead and Harry Ringermacher, have discovered that the universe might not only be expanding, but also oscillating or “ringing” at the same time. Their paper on the topic has been published in the April 2015 issue of the Astronomical Journal.

In 1978 Arno Allan Penzias and Robert Woodrow Wilson received the Nobel prize for their 1964 discovery of the key signature of this theory, the primal radiation from the early universe known as the “cosmic microwave background” (CMB).

“Then in 1998 the finding that the universe was not only expanding, but was speeding up, or accelerating in its expansion was a shock when it was discovered simultaneously by east coast and west coast teams of astronomers and physicists,” said Mead. “A new form of matter, dark energy, repulsive in nature, was responsible for the speed-up. The teams led by Saul Perlmutter, Adam Riess, and Brian Schmidt won the 2011 Nobel Prize in Physics for that discovery.”

According to Mead and Ringermacher, this change from slowing down to speeding up (the transition time) took place approximately 6 to 7 billion years ago. Since then, Mead and Ringermacher say a vast accumulation of high-tech data has verified the theory to extraordinary accuracy.

Figure 1 is a NASA diagram representing the events of the Big Bang from the beginning of time to the present day as described by the current, accepted model known as “Lambda CDM” or Lambda Cold Dark Matter, where the Greek Lambda stands for Einstein’s “cosmological constant”. This cosmological constant is responsible for the acceleration of the universe. The outline of the “bell-shaped” universe represents its expanding size. The transition time is the point in time at which the bell shape shifts from going inward to outward from left to right.

Figure 2: The universe ringing while expanding.

“The new finding suggests that the universe has slowed down and speeded up, not just once, but 7 times in the last 13.8 billion years, on average emulating dark matter in the process,” said Mead. “The ringing has been decaying and is now very small – much like striking a crystal glass and hearing it ring down.”

Figure 2 shows the new finding superposed on the Lambda CDM model of Figure 1. The oscillation amplitude is highly exaggerated, but the frequency is roughly correct. Ringermacher and Mead have determined that this oscillation is not a wave moving through the universe, such as a gravitational wave, but rather it is a “wave of the universe”.
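
Purely as an illustrative form (not the authors' fitted model), such a decaying "ringing" can be pictured as a small damped oscillation modulating the standard Lambda CDM scale factor a(t), with roughly seven cycles over the universe's 13.8-billion-year age; the amplitude A, decay time tau, and phase phi below are hypothetical parameters.

```latex
a(t) \;\approx\; a_{\Lambda\mathrm{CDM}}(t)\,
  \Bigl[\, 1 + A\, e^{-t/\tau} \cos\bigl(2\pi f\, t + \phi\bigr) \Bigr],
\qquad f \sim \frac{7\ \text{cycles}}{13.8\ \text{Gyr}}
```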

Ringermacher says the discovery was made accidentally when, through their collaboration on modeling of galaxies, they found a new way of plotting a classic textbook graph describing the scale of the universe against its age (lookback time) that did not depend on one’s prior choice of models of the universe – as was traditional.

“The standard graph, the Hubble diagram, is constructed by astronomers observing the distances of Type Ia supernovae that serve as ‘standard candles’ for measuring the expansion of the universe,” said Ringermacher. “Analyzing this new plot to locate the transition time of the universe, we found there was more than one such time – in fact multiple oscillations with a frequency of about 7 cycles over the lifetime of the universe. It is space itself that has been speeding up its expansion followed by slowing down 7 times since creation.”

Mead and Ringermacher say this finding must ultimately be verified by independent analyses, preferably of new supernovae data, to confirm its reality. In the meantime, their work into the “ringing” of the universe continues.

John Nash’s unique approach produced quantum leaps in economics and maths


The American mathematician John Nash, who was killed on Saturday night in a car crash, was in Oslo five days ago to receive the Abel prize from the king of Norway. The £500,000 Abel — which he shared with Louis Nirenberg — is considered a kind of maths version of the Nobel prize, which has no category for mathematics.

And yet, Nash is also a winner of the Nobel prize, the only person to share both accolades. “I must be an honorary Scandinavian,” he joked in March during the press conference that announced this year’s Abel laureates.

Nash is most famous for his research into game theory, the maths of decision-making and strategy, since it was this work that led to his being awarded the 1994 Nobel in economics. His fame also came from the 2001 Oscar-winning film A Beautiful Mind, in which he was played by Russell Crowe. The film, which turned him into probably the best known mathematician in the world, was based on the superb biography by Sylvia Nasar and charts his early career and then the struggle with schizophrenia that dominated most of his adult life.

Within the mathematics community, however, the work for which Nash was most admired — and for which he won the Abel — was not the game theory research but his advances in pure mathematics, notably geometry and partial differential equations.

The mathematician Mikhail Gromov once said: “What [Nash] has done in geometry is, from my point of view, incomparably greater than what he has done in economics, by many orders of magnitude. It was an incredible change in attitude of how you think.” Nash’s achievements in mathematics were striking not only because he proved deep and important results, but also because his career lasted only a decade before he was lost to mental illness.

Nash was born in 1928 in a small, remote town in West Virginia. His father was an electrical engineer and his mother a schoolteacher. He was an undergraduate at Carnegie Institute of Technology (now Carnegie Mellon University) in Pittsburgh and then did his graduate studies at Princeton, New Jersey. His PhD thesis, Non-Cooperative Games, is one of the foundational texts of game theory. It introduced the concept of an equilibrium for non-cooperative games, the “Nash equilibrium”, which eventually led to his economics Nobel prize.

Yet his mathematical interests soon lay elsewhere. He described his first breakthrough in pure mathematics, in his early 20s, as “a nice discovery relating to manifolds and real algebraic varieties”. His peers already recognised the result as an important and remarkable work.

In 1951, Nash left Princeton for MIT. Here, he became interested in the problems of “isometric embedding”, which asks whether it is possible to embed abstractly defined geometries into real-world geometries in such a way that distances are maintained. Nash’s two embedding theorems are considered classics, providing some of the deepest mathematical insights of the last century.

This work on embeddings led him to partial differential equations, which are equations involving flux and rates of change. He devised a way to solve a type of partial differential equation that hitherto had been considered impossible. His technique, later modified by J Moser, is now known as the Nash-Moser theorem.

In the early 1950s, Nash worked during the summers for the RAND Corporation, a civilian thinktank funded by the military in Santa Monica, California. Here, his work on game theory found applications in United States military and diplomatic strategy.

Perhaps Nash’s greatest mathematical work came from studying a puzzle that had been suggested to him by Louis Nirenberg: a major open problem concerning elliptic partial differential equations. Within a few months, Nash had solved the problem. It is thought that his work would have won him the Fields Medal — the most prestigious prize in maths, open only to those under 40 — had it not been solved at the same time by Italian mathematician Ennio De Giorgi. The men used different methods, and were not aware of each other’s work — the result is known as the Nash-De Giorgi theorem.

One of the many amazing aspects of Nash’s career was that he was not a specialist. Unlike almost all top mathematicians now, he worked on his own, and relished attacking famous open problems, often coming up with completely new ways of thinking. Louis Nirenberg once said: “About 20 years ago somebody asked me, ‘Are there any mathematicians you would consider as geniuses?’ I said, ‘I can think of one, and that’s John Nash.’ He had a remarkable mind. He thought about things differently from other people.”

In 1959, Nash began to suffer from delusions and extreme paranoia. For the next 40 years or so he was only able to do serious mathematical research in brief periods of lucidity.

Remarkably, however, he gradually improved and his mental state had recovered by the time he won the Nobel in 1994.

Nash showed such resolve and stamina in his mathematical work and in recovering from his mental illness, that his death in a taxi crash on the New Jersey turnpike seems all the more pointless and tragic.

May his soul rest in peace.