Multivitamins and Cognition: New Data From COSMOS


New data from the Cocoa Supplement and Multivitamin Outcomes Study (COSMOS) suggest that a daily multivitamin may help protect the aging brain. However, at least one expert has concerns about the study’s methodology and, as a result, the interpretation of its findings. 

The meta-analysis of three separate cognition studies provides “strong and consistent evidence that taking a daily multivitamin, containing more than 20 essential micronutrients, can help prevent memory loss and slow down cognitive aging,” study investigator Chirag Vyas, MBBS, MPH, with Massachusetts General Hospital and Harvard Medical School, Boston, told Medscape Medical News. 

“We are not now recommending multivitamin use, but the evidence is compelling that supports the promise of multivitamins to help prevent cognitive decline,” Vyas said. 

The new data, from the cognitive substudies of COSMOS, were published online on January 18 in the American Journal of Clinical Nutrition.

Clinically Meaningful Benefit?

To recap, COSMOS was a 2 x 2 factorial trial of cocoa extract (500 mg/d flavanols) and/or a daily commercial multivitamin-mineral (MVM) supplement for cardiovascular disease and cancer prevention among more than 21,000 US adults aged 60 years or older.

Neither the cocoa extract nor the MVM supplement had a significant impact on cancer or cardiovascular disease events.

COSMOS-Mind was a substudy of 2262 participants aged 65 or older without dementia who completed telephone-based cognitive assessments at baseline and annually for 3 years.

As previously reported by Medscape Medical News, in COSMOS-Mind, there was no cognitive benefit of daily cocoa extract, but daily MVM supplementation was associated with improved global cognition, episodic memory, and executive function. However, the difference in global cognitive function between MVM and placebo was small, with a mean 0.07-point improvement on the z-score at 3 years. 

COSMOS-Web was a substudy of 3562 original participants who were evaluated annually for 3 years using an internet-based battery of neuropsychological tests. 

In this analysis, those taking the MVM supplement performed better on a test of immediate memory recall (remembering a list of 20 words); they remembered an average of 0.71 additional words, compared with 0.44 words in the placebo group. However, they did not improve on tests of memory retention, executive function, or novel object recognition.

The new data are from COSMOS-Clinic, an analysis of 573 participants who completed in-person cognitive assessments. 

COSMOS-Clinic showed a modest benefit of MVM, compared with placebo, on global cognition over 2 years (mean difference, 0.06 SD units [SU]), with a significantly more favorable change in episodic memory (mean difference, 0.12 SU) but not in executive function/attention (mean difference, 0.04 SU), the researchers reported. 

They also conducted a meta-analysis based on the three separate cognitive substudies, with 5200 nonoverlapping COSMOS participants. 

The results showed “clear evidence” of MVM benefits on global cognition (mean difference, 0.07 SU; P = .0009) and episodic memory (mean difference, 0.06 SU; P = .0007), they reported, with the magnitude of effect on global cognition equivalent to reducing cognitive aging by 2 years.

In a statement, JoAnn Manson, MD, DrPH, chief of the Division of Preventive Medicine at Brigham and Women’s Hospital, who led the overall COSMOS trial, said that “the finding that a daily multivitamin improved memory and slowed cognitive aging in three separate placebo-controlled studies in COSMOS is exciting and further supports the promise of multivitamins as a safe, accessible, and affordable approach to protecting cognitive health in older adults.”

Not a Meta-analysis?

In an interview with Medscape Medical News, Christopher Labos, MD CM, MSc, a cardiologist and epidemiologist based in Montreal, Canada, who wasn’t involved in COSMOS, cautioned that the evidence to date on multivitamins for memory and brain health is “not all that impressive.”

Labos is a columnist for Medscape and has previously written about the COSMOS trial.

He said it is important to note that this “meta-analysis of COSMOS data, strictly speaking, is not a meta-analysis” because the patients were all from the original COSMOS study without including any additional patients, “so you don’t have any more data than what you started with.

“The fact that the results are consistent with the original trial is not surprising. In fact, it would be concerning if they were not consistent because they’re the same population. They were just assessed differently — by phone, online, or in person,” Labos explained. 

“It is hard to tell what the benefit with multivitamins actually means in terms of hard clinical endpoints that matter to patients. Scoring a little bit better on a standardized test — I guess that’s a good thing, but does that mean you’re less likely to get dementia? I’m not sure we’re there yet,” he told Medscape Medical News.

The bottom line, said Labos, is that “at this point, the evidence does not support recommending multivitamins purely for brain health. There is also a cost and potential downside associated with their use.”

Also weighing in on the new analyses from COSMOS, Claire Sexton, DPhil, Alzheimer’s Association senior director of scientific programs and outreach, said that while there are now “positive, large-scale, long-term studies that show that multivitamin-mineral supplementation for older adults may slow cognitive aging, the Alzheimer’s Association is not ready to recommend widespread use of a multivitamin supplement to reduce risk of cognitive decline in older adults.

“Independent confirmatory studies are needed in larger, more diverse, and representative study populations. COSMOS-Clinic, for example, had less than 2% non-White in the multivitamin group and 5% non-White in the placebo group. It is critical that future treatments and preventions are effective in all populations,” Sexton told Medscape Medical News.

She noted that multivitamin supplements are “generally easy to find and relatively affordable. With confirmation, these promising findings have the potential to significantly impact public health — improving brain health, lowering healthcare costs, reducing caregiver burden — especially among older adults.”

The Alzheimer’s Association, Sexton told Medscape, “envisions a future where there are multiple treatments available that address the disease in multiple ways — like heart disease and cancer — and that can be combined into powerful combination therapies, in conjunction with brain-healthy guidelines for lifestyle, like diet and physical activity.”

The Alzheimer’s Association is leading a 2-year clinical trial known as US POINTER to evaluate whether lifestyle interventions that target multiple risk factors can protect cognition in older adults at increased risk for cognitive decline.

Why the world’s most powerful lasers could unlock secrets of the cosmos


Laser engineer Lauren Weinberg works on the Zeus laser system (Credit: Marcin Szczepanski/Michigan Engineering)

They are the most intense lasers ever built, and their beams are helping scientists probe the fabric of the Universe.

Inside a research lab at the University of Michigan, bright green light fills the vacuum chamber of a technological behemoth. It is the size of two tennis courts. The walls are shielded with 60cm (24in) of concrete to stop radiation leakage, and the staff wear masks and hairnets to ensure the delicate electronics are not affected.

This is Zeus, soon to be the most powerful laser in the US – and now it is whirring into life for its first official experiments.

Unlike the continuous lasers that scan your barcodes in shops, Zeus is a pulsed laser, firing in bursts a few quintillionths of a second long. Each pulse will be capable of reaching a peak power of three petawatts – equivalent to a thousand times the electricity consumption of the whole world. A laser capable of such extremely compressed energy will help researchers to study the quantum laws underpinning reality, for example, or recreate the conditions of extreme astrophysics out there in space.
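That power comparison is easy to sanity-check. A minimal sketch, assuming roughly 25,000 TWh of world electricity consumption per year (a round figure chosen for illustration, not a number from the article):

```python
# Rough sanity check: does a 3-petawatt pulse really exceed the world's
# electricity consumption a thousandfold?
# Assumption: ~25,000 TWh of electricity consumed per year worldwide.
world_annual_twh = 25_000                      # TWh per year (assumed)
hours_per_year = 365.25 * 24
world_avg_power_w = world_annual_twh * 1e12 / hours_per_year  # watts

zeus_peak_power_w = 3e15                       # 3 petawatts

ratio = zeus_peak_power_w / world_avg_power_w
print(f"World average electric power: {world_avg_power_w:.2e} W")
print(f"Zeus peak power is ~{ratio:.0f}x world consumption")
```

With these assumptions, the pulse peaks at roughly a thousand times the world’s average electrical power draw, consistent with the figure quoted above; the enormous power comes from packing modest energy into a vanishingly short time.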

But Zeus is not the only enormous laser that could reveal new discoveries in the coming years – a host of other high-powered lasers at facilities from Europe to Asia are hot on its heels. The field as a whole “is really growing”, says Karl Krushelnick, director of the Gérard Mourou Center for Ultrafast Optical Science at the University of Michigan. “People are pushing the technology, and looking for interesting science.”

The Zeus laser, which fired up this month, could help probe the nature of the Universe (Credit: Marcin Szczepanski/Michigan Engineering)

In the UK, a laser called Vulcan 20-20 will become the most powerful in the world when it is finished in 2029. It will produce a beam one million billion billion times brighter than the most intense sunlight. For less than a trillionth of a second, a single pulse will deliver more than six times as much power as is generated in the entire world, focused on a target measuring just a few micrometres (thousandths of a millimetre) across. Like Zeus, Vulcan 20-20 will invite scientists from around the world to undertake experiments that may rewrite our understanding of the cosmos and nuclear fusion, and even create new matter.

The 20-petawatt Vulcan 20-20 is an £85m ($106m/€98m) upgrade to the existing Vulcan at the Central Laser Facility (CLF) in Harwell, Oxfordshire – which is being dismantled. Currently the size of two Olympic swimming pools, the existing laser has metre-wide mirrors weighing 1.5 tonnes (3,300lbs) apiece, and thick white wires snake out of the laser aperture as the apparatus bends around the room. The original Vulcan was considered state of the art when it was built at the Rutherford Appleton Laboratory in 1997; the new laser will be 100 times brighter.

The “impressive thing isn’t just the power, though, but rather the intensity of the laser”, says Rob Clarke, experimental science group leader at the CLF. To understand that intensity, imagine 500 million million standard 40W lightbulbs. Now “compress that light into something around a tenth of the size of a human hair”, he says. “The result of that is a very, very intense source of light, and it is this that creates all the fun plasma stuff such as huge electric and magnetic fields, [and] particle acceleration.”

Vulcan 20-20 will allow scientists to conduct astrophysics research in the lab – recreating the conditions of distant galaxies to analyse the inner workings of the likes of stars or gas clouds, or how matter might behave when exposed to particular temperatures and densities. The field of study is driven by a desire to investigate the cosmos, explains Alex Robinson, the CLF’s lead theoretical plasma physicist. Astrophysical research is usually “observational”, he says. “You’re pointing some sort of telescope at it, and you see various things. But this then begs the question of what’s really going on.” The hope is that conducting experiments with a laser of such power will, for the first time, allow “really rigorous tests of whether certain theories could ever work or not”.

The UK’s Vulcan laser, now being dismantled to be replaced by a more powerful device (Credit: Central Laser Facility)

Among the mysteries researchers hope to investigate in Oxfordshire are the origins of magnetic fields, which surround the majority of substantial objects in the Universe, such as stars and planets. “Why are those magnetic fields there? It’s not completely obvious,” Robinson says, and no observation can ever go back and test why they first came into existence. One testing method might involve merging matter to create shockwaves, and adding manufactured turbulence of the kinds caused by molecular clouds, planets and dust, to see whether that “could give rise to magnetic fields”.

Other experiments will explore the origins of cosmic rays (high energy particles that can travel almost at the speed of light), how jets (sprays of particles that shoot out from high-energy collisions) are formed, and the structure inside giant planets.

Researchers will also use the Vulcan 20-20 laser to investigate the formation of new materials. A form of boron nitride, a material harder than diamond, appears to be metastable: created under the very high pressures and intensities manufactured in the lab, it can afterwards survive at ambient conditions. “And then the question is, what other materials could you make in the same way?” says Robinson. “Would they have fantastic electronic or optical properties? I don’t know. But at least there’s a nugget there telling us that there is something that’s worth exploring.”

Achieving fusion

Nuclear fusion is also on the ultra-high-powered lasers’ hit list. In July, researchers at Lawrence Livermore National Laboratory’s National Ignition Facility, in California, used lasers to achieve a net gain in energy for a second time. Following the centre’s original breakthrough last December, the experiment this year created a higher energy yield than the first, again raising hopes that clean energy might replace our existing power sources. (Fusion reactions don’t release greenhouse gases or long-lived radioactive waste.)

Fusion has also been one of the key areas of study at the Extreme Light Infrastructure for Nuclear Physics (ELI-NP) hub in Măgurele, Romania – which at 10-petawatt strength retains the title of the world’s most powerful laser (Mourou, its director and namesake of the University of Michigan facility, said its creation is “on a par with a lunar landing, where failure is not an option”).

Over the past year, the Romanian laser’s operator has begun partnering with private companies to develop technology that might fuel the world’s first commercial fusion plants. Using the “Chirped Pulse Amplification” technique that earned Mourou and Donna Strickland the Nobel Prize in Physics in 2018, laser pulses will be stretched in time, reducing their peak power, before being amplified and compressed again. This “pretty much changed the face of laser development in its entirety”, Clarke says: because the pulse is amplified while stretched, at low peak power, far higher final intensities can be reached safely (amplifying at high intensity risks destroying “the quality of the laser pulse and even the laser chain itself if you push it too hard,” says Clarke).
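The bookkeeping behind chirped pulse amplification can be sketched with toy numbers (all values below are illustrative, not parameters of any real facility):

```python
# Toy illustration of chirped pulse amplification (CPA) bookkeeping.
# All numbers are made up for illustration only.
energy_in_j = 1e-9          # initial pulse energy: 1 nanojoule
duration_s = 1e-13          # pulse length: 100 femtoseconds
stretch_factor = 1e4        # stretch the pulse 10,000x in time
gain = 1e10                 # total amplifier energy gain

def peak_power(energy_j, duration_s):
    """Peak power approximated as energy / duration."""
    return energy_j / duration_s

# Stretch: the same energy spread over a longer time -> low peak power,
# so the amplifier optics are not damaged.
stretched_peak = peak_power(energy_in_j, duration_s * stretch_factor)

# Amplify while stretched, then recompress to the original duration.
energy_out_j = energy_in_j * gain
final_peak = peak_power(energy_out_j, duration_s)

print(f"Peak power during amplification: {stretched_peak:.1e} W")
print(f"Peak power after recompression: {final_peak:.1e} W")
```

The point of the sketch is the asymmetry: the amplifier only ever sees the low stretched-pulse power, yet recompression restores the full duration-limited peak, many orders of magnitude higher.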

The research into the physical processes of this interaction is expected to be published within three years, ahead of the construction of the first commercial fusion power plants in the 2030s.

Bigger is better?

Physicists are keen to emphasise the collaborative nature of the field – but size still remains something of a bragging point. According to Chang Hee Nam, director of the Center for Relativistic Laser Science (CoReLS) in South Korea and professor at Gwangju Institute of Science and Technology, their laser currently “holds the highest intensity laser record” in the world, reaching 10²³ W/cm² – an intensity equivalent to all light on Earth focused to just over one micrometre, or less than a fiftieth the diameter of a human hair.

The South Korean scientists are using the technology to explore, among other things, proton therapy – a cancer treatment that directs beams of positively charged particles at patients’ tumours.

Research that can yield new medical applications, along with tests of century-old ideas about the state of the Universe, has been well explored on CoReLS’ four-petawatt machine – but the team aren’t stopping there. Nam says that “we are now pushing to have a higher petawatt laser; we are preparing some proposals for a 25 petawatt laser beam”. If commissioned within the next six years as he hopes, it will dwarf the as-yet-unbuilt Vulcan 20-20.

Still, Vulcan’s Clarke says that power and intensity are not everything. The most important metric now is “what can you do with it? What science are you driving? What are you going to get out of it?” These lasers, and the researchers working on them, care about one thing above all else, he says. “It’s about building it right, and using it right.”

Signals from the depths of the cosmos


The first stars were formed in a very different environment to that of today (Credit: Mark Garlick/SPL/Alamy Stock Photo)

One cannot help but wonder about the origin of the twinkling dots that span the night sky. The birth of the very first stars is intriguing because the environment they formed in would have been very different from today’s. After their formation, radiation from the stars ionised the hydrogen gas in the Universe. The period beginning with the birth of the first stars and galaxies, followed by the reionisation of the Universe, is often referred to as the cosmic dawn and epoch of reionisation [1].

This period lasted roughly from 100 million to 900 million years after the Big Bang. However, it is exceptionally challenging to observe these distant epochs directly. In the absence of observations, we know very little about this period, from the nature of these first sources of radiation to the exact timeline of the processes that occurred within it.

At the Raman Research Institute (RRI), the quest to understand the cosmic dawn was initiated by Ravi Subrahmanyan and N. Udaya Shankar. They envisioned telescopes that could conduct sensitive observations to understand the nature of the first stars and galaxies. One of the best ways to do so was to exploit the fact that hydrogen gas is abundant in the Universe. It was well established and observed [2] that hydrogen gas emits characteristic radiation at a frequency of 1,420.405 MHz, or equivalently at a wavelength of 21 cm. The sources of radiation during the cosmic dawn would imprint their signature on the brightness of this 21 cm signal, changing its shape and strength over cosmological timescales.
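The quoted frequency and wavelength are related by the standard formula λ = c/ν; a quick check using only the speed of light:

```python
# Verify that the hydrogen line at 1,420.405 MHz corresponds to ~21 cm.
c = 299_792_458.0          # speed of light, m/s
nu = 1_420.405e6           # hydrogen hyperfine frequency, Hz

wavelength_cm = c / nu * 100
print(f"Wavelength: {wavelength_cm:.1f} cm")
```

The result is about 21.1 cm, which is why this hyperfine transition is universally known as the 21 cm line.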

With this goal in mind, the Shaped Antenna measurement of the background Radio Spectrum (SARAS) experiment was initiated at RRI. It aims to detect the mean brightness of 21 cm radiation using a single antenna with calibratable receivers. SARAS, along with its contemporaries, marked the emergence of small-scale precision experiments for studying the cosmic dawn. The beauty of such experiments lies in their iterative abilities – they can be designed, developed, deployed and upgraded in a continuous cycle. The experiment requires ingenuity in antenna and electronics design, exceptional care in construction, meticulous selection of an observing site and algorithmic development for data modelling. Such a diverse range of activities requires an integrated approach to science and engineering. As a result, the SARAS team comprises scientists, engineers and students who specialise in different domains.

The reason for such a high-precision design becomes clear when one looks at the challenges involved. The signal from the cosmic dawn is expected to arrive on Earth stretched in wavelength to meters and lowered in frequency by the expansion of the Universe to lie in the radio frequency band 50–200 MHz. The celestial signal is exceptionally faint as it is buried in sky radio waves that come to us from the gas in our own Galaxy, the Milky Way, which are a million times brighter. More unfortunate for astronomers is that this cosmic signal is in a radio wavelength band used by terrestrial communications equipment and TV and FM radio stations, which makes detecting the extraterrestrial signal extremely difficult.
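The stretching described above is cosmological redshift: the observed frequency is the rest-frame 1,420.405 MHz divided by (1 + z). A short sketch of the redshift range implied by the 50–200 MHz band:

```python
# Redshift range implied by observing the 1,420.405 MHz hydrogen line
# in the 50-200 MHz radio band.
nu_rest = 1_420.405e6  # rest-frame frequency, Hz

def redshift(nu_obs_hz):
    """Redshift z such that nu_obs = nu_rest / (1 + z)."""
    return nu_rest / nu_obs_hz - 1

z_low = redshift(200e6)   # upper edge of the band -> lower redshift
z_high = redshift(50e6)   # lower edge of the band -> higher redshift
print(f"Band covers z from ~{z_low:.1f} (200 MHz) to ~{z_high:.1f} (50 MHz)")
```

The band therefore spans roughly z ≈ 6 to z ≈ 27, bracketing the cosmic dawn and epoch of reionisation described earlier.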

In 2018, soon after SARAS 2 became the first experiment to constrain the properties of the first generation of stars via 21 cm observations [3], the Experiment to Detect the Global EoR Signature (EDGES), led by astronomers at Arizona State University and the Massachusetts Institute of Technology, claimed to have detected the global 21 cm signal [4]. The strength of the reported signal from the cosmic dawn was wildly different from theoretical predictions, prompting speculation about how the Universe might differ from the accepted current understanding. These speculations included exotic physics, non-standard cosmology, new populations of galaxies in the early Universe and new models of dark matter that might have produced such an unusual signal [5]. However, appreciating that errors in instrument calibration might result in spurious detections in such difficult measurements, cross-verification of the claim became a priority.

SARAS took a different approach in its observations to reach the sensitivity required for such a cross-examination. To ensure a clean measurement, its antenna was floated on a raft on water. In an expedition in early 2020, the radio telescope was deployed on lakes in northern Karnataka, India: Dandiganahalli Lake and the backwaters of the Sharavati. After a rigorous statistical analysis, SARAS 3 did not find any evidence of the signal detected by the EDGES experiment [6]. The presence of the signal was rejected after a careful assessment of the measurement’s uncertainties. The findings implied that the detection reported by EDGES was likely a calibration error. SARAS was the first experiment to reach the sensitivity required to cross-verify the claim of signal detection. This research restored confidence in our understanding of the evolving Universe, re-establishing the prevailing standard cosmological model.

However, astronomers still need to know what the actual signal looks like. Having rejected the EDGES claim, the SARAS experiment is now geared towards discovering the true nature of the cosmic dawn. Since the publication of these findings, SARAS has undergone a series of upgrades and has conducted observations at a few radio-quiet locations in India. In the meantime, another RRI experiment, PRATUSH (Probing ReionisATion of the Universe using Signal from Hydrogen), is preparing to complement this quest from space. Given the challenges of ground-based observations, PRATUSH will fly in lunar orbit and conduct cosmological observations from the far side of the Moon, which is expected to be pristine, with no terrestrial radio frequency interference. It is currently in the pre-project studies phase, supported by the Indian Space Research Organisation.

The faint nature of the signal requires meticulous calibration of the telescope and robust cross-verification from different experiments. Therefore, SARAS and PRATUSH conducting observations from vastly different environments with unique challenges form an ideal set-up to look for the elusive 21 cm signal. A robust detection of the signal would help us unravel this last remaining gap in the history of our Universe.

Black Hole Discovery Helps to Explain Quantum Nature of the Cosmos


New insights from black hole research may elucidate the cosmological event horizon

AUTHOR

Edgar Shaghoulian is a theoretical physicist now at the University of California, Santa Cruz. His work focuses on black holes and quantum cosmology.

Where did the universe come from? Where is it headed? Answering these questions requires that we understand physics on two vastly different scales: the cosmological, referring to the realm of galaxy superclusters and the cosmos as a whole, and the quantum—the counterintuitive world of atoms and nuclei.

For much of what we would like to know about the universe, classical cosmology is enough. This field is governed by gravity as dictated by Einstein’s general theory of relativity, which doesn’t concern itself with atoms and nuclei. But there are special moments in the lifetime of our universe—such as its infancy, when the whole cosmos was the size of an atom—for which this disregard for small-scale physics fails us. To understand these eras, we need a quantum theory of gravity that can describe both the electron circling an atom and Earth moving around the sun. The goal of quantum cosmology is to devise and apply a quantum theory of gravity to the entire universe.

Quantum cosmology is not for the faint of heart. It is the Wild West of theoretical physics, with little more than a handful of observational facts and clues to guide us. Its scope and difficulty have called out to young and ambitious physicists like mythological sirens, only to leave them foundering. But there is a palpable feeling that this time is different and that recent breakthroughs from black hole physics—which also required understanding a regime where quantum mechanics and gravity are equally important—could help us extract some answers in quantum cosmology. The fresh optimism was clear at a recent virtual physics conference I attended, which had a dedicated discussion session about the crossover between the two fields. I expected this event to be sparsely attended, but instead many of the luminaries in physics were there, bursting with ideas and ready to get to work.

Event Horizons

The first indication that there is any relation between black holes and our universe as a whole is that both manifest “event horizons”—points of no return beyond which two people seemingly fall out of contact forever. A black hole attracts so strongly that at some point even light—the fastest thing in the universe—cannot escape its pull. The boundary where light becomes trapped is thus a spherical event horizon around the center of the black hole.

A person in space stands on a tiny globe looking toward a black empty sphere. The sphere’s edge is labeled “event horizon.”
Credit: Jen Christiansen

Our universe, too, has an event horizon—a fact confirmed by the stunning and unexpected discovery in 1998 that not only is space expanding, but its expansion is accelerating. Whatever is causing this speedup has been called dark energy. The acceleration traps light just as black holes do: as the cosmos expands, regions of space repel one another so strongly that at some point not even light can overcome the separation. This inside-out situation leads to a spherical cosmological event horizon that surrounds us, leaving everything beyond a certain distance in darkness. There is a crucial difference between cosmological and black hole event horizons, however. In a black hole, spacetime is collapsing toward a single point—the singularity. In the universe at large, all of space is uniformly growing, like the surface of a balloon that is being inflated. This means that creatures in faraway galaxies will have their own distinct spherical event horizons, which surround them instead of us. Our current cosmological event horizon is about 16 billion light-years away. As long as this acceleration continues, any light emitted today that is beyond that distance will never reach us. (Cosmologists also speak of a particle horizon, which confusingly is often called a cosmological horizon as well. This refers to the distance beyond which light emitted in the early universe has not yet had time to reach us here on Earth. In our tale, we will be concerned only with the cosmological event horizon, which we will often just call the cosmological horizon. These are unique to universes that accelerate, like ours.)
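The 16-billion-light-year figure can be reproduced numerically. A sketch assuming round cosmological parameters (H0 = 70 km/s/Mpc, matter density 0.3, dark energy density 0.7, flat universe; these are illustrative inputs, not the article's):

```python
# Numerically estimate the cosmological event horizon distance.
# Assumed round parameters: H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_L = 0.7.
H0 = 70.0            # Hubble constant, km/s/Mpc
om, ol = 0.3, 0.7    # matter and dark-energy density fractions
c_kms = 299_792.458  # speed of light, km/s

# Proper event horizon distance today:
#   d = c * integral_{t0}^{inf} dt / a(t)
# which, after substituting u = 1/a, becomes
#   d = (c/H0) * integral_0^1 du / sqrt(om*u^3 + ol)
n = 100_000
integral = sum(1.0 / (om * ((i + 0.5) / n) ** 3 + ol) ** 0.5
               for i in range(n)) / n   # midpoint rule

d_mpc = c_kms / H0 * integral          # distance in megaparsecs
d_gly = d_mpc * 3.2616e-3              # 1 Mpc ~ 3.2616 million light-years
print(f"Event horizon distance: ~{d_gly:.1f} billion light-years")
```

With these round parameters the integral gives roughly 16 billion light-years, matching the figure in the text; the exact value shifts slightly with the cosmological parameters chosen.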

A person is in the center of a sphere filled with galaxies. Beyond the “cosmic event horizon” sphere boundary is emptiness.
Credit: Jen Christiansen

The similarities between black holes and our universe don’t end there. In 1974 Stephen Hawking showed that black holes are not completely black: because of quantum mechanics, they have a temperature and therefore emit matter and radiation, just as all thermal bodies do. This emission, called Hawking radiation, is what causes black holes to eventually evaporate away. It turns out that cosmological horizons also have a temperature and emit matter and radiation because of a very similar effect. But because cosmological horizons surround us and the radiation falls inward, they reabsorb their own emissions and therefore do not evaporate away like black holes.
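Hawking's temperature has a compact standard form, stated here for reference with the usual constants (it does not appear in the article itself):

```latex
% Hawking temperature of a black hole of mass M (standard result)
T_H = \frac{\hbar c^3}{8 \pi G M k_B}
```

The temperature is inversely proportional to the mass M, so large black holes are extremely cold, and a black hole grows hotter as it evaporates and shrinks.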

Hawking’s revelation posed a serious problem: if black holes can disappear, so can the information contained within them—which is against the rules of quantum mechanics. This is known as the black hole information paradox, and it is a deep puzzle complicating the quest to combine quantum mechanics and gravity. But in 2019 scientists made dramatic progress. Through a confluence of conceptual and technical advances, physicists argued that the information inside a black hole can actually be accessed from the Hawking radiation that leaves the black hole. (For more on how scientists figured this out, see “How the Inside of a Black Hole Is Secretly on the Outside”).

This discovery has reinvigorated those of us studying quantum cosmology. Because of the mathematical similarities between black holes and cosmological horizons, many of us have long believed that we couldn’t understand the latter without understanding the former. Figuring out black holes became a warm-up problem—one of the hardest of all time. We haven’t fully solved our warm-up problem yet, but now we have a new set of technical tools that provide beautiful insight into the interplay of gravity and quantum mechanics in the presence of black hole event horizons.

Entropy and the Holographic Principle

Part of the recent progress on the black hole information paradox grew out of an idea called the holographic principle, put forward in the 1990s by Gerard ‘t Hooft of Utrecht University in the Netherlands and Leonard Susskind of Stanford University. The holographic principle states that a theory of quantum gravity that can describe black holes should be formulated not in the ordinary three spatial dimensions that all other physical theories use but instead in two dimensions of space, like a flat piece of paper. The primary argument for this approach is quite simple: a black hole has an entropy—a measure of how much stuff you can stick inside it—that is proportional to the two-dimensional area of its event horizon.

Holographic black hole primer demonstrates that a 3-D sphere can be mapped as a faceted surface and then flattened to 2-D.
Credit: Jen Christiansen

Contrast this with the entropy of a more traditional system—say, a gas in a box. In this case, the entropy is proportional to the three-dimensional volume of the box, not the area. This is natural: you can stick something at every point in space inside the box, so if the volume grows, the entropy grows. But because of the curvature of space within black holes, you can actually increase the volume without affecting the area of the horizon, and this will not affect the entropy! Even though it naively seems you have three dimensions of space to stick stuff in, the black hole entropy formula tells you that you have only two dimensions of space, an area’s worth. So the holographic principle says that because of the presence of black holes, quantum gravity should be formulated as a more prosaic nongravitational quantum system in fewer dimensions. At least then the entropies will match.
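The area scaling described above is the standard Bekenstein-Hawking entropy formula, quoted here for reference (the article does not write it out):

```latex
% Bekenstein-Hawking entropy: proportional to horizon area A,
% not to the enclosed volume; l_P is the Planck length
S_{BH} = \frac{k_B c^3 A}{4 G \hbar} = \frac{k_B A}{4 \ell_P^2}
```

Because only the horizon area A appears, the entropy counts one unit of information per Planck-sized patch of the horizon surface, which is exactly the two-dimensional scaling the holographic principle takes as its starting point.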

The idea that space might not be truly three-dimensional is rather compelling, philosophically. At least one dimension of it might be an emergent phenomenon that arises from its deeper nature rather than being explicitly hardwired into the fundamental laws. Physicists who study space now understand that it can emerge from a large collection of simple constituents, similar to other emergent phenomena such as consciousness, which seems to arise from basic neurons and other biological systems.

One of the most exciting aspects of the progress in the black hole information paradox is that it points toward a more general understanding of the holographic principle, which previously had been made precise only in situations very different from our real universe. In the calculations from 2019, however, the way the information inside of the black hole is encoded in the Hawking radiation is mathematically analogous to how a gravitational system is encoded in a lower-dimensional nongravitational system according to the holographic principle. And these techniques can be used in situations more like our universe, giving a potential avenue for understanding the holographic principle in the real world. A remarkable fact about cosmological horizons is that they also have an entropy, given by the exact same formula as the one we use for black holes. The physical interpretation of this entropy is much less clear, and many of us hope that applying the new techniques to our universe will shed light on this mystery. If the entropy is measuring how much stuff you can stick beyond the horizon, as with black holes, then we will have a sharp bound on how much stuff there can be in our universe.

Outside Observers

The recent progress on the black hole information paradox suggests that if we collect all the radiation from a black hole as it evaporates, we can access the information that fell inside the black hole. One of the most important conceptual questions in cosmology is whether the same is possible with cosmological event horizons. We think they radiate like black holes, so can we access what is beyond our cosmological event horizon by collecting its radiation? Or is there some other way to reach across the horizon? If not, then most of our vast, rich universe will eventually be lost forever. This is a grim image of our future—we will be left in the dark.

Almost all attempts to get a handle on this question have required physicists to artificially extricate themselves from the accelerating universe and imagine viewing it from the outside. This is a crucial simplifying assumption, and it more closely mimics a black hole, where we can cleanly separate the observer from the system simply by placing the observer far away. But there seems to be no escaping our cosmological horizon; it surrounds us, and it moves if we move, making this problem much more difficult. Yet if we want to apply our new tools from the study of black holes to the problems of cosmology, we must find a way to look at the cosmic horizon from the outside.

There are different ways to construct an outsider view. One of the simplest is to consider a hypothetical auxiliary universe that is quantum-mechanically entangled with our own and investigate whether an observer in the auxiliary universe can access the information in our cosmos, which is beyond the observer’s horizon. In work I did with Thomas Hartman and Yikun Jiang, both at Cornell University, we constructed examples of auxiliary universes and other scenarios and showed that the observer can access information beyond the cosmological horizon in the same way that we can access information beyond the black hole horizon. (A complementary paper by Yiming Chen of Princeton University, Victor Gorbenko of EPFL in Switzerland and Juan Maldacena of the Institute for Advanced Study [IAS] in Princeton, N.J., showed similar results.)

But these analyses all have one serious deficiency: when we investigated “our” universe, we used a model universe that is contracting instead of expanding. Such universes are much simpler to describe in the context of quantum cosmology. We don’t completely understand why, but it’s related to the fact that we can think of the interior of a black hole as a contracting universe where everything is getting squished together. In this way, our newfound understanding of black holes could easily help us study this type of universe.

Even in these simplified situations, we are puzzling our way through some confusing issues. One problem is that it’s easy to construct multiple simultaneous outsider views so that each outsider can access the information in the contracting universe. But this means multiple people can reach the same piece of information and manipulate it independently. Quantum mechanics, however, is exacting: not only does it forbid information from being destroyed, it also forbids information from being replicated. This idea is known as the no-cloning theorem, and the multiple outsiders seem to violate it. In a black hole, this isn’t a problem, because although there can still be many outsiders, it turns out that no two of them can independently access the same piece of information in the interior. This limit is related to the fact that there is only one black hole and therefore just one event horizon. But in an expanding spacetime, different observers have different horizons. Recent work that Adam Levine of the Massachusetts Institute of Technology and I did together, however, suggests that the same technical tools from the black hole context work to avoid this inconsistency as well.
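The no-cloning argument that this paragraph leans on is short enough to state. Suppose a single unitary operator $U$ could copy an arbitrary unknown state into a blank register. Because unitaries preserve inner products, applying the hypothetical cloner to two different states forces a contradiction:

```latex
\begin{aligned}
U\big(\lvert\psi\rangle\otimes\lvert 0\rangle\big)
   &= \lvert\psi\rangle\otimes\lvert\psi\rangle
   \qquad\text{for every state }\lvert\psi\rangle,\\[4pt]
\langle\psi\vert\varphi\rangle
   &= \big(\langle\psi\rvert\otimes\langle 0\rvert\big)\,U^{\dagger}U\,
      \big(\lvert\varphi\rangle\otimes\lvert 0\rangle\big)
    = \langle\psi\vert\varphi\rangle^{2}.
\end{aligned}
```

The last equality can hold only if $\langle\psi\vert\varphi\rangle$ equals $0$ or $1$, so any two distinct, nonorthogonal states rule out such a $U$. This is why multiple outsiders independently reading the same interior information would be a genuine inconsistency rather than a mere inconvenience.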

Toward a Truer Theory

Although there has been exciting progress, so far we have not been able to directly apply what we learned about black hole horizons to the cosmological horizon in our universe because of the differences between these two types of horizons.

The ultimate goal? No outsider view, no contracting universe, no work-arounds: we want a complete quantum theory of our expanding universe, described from our vantage point within the belly of the beast. Many physicists believe our best bet is to come up with a holographic description, meaning one using fewer than the usual three dimensions of space. There are two ways we can do this. The first is to use tools from string theory, which treats the elementary particles of nature as vibrating strings. When we configure this theory in exactly the right way, we can provide a holographic description of certain black hole horizons. We hope to do the same for the cosmological horizon. Many physicists have put a lot of work into this approach, but it has not yet yielded a complete model for an expanding universe like ours.

The other way to divine a holographic description is by looking for clues from the properties that such a description should have. This approach is part of the standard practice of science—use data to construct a theory that reproduces the data and hope it makes novel predictions as well. In this case, however, the data themselves are also theoretical. They are things we can reliably calculate even without a complete understanding of the full theory, just as we can calculate the trajectory of a baseball without using quantum mechanics. The idea works as follows: we calculate various things in classical cosmology, maybe with a little bit of quantum mechanics sprinkled in, but we try to avoid situations where quantum mechanics and gravity are equally important. This forms our theoretical data. For example, Hawking radiation is a piece of theoretical data. And what must be true is that the full, exact theory of quantum cosmology should be able to reproduce this theoretical datum in an appropriate regime, just as quantum mechanics can reproduce the trajectory of a baseball (albeit in a much more complicated way than classical mechanics).

Leading the charge in extracting these theoretical data is a powerful physicist with a preternatural focus on the problems of quantum cosmology: Dionysios Anninos of King’s College London has been working on the subject for more than a decade and has provided many clues toward a holographic description. Others around the world have also joined the effort, including Edward Witten of IAS, a figure who has towered over quantum gravity and string theory for decades but who tends to avoid the Wild West of quantum cosmology. With his collaborators Venkatesa Chandrasekaran of IAS, Roberto Longo of the University of Rome Tor Vergata and Geoffrey Penington of the University of California, Berkeley, he is investigating how the inextricable link between an observer and the cosmological horizon affects the mathematical description of quantum cosmology.

Sometimes we are ambitious and try to calculate theoretical data when quantum mechanics and gravity are equally important. Inevitably we have to impose some rule or guess about the behavior of the full, exact theory in such instances. Many of us believe that one of the most important pieces of theoretical data is the amount and pattern of entanglement between constituents of the theory of quantum cosmology. Susskind and I formulated distinct proposals for how to compute these data, and in hundreds of e-mails exchanged during the pandemic, we argued incessantly over which was more reasonable. Earlier work by Eva Silverstein of Stanford, another brilliant physicist with a long track record in quantum cosmology, and her collaborators provides yet another proposal for computing these theoretical data.

The nature of entanglement in quantum cosmology is a work in progress, but it seems clear that nailing it will be an important step toward a holographic description. Such a concrete, calculable theory is what the subject desperately needs, so that we can compare its outputs with the wealth of theoretical data that scientists are accumulating. Without this theory, we will be stuck at a stage akin to filling out the periodic table of elements without the aid of quantum mechanics to explain its patterns.

There is a rich history of physicists quickly turning to cosmology after learning something novel about black holes. The story has often been the same: we’ve been defeated and humbled, but after licking our wounds, we’ve returned to learn more from what black holes have to teach us. In this instance, the depth of what we’ve realized about black holes and the breadth of interest in quantum cosmology from scientists around the world may tell a different tale.

Here’s Your First Look at the Most Detailed Simulation of the Cosmos Ever Made


The largest simulation of the cosmos ever run has boldly taken us where we have never gone before: the formation of the universe. Illustris: The Next Generation (IllustrisTNG) used new computational methods to achieve a first-of-its-kind universe-scale simulation. The data-packed simulation has already fueled three papers, which were published Thursday, February 1, in the Monthly Notices of the Royal Astronomical Society.

The insights gleaned from the simulation have given researchers a new understanding of how black holes affect the distribution of the ever-elusive dark matter throughout galaxies. Not only could these powerful gravity wells be preventing older galaxies from producing new stars, they could also be influencing the emergence of cosmic structures.

A single simulation run required 24,000 processors and a timespan of more than two months. Germany’s fastest mainframe computer, the Hazel Hen machine at the High-Performance Computing Center Stuttgart, ran the simulation twice. “The new simulations produced more than 500 terabytes of simulation data,” Volker Springel, principal investigator from the Heidelberg Institute for Theoretical Studies, said in a press release. “Analyzing this huge mountain of data will keep us busy for years to come, and it promises many exciting new insights into different astrophysical processes.”

IllustrisTNG made these predictions by modeling the evolution of millions of galaxies in a representative region of the universe. The cube-shaped area has sides that are nearly 1 billion light-years long. In the previous version (called Illustris), the model area’s sides were only 350 million light-years long. The updated program also introduced some crucial physical processes that had not been included in previous simulations.
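Because simulated volume scales with the cube of the box's side length, the jump from Illustris to IllustrisTNG is larger than the side lengths alone suggest. A quick check, using the approximate figures quoted above:

```python
# Side lengths of the cubic simulation volumes, in millions of light-years
# (approximate figures quoted in the article).
illustris_side = 350   # original Illustris
tng_side = 1000        # IllustrisTNG ("nearly 1 billion light-years")

# Volume scales with the cube of the side length.
volume_ratio = (tng_side / illustris_side) ** 3

print(f"IllustrisTNG covers roughly {volume_ratio:.0f}x the volume of Illustris")
```

With these rounded inputs the new simulation spans roughly 23 times the volume of its predecessor, which is why it can capture galaxy clustering on scales the original could not.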

Verifiable Predictions

These updated features allowed IllustrisTNG to model a universe remarkably similar to our own. For the first time, the clustering patterns of the simulated galaxies demonstrated a high degree of realism in comparison to the patterns we see from powerful telescopes, such as those at the Sloan Digital Sky Survey.

If the program’s verifiable predictions about dark matter, galaxy formation, and magnetic fields continue to prove accurate, we’ll be able to put greater stock in the insights it provides about processes we haven’t been able to observe with even the most advanced telescopes.

“When we observe galaxies using a telescope, we can only measure certain quantities,” Shy Genel, a scientist at the Flatiron Institute’s Center for Computational Astrophysics who worked on the development of IllustrisTNG, said in the press release. “With the simulation, we can track all the properties for all these galaxies. And not just how the galaxy looks now, but its entire formation history.”

By mapping out the histories of model galaxies, we may get a glimpse of what the Milky Way looked like as Earth came into being. We could even get an idea of how our galaxy might evolve billions of years from now.

In the years to come, this simulation might prompt some astronomers to simply adjust their telescopes to look for newly predicted stellar processes. For example, the simulation of the cosmos predicted that galaxy collisions that form larger galaxies should produce faint stellar light. Specific details about where to look for this background glow could allow astronomers to confirm their theories about these intergalactic events.

NASA Is Testing the Telescope That Will Revolutionize Our View of the Cosmos


IN BRIEF

The James Webb Space Telescope, the highly anticipated successor of Hubble, recently successfully completed cryogenic vacuum testing. This round of testing is one of the last major milestones before the telescope is finally launched.

TELESCOPE TESTING

In 2017, the James Webb Space Telescope (JWST) successfully completed cryogenic vacuum testing that lasted for over 100 days, confirming the instrument’s capabilities and its potential as a full observatory. In a NASA media briefing on January 10, officials at the Johnson Space Center in Houston discussed these efforts and the magnitude of this successful testing. The “world’s largest space freezer,” as described by Mark Voyton, manager of the Webb telescope’s Optical Telescope Element and Integrated Science Instrument Module (OTIS) at Goddard, allowed the team to test the instrument and its pieces at the extreme temperatures it will endure in its missions.

Additionally, this testing showed that all mirrors and instrument models were aligned, with the primary mirror’s 18 segments all operating as one monolithic mirror. The tests also allowed NASA to exercise operations as they would occur in orbit, confirm that the integrated fine guiding system can track a star through the optical system, and ensure that the telescope could maintain correct observatory pointing. This laundry list of successful testing puts the JWST right on schedule to move forward and open our eyes to previously unseen corners of the universe.

The Webb testing was completed in Chamber A, a thermal-vacuum test facility that was first made famous in testing the Apollo spacecraft. While the Apollo tests were completed with both extreme heat and cold in mind, the chamber was heavily modified for the JWST. The Apollo craft were tested at temperatures as low as 100 Kelvin, but with these modifications, testing commenced at temperatures as low as 40 Kelvin with no high-temperature testing.

The success of this testing is not only a significant milestone for the James Webb Space Telescope and its highly anticipated 2019 launch; it’s also a testament to the human spirit. This cryogenic testing continued 24/7 throughout Hurricane Harvey, uninterrupted, as the project’s international teams worked together in a collaborative effort.

MOVING FORWARD

After the success of this testing, the JWST will be transported for integration into a complete observatory and to undergo final environmental testing before traveling to its launch site. While there was a delay that pushed the launch from 2018 to 2019, the telescope is currently right on track to successfully make its launch window.

Artist conception of the James Webb Space Telescope observing the cosmos.

The capabilities of the JWST will far surpass anything that has been created before. This mammoth telescope, described by Voyton as “the world’s most magnificent time machine,” demonstrated a piece of this capability in testing: it detected, with all four instruments, the light of a simulated star for the first time. The fine guidance subsystem succeeded not only in generating the position of the light but also in tracking its movement. This was a first in testing, and it shows the remarkable applications this telescope will have.

Because it is an infrared telescope, as opposed to a visible-light telescope like Hubble, the James Webb Space Telescope requires a cold environment such as the one it was tested in. This will allow it to observe light from some of the earliest moments of the universe. It will also give us clarity in viewing exoplanets that we’ve only dreamed of before, closely observing Earth-like planets that could help confirm the existence of extraterrestrial life.

It hasn’t even left Earth yet, but this phenomenal instrument continues to inspire.

Communicating Across the Cosmos


The cover of the phonograph record carried on the Voyager 1 and 2 spacecraft, engraved with instructions that attempt to explain to extraterrestrials how to play the record.  Credit: NASA JPL

If extraterrestrial civilizations exist, the nearest is probably at least hundreds or thousands of light years away. Still, the greatest gulf that we will have to bridge to communicate with extraterrestrials is not such distances, but the gulf between human and alien minds.

In mid-November, the SETI Institute in Mountain View, California, sponsored an academic conference on interstellar communication, “Communicating Across the Cosmos.” The conference drew 17 speakers from a variety of disciplines, including linguistics, anthropology, archeology, mathematics, cognitive science, radio astronomy, and art. In this installment we will explore some of the formidable difficulties that humans and extraterrestrials might face in constructing mutually comprehensible interstellar messages.

Optical PAyload for Lasercomm Science (OPALS) Flight System, the first laser communication from space. Credit: NASA/JPL-Caltech.

If we knew where they were, and we wanted to reach them, the information revolution has given us the capability to send an extraterrestrial civilization a truly vast amount of information. According to SETI Institute radio astronomer Seth Shostak, with broadband microwave radio we could transmit the contents of the Library of Congress, or of the World Wide Web, in three days; with broadband optical (a laser beam for space transmission) we could transmit the same amount of information in 20 minutes. This transmission would, of course, take decades or centuries to cross the light-years and reach its destination. These truly remarkable capabilities give us the ability to send almost any message we want to the extraterrestrials. But transmitting capabilities aren’t the hard part of the problem. If the aliens can’t interpret it, the entire content of the World Wide Web is just a mountain of gibberish.
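Comparisons like Shostak's come down to simple transfer-time arithmetic: payload size times eight bits per byte, divided by the link's bit rate. The sketch below uses hypothetical figures (a 100-terabyte archive, a 1 Gb/s microwave link, and a 200 Gb/s optical link) rather than his actual assumptions:

```python
def transfer_time_seconds(data_bytes: float, bits_per_second: float) -> float:
    """Time to transmit a payload over a link at a given raw bit rate."""
    return data_bytes * 8 / bits_per_second

# Hypothetical figures, chosen for illustration only.
archive = 100e12     # a 100-terabyte archive
microwave = 1e9      # a 1 Gb/s microwave link
optical = 200e9      # a 200 Gb/s optical link

print(f"microwave: {transfer_time_seconds(archive, microwave) / 86400:.1f} days")
print(f"optical:   {transfer_time_seconds(archive, optical) / 60:.1f} minutes")
```

Under these assumptions the microwave link needs about nine days and the optical link about an hour; the point is the ratio between the two, which tracks the bandwidth ratio exactly, not the specific payload.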

Many conference participants felt that the problems involved in devising a message that could be understood by a non-human mind were extremely formidable, and quite possibly insurmountable.

Having its own separate origin, extraterrestrial life could be different from Earthly life all the way down to its biochemical foundations. The vast diversity of life on Earth gives us little reason to think that aliens will look like us. Given the different conditions of another planet, and the contingencies of a different history, evolution will have produced a different set of results. For interstellar messaging to be possible at all, these results must include an alien creature capable of language, culture, and tool-making. But if these abilities are founded on a different biology and different perceptual systems, they might differ from their human counterparts in ways that we would find hard to even imagine. Looking to our own possible future development, we can’t even be sure that extraterrestrials will be biological creatures. They might be intelligent machines.

According to cognitive scientist Dominique Lestel, who presented at the conference, understanding extraterrestrials poses an unprecedented set of problems. We face all of the problems that ethologists (scientists who study animal behavior) face when they study perception and signaling in other animal species. These are compounded with all of the problems that ethnologists face when they study other human cultures. Lestel worries that humans might not be smart enough to do it. He wasn’t alone in that opinion.

Explanation of the symbols on the cover of the Voyager record Credit: NASA JPL

Linguist and conference presenter Sheri Wells-Jensen said that humans have created more than 7,000 different spoken and signed languages. No one knows whether all human languages sprang from a single invention of language or whether several human groups invented language independently. Given the ease with which children learn a language, many linguists think that our brain has a specialized language “module” underlying the “universal” grammar of human languages. These special features of the human brain might pose a formidable barrier to learning the language of a creature with a different brain produced by a different evolutionary history. An alien language might make demands on our short-term memory or other cognitive abilities that humans would find impossible to meet.

When human beings talk to one another, they rely on a system of mutually understood conventions. Often gestures and body language are essential to conveying meaning. Conference presenter Klara Anna Capova, a cultural anthropologist, noted that interstellar messaging poses unique problems because the conventions to be followed in the message can’t be mutually arranged. We must formulate them ourselves, without knowing anything about the recipients. The intended recipients are distant in both time and space. The finite speed of light ensures that query and response will be separated by decades or centuries. With so little to go on, the message will inevitably reflect our cultural biases and motives. In 1962, the Soviet Union transmitted a message towards the planet Venus. It was in Morse code, and consisted of the Cyrillic characters “Lenin”, “CCCP” (USSR), and “MIR” (the Russian word for “peace”). But the posited Venusians couldn’t possibly have known the conventions of Morse code, the Cyrillic alphabet, human names, countries, or possible relationships between them, no matter how intimately familiar these things would have seemed to the Soviets. Whether they are meant to build national prestige, sell a product, or cause humans to think deeply about their place in the universe, interstellar messages play to a human audience.

Given the long timescales involved in interstellar messaging, many conference participants noted the parallels with archeology. Archeologists have learned quite a lot about past human cultures by studying the artifacts and symbols they have left for us. Still, archeological methodologies have their limits. According to conference presenter and archeologist Paul Wason, these limits have much to teach us about interstellar messaging. Certain meanings are accessible to archeological analysis and others aren’t, because we lack the contextual knowledge needed to interpret them. Neolithic cave paintings speak to modern investigators about the skill and abilities of the painters. But, because we don’t have the needed contextual knowledge, they don’t tell us what the paintings meant to their creators.

To interpret symbols used in the past, we need to know the conventions that related the symbols to the things they symbolized. Linguistic symbols pose special problems. To understand them, we need to know two different sets of conventions. First, we need to know the conventions that relate the script to the words of the spoken language. Second, we need to know how the words of the spoken language relate to the things and situations it refers to. It is a sobering thought for would-be exolinguists that no one has ever succeeded in deciphering an ancient script without knowing the language it was written in.

What does all this tell us about our fledgling attempts to devise messages for aliens? The phonograph record carried on the Voyager 1 and 2 spacecraft includes a moving message from then President Carter, encoded as English text. It reads in part: “We hope someday, having solved the problems we face, to join a community of galactic civilizations. This record represents our hope and our determination, and our good will in a vast and awesome universe.”

Human archeologists have never deciphered Linear A, the writing system of the ancient Minoan civilization, due to its apparent lack of association with any known language. Unfortunately, since extraterrestrials likewise lack contextual knowledge of any human language, it is almost certain that they could never discern the meaning of President Carter’s text. The team that developed the Voyager message, which included astronomers and SETI pioneers Carl Sagan and Frank Drake, was well aware of the problem. Carter himself was most likely made aware. Interstellar messages play to a human audience.

An inscription written around the inner surface of a cup in Linear A, a script used by the Minoan civilization that has never been deciphered.  Credit: Sir Arthur Evans, Scripta Minoa: The Written Documents of Minoan Crete

Is it possible for us to do better? Some off-beat ideas were proposed. Both astronomer Seth Shostak and designer Marek Kultys thought we might consider sending the sequence of the human genome. This idea was quickly shot down by a comment from the audience: why send them a key if the aliens don’t have a lock? The metaphor is apt. DNA can only do its job as a constituent part of a living cell. Reading and implementing the genetic code involves numerous highly specialized enzymes and other cellular parts. Even if alien biochemistry and cell structure are generally similar to their Earthly counterparts, there are many features of Earthly biochemistry that appear to be quirky products of the history of life on Earth. The probability that they would repeat themselves precisely on another world is, for all practical purposes, nil. Without the context of an Earthly cell, the sequence of the human genome would be meaningless gibberish.

In the twenty-first century, our ability to transmit and process information has become astounding, but we still don’t know how information conveys meaning. Is there even a glimmer of hope that we can reach beyond the limitations of our humanity to convey meaning to an alien mind? In the final installment of this report, we’ll consider some possibilities.

Carl Sagan on Science and Spirituality


 “The notion that science and spirituality are somehow mutually exclusive does a disservice to both.”

The friction between science and religion stretches from Galileo’s famous letter to today’s leading thinkers. And yet we’re seeing that, for all its capacity for ignorance, religion might have some valuable lessons for secular thought, and the two need not be regarded as opposites.


In 1996, mere months before his death, the great Carl Sagan — cosmic sage, voracious reader, hopeless romantic — explored the relationship between the scientific and the spiritual in The Demon-Haunted World: Science as a Candle in the Dark (public library). He writes:

Plainly there is no way back. Like it or not, we are stuck with science. We had better make the best of it. When we finally come to terms with it and fully recognize its beauty and its power, we will find, in spiritual as well as in practical matters, that we have made a bargain strongly in our favor.

But superstition and pseudoscience keep getting in the way, distracting us, providing easy answers, dodging skeptical scrutiny, casually pressing our awe buttons and cheapening the experience, making us routine and comfortable practitioners as well as victims of credulity.

And yet science, Sagan argues, isn’t diametrically opposed to spirituality. He echoes Ptolemy’s timeless awe at the cosmos and reflects on what Richard Dawkins has called the magic of reality, noting the intense spiritual elevation that science is capable of producing:

In its encounter with Nature, science invariably elicits a sense of reverence and awe. The very act of understanding is a celebration of joining, merging, even if on a very modest scale, with the magnificence of the Cosmos. And the cumulative worldwide build-up of knowledge over time converts science into something only a little short of a trans-national, trans-generational meta-mind.

“Spirit” comes from the Latin word “to breathe.” What we breathe is air, which is certainly matter, however thin. Despite usage to the contrary, there is no necessary implication in the word “spiritual” that we are talking of anything other than matter (including the matter of which the brain is made), or anything outside the realm of science. On occasion, I will feel free to use the word. Science is not only compatible with spirituality; it is a profound source of spirituality. When we recognize our place in an immensity of light years and in the passage of ages, when we grasp the intricacy, beauty and subtlety of life, then that soaring feeling, that sense of elation and humility combined, is surely spiritual. So are our emotions in the presence of great art or music or literature, or of acts of exemplary selfless courage such as those of Mohandas Gandhi or Martin Luther King Jr. The notion that science and spirituality are somehow mutually exclusive does a disservice to both.

Reminding us once again of his timeless wisdom on the vital balance between skepticism and openness and the importance of evidence, Sagan goes on to juxtapose the accuracy of science with the unfounded prophecies of religion:

Not every branch of science can foretell the future — paleontology can’t — but many can and with stunning accuracy. If you want to know when the next eclipse of the Sun will be, you might try magicians or mystics, but you’ll do much better with scientists. They will tell you where on Earth to stand, when you have to be there, and whether it will be a partial eclipse, a total eclipse, or an annular eclipse. They can routinely predict a solar eclipse, to the minute, a millennium in advance. You can go to the witch doctor to lift the spell that causes your pernicious anaemia, or you can take vitamin B12. If you want to save your child from polio, you can pray or you can inoculate. If you’re interested in the sex of your unborn child, you can consult plumb-bob danglers all you want (left-right, a boy; forward-back, a girl – or maybe it’s the other way around), but they’ll be right, on average, only one time in two. If you want real accuracy (here, 99 per cent accuracy), try amniocentesis and sonograms. Try science.

Think of how many religions attempt to validate themselves with prophecy. Think of how many people rely on these prophecies, however vague, however unfulfilled, to support or prop up their beliefs. Yet has there ever been a religion with the prophetic accuracy and reliability of science? There isn’t a religion on the planet that doesn’t long for a comparable ability — precise, and repeatedly demonstrated before committed skeptics — to foretell future events. No other human institution comes close.

Source: http://www.brainpickings.org