The race to save the Internet from quantum hackers


The quantum computer revolution could break encryption — but more-secure algorithms can safeguard privacy.

Illustration by Dalbert B. Vilarino

In cybersecurity circles, they call it Q-day: the day when quantum computers will break the Internet.

Almost everything we do online is made possible by the quiet, relentless hum of cryptographic algorithms. These are the systems that scramble data to protect our privacy, establish our identity and secure our payments. And they work well: even with the best supercomputers available today, breaking the codes that the online world currently runs on would be an almost hopeless task.

But machines that will exploit the quirks of quantum physics threaten that entire deal. If they reach their full scale, quantum computers would crack current encryption algorithms exponentially faster than even the best non-quantum machines can. “A real quantum computer would be extremely dangerous,” says Eric Rescorla, chief technology officer of the Firefox browser team at Mozilla in San Francisco, California.

As in a cheesy time-travel trope, the machines that don’t yet exist endanger not only our future communications, but also our current and past ones. Data thieves who eavesdrop on Internet traffic could already be accumulating encrypted data, which they could unlock once quantum computers become available, potentially viewing everything from our medical histories to our old banking records. “Let’s say that a quantum computer is deployed in 2024,” says Rescorla. “Everything you’ve done on the Internet before 2024 will be open for discussion.”

Even the most bullish proponents of quantum computing say we’ll have to wait a while until the machines are powerful enough to crack encryption keys, and many doubt it will happen this decade — if at all.

But the risk is real enough that the Internet is being readied for a makeover, to limit the damage if Q-day happens. That means switching to stronger cryptographic systems, or cryptosystems. Fortunately, decades of research in theoretical computer science has turned up plenty of candidates. These post-quantum algorithms seem impervious to attack: even using mathematical approaches that take quantum computing into account, programmers have not yet found ways to defeat them in a reasonable time.

Which of these algorithms will become standard could depend in large part on a decision soon to be announced by the US National Institute of Standards and Technology (NIST) in Gaithersburg, Maryland.

In 2015, the US National Security Agency (NSA) announced that it considered current cryptosystems vulnerable, and advised US businesses and the government to replace them. The following year, NIST invited computer scientists globally to submit candidate post-quantum algorithms to a process in which the agency would test their quality, with the help of the entire crypto community. It has since winnowed down its list from 65 to 15. In the next couple of months, it will select a few winners, and then publish official versions of those algorithms. Similar organizations in other countries, from France to China, will make their own announcements.

But that will be only the beginning of a long process of updating the world’s cryptosystems — a change that will affect every aspect of our lives online, although the hope is that it will be invisible to the average Internet user. Experience shows that it could be a bumpy road: early tests by firms such as Google haven’t all run smoothly.

“I think it’s something we know how to do; it’s just not clear that we’ll do it in time,” Peter Shor, a mathematician at the Massachusetts Institute of Technology in Cambridge whose work showed the vulnerabilities of present-day encryption, told Nature in 2020.

Even if Q-day never happens, the possibility of code-breaking quantum machines has already changed computer science — and, in particular, the ancient art of cryptography. “Most people I know think in terms of quantum-resistant crypto,” says computer scientist Shafi Goldwasser, director of the Simons Institute for the Theory of Computing at the University of California, Berkeley.

Peter Shor showed that quantum algorithms could defeat cryptographic systems. Credit: BBVA Foundation

Birth of public-key cryptography

Armies and spies have always been able to send messages securely even when a channel — be it a messenger pigeon or a radio link — is susceptible to eavesdropping, as long as their messages are encrypted. Until the 1970s, however, this required the two parties to agree on a shared secret cipher in advance.

Then, in 1976, three US computer scientists, Whitfield Diffie, Martin Hellman and Ralph Merkle, came up with the revolutionary concept of public-key cryptography, which allows two people to exchange information securely even if they have made no prior agreement. The idea rests on a mathematical trick that uses two numbers: one, the public key, is used to encrypt a message, and it is different from the second, the private key, used to decrypt it. Someone who wants to receive confidential messages can announce their public key to the world, say, by printing it in a newspaper. Anyone can use the public key to scramble their message and share it openly. Only the receiver knows the private key, enabling them to unscramble the information and read it.

In practice, public keys are not typically used to encrypt the data, but to securely share a conventional, symmetric key — one that both parties can use to send confidential data in either direction. (Symmetric-key systems can also be weakened by existing quantum algorithms, but not in a catastrophic way.)

For the first two decades of the Internet age, starting in the mid-1990s, the most commonly used public-key-exchange algorithm was RSA, named after its inventors, Ron Rivest, Adi Shamir and Leonard Adleman.

RSA is based on prime numbers — whole numbers such as 17 or 53 that are not evenly divisible by any numbers except themselves and 1. The public key is the product of at least two prime numbers. Only one party knows the factors, which constitute the private key. Privacy is protected by the fact that, although multiplying two large numbers is straightforward, finding the unknown prime factors of a very large number is extremely hard.
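To make the trapdoor concrete, here is a minimal Python sketch of textbook RSA using deliberately tiny primes. The specific numbers are arbitrary choices for illustration; real RSA uses primes hundreds of digits long and padding schemes, never bare exponentiation.

```python
# Toy RSA key pair from tiny primes (illustration only).
p, q = 61, 53                        # secret prime factors: the private knowledge
n = p * q                            # public modulus (3233): easy to publish,
                                     # hard to factor once p and q are enormous
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, computable only by someone
                                     # who knows the factors of n (Python 3.8+)
message = 65
ciphertext = pow(message, e, n)      # anyone can encrypt with the public (n, e)
assert pow(ciphertext, d, n) == message   # only the private-key holder can decrypt
```

Factoring 3233 by hand is easy; factoring a modulus thousands of bits long with classical computers is not, which is what the scheme's security rests on.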

More recently, the Internet has been transitioning away from RSA, which is vulnerable even to classical — as opposed to quantum — attacks. In 2018, the Internet Engineering Task Force (IETF), a consensus-based virtual organization that steers the adoption of security standards on a global scale, endorsed another public-key system to replace it. That system is called elliptic-curve cryptography, because its mathematics grew out of a branch of nineteenth-century geometry that studies objects called elliptic curves.

Elliptic-curve cryptography is based on calculating the nth power of an integer (which is associated with a point on the curve). Only one party knows the number n, which is the private key. Calculating the exponential of a number is easy, but given the result, it is extremely hard to find what n was. This technique is faster and more secure than RSA.
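The "hard to invert" property described here is the discrete-logarithm problem. The sketch below is a loose illustration rather than real elliptic-curve code: it uses plain modular exponentiation as a stand-in for curve-point arithmetic, and the modulus, base and exponent are arbitrary toy values.

```python
# One-way operation behind Diffie-Hellman-style key exchange, illustrated with
# modular exponentiation instead of elliptic-curve point arithmetic.
p = 2**127 - 1                   # a prime modulus (a Mersenne prime, chosen arbitrarily)
g = 5                            # a public base (arbitrary toy choice)
n = 123_456_789_012_345_678      # the private key: a secret exponent

public_value = pow(g, n, p)      # fast to compute even for huge n (repeated squaring)

# Going backwards -- recovering n from (g, p, public_value) -- is the discrete-
# logarithm problem: no efficient classical algorithm is known, but Shor's
# quantum algorithm solves it, which is why elliptic-curve keys are also at risk.
```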

All sorts of devices, from mobile phones to cars, use public-key encryption to connect to the Internet. The technology has also spread beyond cyberspace: for example, the radio-frequency chips in everything from credit cards to security passes typically use elliptic-curve algorithms.

Breaking RSA

Just as the number of Internet users worldwide — and the use of public-key cryptosystems such as RSA — was beginning to grow exponentially, Shor, then at AT&T Bell Laboratories in Murray Hill, New Jersey, laid the groundwork for those algorithms’ demise. He showed in 1994 how a quantum computer should be able to factor large numbers into primes exponentially faster than a classical computer can (P. W. Shor Proc. 35th Annu. Symp. Found. Comput. Sci. 124–134; 1994). One of the steps in Shor’s quantum algorithm can efficiently break an elliptic-curve key, too.

Shor’s was not the first quantum algorithm, but it was the first to show that quantum computers could tackle practical problems. At the time, it was largely a theoretical exercise, because quantum computers were still dreams for physicists. But later that decade, researchers at IBM performed the first proofs of principle of quantum calculations, by manipulating molecules in a nuclear magnetic resonance machine. By 2001, they had demonstrated that they could run Shor’s algorithm — but only to calculate that the prime factors of 15 are 3 and 5. Quantum-computing technology has made enormous progress since then, but running Shor’s algorithm on a large integer is still a long way off.

Still, after Shor’s breakthrough, the crypto-research world began to pay attention to the possibility of a Q-day. Researchers had already been studying alternative public-key algorithms, and the news attracted lots of talent to the field, says Goldwasser.

Lattice-based systems

The majority of the algorithms that made it to NIST’s final roster rely, directly or indirectly, on a branch of cryptography that was developed in the 1990s from the mathematics of lattices. It uses sets of points located at the crossings of a lattice of straight lines that extend throughout space. These points can be added to each other using the algebra of vectors; some can be broken down into sums of smaller vectors. If the lattice has many dimensions — say, 500 — it is very time-consuming to calculate the smallest such vectors. This is similar to the situation with prime numbers: the person who knows the short vectors can use them as a private key, but solving the problem is extremely hard for everyone else.
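For a sense of scale, the toy script below brute-forces the shortest nonzero vector of a two-dimensional lattice generated by a deliberately skewed basis (the basis numbers are made up for illustration). The same exhaustive search becomes hopeless as the dimension climbs toward the hundreds used in real lattice-based systems.

```python
from itertools import product

# Brute-force the shortest nonzero vector of a toy 2-D lattice.
basis = [(201, 37), (1648, 297)]          # a skewed basis (illustrative values)

def shortest_vector(basis, bound=50):
    best, best_len = None, float("inf")
    for a, b in product(range(-bound, bound + 1), repeat=2):
        if a == 0 and b == 0:
            continue                       # skip the zero vector
        v = (a * basis[0][0] + b * basis[1][0],
             a * basis[0][1] + b * basis[1][1])
        length = (v[0] ** 2 + v[1] ** 2) ** 0.5
        if length < best_len:
            best, best_len = v, length
    return best, best_len

print(shortest_vector(basis))              # easy in 2 dimensions, infeasible in ~500
```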

Since the 1990s, researchers have developed a plethora of public-key encryption algorithms that either use lattices directly, or are somehow related to them. One of the earliest types, developed in 1996, is called NTRU. Its keys consist of polynomials with integer coefficients, but it is considered secure because of its theoretical similarity to lattice problems. To show that a cryptosystem is trustworthy, researchers often prove that it is at least as hard to crack as a lattice problem.

A popular approach to lattice-based cryptography is called learning with errors (LWE), which forms the basis for several of the NIST finalists. It was introduced in 2005 by computer scientist Oded Regev at New York University. In its simplest form, it relies on arithmetic. To create a public key, the person who wants to receive a message picks a large, secret number — the private key. They then calculate several multiples of that number and add random ‘errors’ to each: the resulting list of numbers is the public key. The sender adds up these whole numbers and another number that represents the message, and sends the result.

To get the message back, all the receiver has to do is divide it by the secret key and calculate the remainder. “It’s really high-school level of mathematics,” Regev says.
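Below is a minimal, insecure sketch of that integer-based description. To make decryption exact it adds one convention the article does not spell out: the small errors are kept even and the message is a single bit, recovered as the parity of the remainder (a trick borrowed from related integer schemes). All parameters are toy values, and summing a random subset of the public numbers stands in for "adding up these whole numbers".

```python
import random

def keygen(bits=64, n_samples=10, noise=16):
    # Private key: one large secret integer q (top bit forced so it is genuinely large).
    q = random.getrandbits(bits) | (1 << (bits - 1)) | 1
    # Public key: multiples of q, each perturbed by a small *even* error
    # (the even-error convention is an added assumption so decryption is exact).
    pub = [q * random.getrandbits(bits) + 2 * random.randrange(noise)
           for _ in range(n_samples)]
    return q, pub

def encrypt(pub, bit):
    # Sender sums a random subset of the public numbers and adds the message bit.
    return sum(random.sample(pub, len(pub) // 2)) + bit

def decrypt(q, ciphertext):
    # Receiver reduces modulo the secret key; what remains is (small even error + bit),
    # so the parity of the remainder recovers the bit.
    return (ciphertext % q) % 2

q, pub = keygen()
for bit in (0, 1):
    assert decrypt(q, encrypt(pub, bit)) == bit
```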

The profound step was Regev’s proof in 2009 that anyone who breaks this algorithm would also be able to break the seemingly more complex lattice problem. This means that LWE has the same security as lattices, but without having to deal with multi-dimensional vectors, Goldwasser says. “It’s a great formulation, because it makes it easy to work with.” Ironically, Regev discovered LWE during an unsuccessful attempt to find a quantum algorithm that would break the lattice problem. “Sometimes failure is success,” he says.

Oded Regev introduced a branch of lattice-based cryptography called learning with errors. Credit: Oded Regev

Researchers have since worked on tackling a drawback of lattice-based systems. “Lattice-based cryptography suffers from huge public keys,” says Yu Yu, a cryptographer at Shanghai Jiao Tong University in China. Whereas the public key of a current Internet application is the size of a tweet, lattice-based encryption typically requires keys that are as large as one megabyte or more. ‘Structured lattice’ systems use what are essentially algebraic tweaks to drastically reduce the public key’s size, but that can leave them more open to attack. Today’s best algorithms have to strike a delicate balance between size and efficiency.

Quantum candidates

In 2015, the NSA’s unusually candid admission that quantum computers were a serious risk to privacy made people in policy circles pay attention to the threat of Q-day. “NSA doesn’t often talk about crypto publicly, so people noticed,” said NIST mathematician Dustin Moody in a talk at a cryptography conference last year.

Under Moody’s lead, NIST had already been working on the contest that it announced in 2016, in which it invited computer scientists to submit candidate post-quantum algorithms for public-key cryptography, releasing them for scrutiny by the research community. At the same time, NIST called for submissions of digital-signature algorithms — techniques that enable a web server to establish its identity, for example, to prevent scammers from stealing passwords. The same mathematical techniques that enable public-key exchanges usually apply to this problem, too, and current digital-signature systems are similarly vulnerable to quantum attacks.

Teams from academic laboratories and companies, with members from four dozen countries on six continents, submitted 82 algorithms, of which 65 were accepted. True to their creators’ nerd credentials, many of the algorithms’ names had Star Wars, Star Trek or Lord of the Rings themes, such as FrodoKEM, CRYSTALS-DILITHIUM or New Hope.

The algorithms are being judged by both their security and their efficiency, which includes the speed of execution and compactness of the public keys. Any algorithms that NIST chooses to standardize will have to be royalty-free.

As soon as the algorithms were submitted, it was open season. Crypto researchers delight in breaking each other’s algorithms, and after NIST’s submissions were made public, several of the systems were quickly broken. “I think people had a lot of fun looking at those algorithms,” says Moody.

Although NIST is a US government agency, the broader crypto community has been pitching in. “It is a worldwide effort,” says Philip Lafrance, a mathematician at computer-security firm ISARA Corporation in Waterloo, Canada. This means that, at the end of the process, the surviving algorithms will have gained wide acceptance. “The world is going to basically accept the NIST standards,” he says. He is part of a working group that is monitoring the NIST selection on behalf of the European Telecommunications Standards Institute, an umbrella organization for groups worldwide. “We do expect to see a lot of international adoption of the standard that we’ll create,” says Moody.

Still, because cryptography affects sensitive national interests, other countries are keeping a close eye — and some are cautious. “The maturity of post-quantum algorithms should not be overestimated: many aspects are still at a research state,” says cryptography specialist Mélissa Rossi at the National Cybersecurity Agency of France in Paris. Nevertheless, she adds, this should not delay the adoption of post-quantum systems to strengthen current cryptography.

China is said to be planning its own selection process, to be managed by the Office of State Commercial Cryptography Administration (the agency did not respond to Nature’s request for comment). “The consensus among researchers in China seems to be that this competition will be an open international competition, so that the Chinese [post-quantum cryptography] standards will be of the highest international standards,” says Jintai Ding, a mathematician at Tsinghua University in Beijing.

Meanwhile, an organization called the Chinese Association for Cryptologic Research has already run its own competition for post-quantum algorithms. Its results were announced in 2020, leading some researchers in other countries to mistakenly conclude that the Chinese government had already made an official choice.

Updating systems

Of NIST’s 15 candidates, 9 are public-key systems and 6 are for digital signatures. Finalists include implementations of NTRU and LWE, as well as another tried-and-tested system that uses the algebra of error-correction techniques. Known as ‘code-based algorithms’, these systems store data with redundancy that makes it possible to reconstruct an original file after it has been slightly damaged by noise. In cryptography, the data-storage algorithm is the public key, and a secret key is needed to reconstruct an original message.
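The redundancy-and-recovery principle itself can be shown with the simplest possible error-correcting code, a three-fold repetition code. This illustrates only the underlying idea, not an actual code-based cryptosystem, which builds its public key from a deliberately disguised error-correcting code.

```python
# Three-fold repetition code: store each bit three times, recover by majority vote.
def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(coded):
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

noisy = encode([1, 0, 1])
noisy[4] ^= 1                        # simulate noise by flipping one stored bit
assert decode(noisy) == [1, 0, 1]    # the original data is still recoverable
```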

In the next few months, the institute will select two algorithms for each application. It will then begin to draft standards for one, while keeping the other as a reserve in case the first choice ends up being broken by an unexpected attack, quantum or otherwise.

Selecting and standardizing algorithms will not be the end of the story. “It’s certainly a solid step to bless a candidate, but as a follow-up, the Internet has to agree on how to integrate an algorithm into existing protocols,” says Nick Sullivan, an applied cryptographer at Internet-services company Cloudflare, who is based in New York City.

To crack encryption, quantum computers such as China’s Jiuzhang 2.0 will need more qubits. Credit: Chao-Yang Lu

Both Cloudflare and Google — often in cooperation — have started running real-life tests of some post-quantum algorithms by including them in some beta versions of the Chrome browser and in server software. Testing is crucial because, for Internet communications to go smoothly, it is not enough to have perfectly compatible servers and browsers. To connect them, data must also run through network devices that might block traffic that they flag as unusual because of its unfamiliar encryption protocols. (These systems can be used to prevent hacking or stop users accessing prohibited content.) Antivirus software could cause similar problems. The issues also exist “on a broader, Internet-wide scale, in some countries that keep track of what users are doing”, says Sullivan. Network-security workers refer to these issues as ‘protocol ossification’, he says; it has already complicated the transition from RSA, and might disrupt the roll-out of quantum-secure algorithms, too.

An early test in 2016 implemented New Hope — a structured version of LWE named after the original Star Wars movie — in a Chrome beta version, and it ran without a hitch. “This trial showed that it is usable,” says Erdem Alkım, a computer scientist now at Dokuz Eylül University in İzmir, Turkey, who wrote some of the code as part of his thesis. “I thought it was a good result for my PhD.”

But a larger-scale experiment conducted in 2021 by Google on a different algorithm ran into some snags. Some Internet devices apparently ‘broke’ — network-security parlance for a gadget that blocks a connection when a client’s browser tries to communicate with an unusual protocol. The issue could have been that the browser’s opening message was longer than expected, because it carried a large public key. Algorithms that break the Internet in this way could be shelved until these issues are resolved.

“Sometimes you run into situations in which some network element misbehaves when you add something new,” comments Rescorla. Persuading vendors to adapt their products — something that can often be done with a simple software update — could take some nudging, he says. “This could take a while.”

Still, Rescorla is optimistic, at least when it comes to Internet browsers. Because only a small number of companies control most browsers and many servers, all that needs to happen is that they change encryption systems. “Everybody is pretty confident that once NIST and IETF specify new standards, we’ll be able to roll them out pretty quickly.”

Where the transition might be trickier is the multitude of modern connected devices, such as cars, security cameras and all kinds of ‘smart home’ machines, that suffer from protocol ossification — especially those that might have security features hardwired into their chips and that are not replaced often. “It takes five to seven years to design a vehicle, and it’s going to be on the road for a decade,” says Lafrance. “Is it still going to be secure ten years down the line?”

Either way, initial implementations will be hybrid, using post-quantum technology for added security on top of existing systems. Vadim Lyubashevsky, a computer scientist at IBM in Zurich, Switzerland, whose team has two lattice-based algorithms among the NIST finalists, says he thinks both post-quantum and current encryption methods should run together for a decade before the new algorithms are used exclusively.

If all goes to plan, the Internet will be well into its post-quantum era by the time computing enters its quantum era. This post-quantum Internet could some day be followed, confusingly, by a quantum Internet — meaning a network that uses the principles of quantum physics to make information exchange hacker-proof.

Researchers estimate that to break cryptosystems, quantum computers will need to have in the order of 1,000 times more computing components (qubits) than they currently do. “There’s a very good chance that we’ll have a quantum computer that can do positive things way before they can break crypto,” says Lyubashevsky.

But that is no reason to be complacent. Fully transitioning all technology to be quantum resistant will take a minimum of five years, Rescorla says, and whenever Q-day happens, there are likely to be gadgets hidden somewhere that will still be vulnerable. “Even if we were to do the best we possibly can, a real quantum computer will be incredibly disruptive.”

The Connections Between Sugar, Poor Sleep, and Diabetes


If you have diabetes, you already know that you need to keep a very close eye on how much sugar you eat. It’s the one food that most reliably raises blood sugar, and it is highly associated with obesity, to boot.

What may be less obvious is the link between sugar consumption and sleep quality.

Sleep is surprisingly important for people with diabetes: poor sleep increases insulin resistance and, when chronic, is associated with all sorts of negative health outcomes. A bad night of sleep also seems to make us crave sugary junk food – exactly the food that just makes diabetes even tougher to manage.

A new study from Brigham Young University is just the latest of many to find that there is a very real connection between sugar intake and inadequate sleep. In this experiment, ninety-three adolescents were randomized to five consecutive nights of healthy sleep (9.5 hours “sleep opportunity”) or short sleep (6.5 hours); researchers then tracked what they ate.

The teens with less sleep – they averaged 2 hours and 20 minutes fewer – ended up eating more added sugar, more carbohydrates, more sweetened beverages, and fewer fruits and vegetables. Most of the difference between the two groups was attributed to late-night snacking. The study wasn’t designed to explain why the exhausted teens were snacking more, but its lead author speculated that “tired teens are looking for quick bursts of energy to keep them going until they can go to bed.”

Many other similar studies have been conducted, and have found that a lack of sleep provokes you both to overeat and to prefer higher-calorie foods.

There is also some evidence that poor sleep causes the metabolism to operate less efficiently. A small but thought-provoking 2010 study took 10 overweight adults and asked them to eat the same diet for two weeks. Half were told to sleep 5.5 hours per night, the other half 8.5 hours. Remarkably, the dieters that got plenty of sleep lost 55% more fat than the sleep-deprived group, suggesting that lack of sufficient sleep can really sabotage a weight loss effort.

It seems clear that a lack of sleep prompts people to make unhealthy food choices, and perhaps also blunts the impact of good food choices. Does it work the opposite way, too? Does eating poorly reduce sleep quality?

Scientists appear to have paid somewhat less attention to this question. At least one study found diets with high saturated fat, low fiber, and high sugar were associated with light sleep and more frequent nocturnal arousal. The authors concluded that “it is possible that a diet rich in fiber, with reduced intake of sugars and other non-fiber carbohydrates, may be a useful tool to improve sleep depth and architecture in individuals with poor sleep.”

If it’s true that bad food leads to bad sleep, it suggests that there may be a vicious cycle at play – poor sleep causes us to reach for unhealthy foods, which just causes more poor sleep. For people with diabetes, this seems even more likely to be true, given the way that suboptimal eating choices can lead to nighttime glucose management problems.

There needs to be more study to tease out correlation from causation, and to figure out whether suboptimal sleep habits are causing suboptimal dieting habits, or vice versa, or both. But in the meantime, it should come as no surprise that so many surveys of sleep habits and diets, whether they examine Danish school children or middle-aged Japanese women, find that poor diets and poor sleep go hand in hand.

Poor Sleep and High Blood Sugar

Of course, people with diabetes always have to be thinking about their blood sugar, too. High blood sugar – a direct result of sugar consumption – is also correlated with poor sleep, even in people without diabetes. Poor sleep habits disrupt a complex range of hormonal and metabolic processes, resulting in increased inflammation and insulin resistance, both of which can contribute to high blood sugar.

The causation appears to go in both directions here, too, as high blood sugar and insulin resistance are thought to contribute to common sleep problems such as sleep apnea and sleep-disordered breathing.

According to the experts at the National Institute of Diabetes and Digestive and Kidney Diseases, there is little direct evidence as of yet that improving sleep can lead directly to improved glucose metabolism. But the subject has lately become an area of intense interest: “the efficacy and effectiveness of interventions that optimize sleep and circadian function to prevent the development or reduce the severity of these metabolic disorders need to be urgently evaluated.”

Takeaways

There is a great deal of evidence linking poor sleep habits with poor diet and with suboptimal metabolic function, although the details of the interactions are not always clear. Many studies show that poor diet is associated with lack of quality sleep, and that lack of quality sleep is associated with insulin resistance and diabetes risk factors.

If you’re trying to optimize your diabetes management, don’t forget about sleep! It’s just one more reason to lay off the sugar today. And a night without enough sleep might make it more likely for you to reach for sweet junky food tomorrow, which might just make it less likely that you’ll get enough sleep tomorrow night, sparking an unhealthy cycle.

Are Your Diabetes Drugs Making You Fat?


A new study of government statistics has found that 20% of American adults are using prescription medications known to cause weight gain.

The survey revealed that antidiabetic drugs are the second most commonly used class of obesogenic drugs. A whopping 5.7% of the nation’s adults are using potentially fattening drugs to treat diabetes – that’s almost 15 million people! (Beta-blockers, which are used to treat high blood pressure and heart disease, were the most common, at 9.8%.)

The results, published in the medical journal Obesity, also found some good news for people with diabetes: the diabetes drugs prescribed today are less likely to prompt weight gain than those prescribed a generation ago. In 1999, a grand total of 82.9% of people using diabetes medication were using obesogenic drugs, a number that declined to just 52.5% in the latest survey.

It’s unclear to what extent these medications promote weight gain across the population – the study also found that “proportional use of obesogenic medications was not associated with weight status, except for antipsychotics.” In patients with diabetes, this could be in part due to the fact that several diabetes drugs, including the popular metformin, actually promote weight loss.

Nevertheless, the study may be a wake-up call to readers struggling with excess fat. It’s good to be aware of the ways that your medications may impact your efforts to lose weight.

How Diabetes Drugs Affect Weight

The evidence shows that some diabetes drugs promote weight loss, while others promote weight gain. It’s a mixed bag:

Medication – Weight effect
Metformin – Slight loss
GLP-1 RA (glucagon-like peptide-1 receptor agonist) – Loss
SGLT-2i (sodium-glucose co-transporter 2 inhibitor) – Loss
DPP-4i (dipeptidyl peptidase 4 inhibitor) – Neutral
AGI (alpha-glucosidase inhibitor) – Neutral
TZD (thiazolidinedione) – Gain
SU (sulfonylurea) – Gain
Insulin – Gain

Here’s a little bit more detail on the non-neutral drugs above. First, the gainers:

  • Insulin, which helps facilitate fat storage in the human body, has long been blamed for unintentional weight gain. Many studies have found that insulin has a profound ability to provoke weight gain, as much as 12-13 lbs. When we asked our writer Maria Muccioli to tackle the question – she has a Ph.D. in molecular and cellular biology, and she has type 1 diabetes – she concluded that “it is not insulin, but rather excess insulin that can promote undesired weight gain.” The best way to minimize the use of excess insulin is to make diet and exercise choices that reduce the body’s insulin requirements without introducing hypo- or hyperglycemia.
  • Sulfonylureas can cause patients to gain about 5 lbs, one of several reasons that authorities argue that they “should be avoided in patients with obesity.”
  • Thiazolidinediones often cause a weight gain of around 5-10 lbs. It is believed that weight gain is related to “fluid retention and redistribution of adipose tissue.”

Next, the drugs that can actually promote weight loss.

  • Metformin, the most commonly prescribed glucose-lowering drug, is known to cause modest weight loss in patients both with and without diabetes.
  • GLP-1 agonists have tremendous potential to promote weight loss. Clinical trials of various formulations have found that patients lose a few pounds to as much as 15 lbs.  Semaglutide, a GLP-1 agonist, has been found to induce weight loss so effectively that it has recently been approved by the FDA for use as a weight-loss drug, the first such approval since 2014. Other drugs in this class (such as Trulicity) are being investigated now to see if they can provide the same effect.
  • SGLT-2 inhibitors, which cause a patient to expel excess glucose in their urine, can lead to modest weight loss (2-6 lbs), possibly by creating a caloric deficit. This weight loss is also accompanied by healthy improvements in body composition and visceral adiposity.

Many non-diabetes medications can also cause weight changes. About half of patients with diabetes have hypertension, and a sizable minority have heart disease, two conditions for which a doctor might prescribe beta-blockers, the most common obesogenic drugs in the country. Other obesogenic drugs include certain antidepressants, corticosteroids, and antihistamines.

Takeaway

You may be taking one or more prescription medications, whether for diabetes or for any other condition, that can have an effect on your weight. If you are trying to lose weight, it would be smart to review your medications with your doctor and explore your options. You may be able to switch your prescriptions to medications that aid your weight loss efforts instead of hampering them.

Migraines during pregnancy present risks for adverse maternal, newborn outcomes


Pregnancies complicated by migraines are associated with adverse outcomes for both women and their babies, according to a presentation at The Pregnancy Meeting.

“Migraines or severe headache are very common disorders of the nervous system, affecting approximately 20% of women in the United States at any given time,” Ayellet Tzur-Tseva, MD, MSc, a maternal-fetal medicine fellow in the department of obstetrics and gynecology at Jewish General Hospital, McGill University, Montreal, said during the presentation.

Pregnant women with migraines had adjusted odds ratios of 2.04 for preeclampsia, 1.35 for preterm birth and 1.83 for maternal death.
Data were derived from Tzur-Tseva A, et al. The effect of migraines on obstetrical and newborn outcomes. Presented at: The Pregnancy Meeting; Jan. 26-Feb. 5, 2022 (virtual meeting).

Calling it the third most common disease in the world, Tzur-Tseva also said that migraines are most active in women of reproductive age and a frequent cause of headache during pregnancy.


“About 60% of migraines improve during pregnancy, especially in the second and third trimesters,” she said. “But it also can occur then for the first time.”

The researchers studied nearly 14 million pregnant women who gave birth between 1999 and 2015 based on data from the Healthcare Cost and Utilization Project Nationwide Inpatient Sample (HCUP-NIS) U.S. database.

Using ICD-9 code 346, the researchers identified 51,736 women who also had a diagnosis of migraines — a prevalence of 37 per 10,000 deliveries. The prevalence of migraines in this population rose during the study period (P < .001), according to the researchers.

Preexisting health conditions such as diabetes, chronic hypertension and obesity were more common among women with migraines. Also, women with migraines were more likely to be smokers.

Adjusted analysis showed that women with migraines were more likely to develop:

  • preeclampsia (adjusted OR = 2.04; 95% CI, 1.97-2.1; P < .0001);
  • placenta previa (aOR = 1.43; 95% CI, 1.3-1.57; P < .0001);
  • abruptio placenta (aOR = 1.16; 95% CI, 1.08-1.26; P < .0001);
  • chorioamnionitis (aOR = 1.2; 95% CI, 1.13-1.27; P < .0001);
  • preterm premature rupture of membranes (aOR = 1.17; 95% CI, 1.09-1.26; P < .0001);
  • myocardial infarction (aOR = 2.57; 95% CI, 1.06-6.26; P = .0374);
  • venous thromboembolisms (aOR = 2.37; 95% CI, 2.14-2.63; P < .0001);
  • cerebrovascular attack (aOR = 11.24; 95% CI, 7.71-16.39; P < .0001); and
  • postpartum hemorrhage (aOR = 1.36; 95% CI, 1.3-1.42; P < .0001).

These women were more likely to have a cesarean delivery (aOR = 1.16; 95% CI, 1.14-1.18; P < .0001) and experience maternal death (aOR = 1.83; 95% CI, 1.12-3; P = .0166) as well.

Babies born to mothers who experience migraines were at greater risk for prematurity (aOR = 1.35; 95% CI, 1.31-1.39; P < .0001), congenital anomalies (aOR = 2.5; 95% CI, 2.33-2.7; P < .0001) and intrauterine growth restriction (aOR = 1.37; 95% CI, 1.3-1.43; P < .0001).

The researchers noted that migraine involves the cerebrovascular system and autonomic dysfunction in addition to a wide range of ischemic vascular disorders. While hypertensive disorders and migraine share the same risk factors, the pathophysiology is still unclear.

Possible explanations for this relationship include hypercoagulability due to platelet hyperaggregation, decreased prostacyclin production, altered vasoreactivity and endothelial dysfunction, the researchers said, adding that pregnancy enhances this procoagulant state.

Patients with abnormal vascularization are more prone to migraines, the researchers continued, and will have higher risk for preeclampsia since their vascular pathology cannot compensate well during the stress of pregnancy.

This vascular pathology is related to cardiovascular disease during pregnancy and the puerperium, the researchers said, adding that hypertensive diseases during pregnancy are known risk factors for cardiovascular diseases. Migraines, especially those with aura, are associated with myocardial infarction, stroke and venous thromboembolisms as well, the researchers said.

“Our conclusion was that pregnancies complicated by migraines are associated with adverse outcomes for both women and their babies,” Tzur-Tseva said. “Close surveillance of women who experience migraines during pregnancy is recommended.”

Loneliness May Triple Dementia Risk


Even without dementia, lonely feelings linked with brain changes



Dementia incidence tripled in lonely older adults who otherwise would be expected to have relatively low risk based on age and genes, researchers found.

Lonely older people under age 80 without an APOE4 allele had a threefold greater risk of dementia (adjusted HR 3.03, 95% CI 1.63-5.62) over 10 years than similar people who weren’t lonely, reported Joel Salinas, MD, MBA, MSc, of NYU Grossman School of Medicine in New York City, and colleagues.

Regardless of age or APOE4 status, lonely older adults had a higher 10-year dementia risk compared with those who weren’t lonely (adjusted HR 1.54, 95% CI 1.06-2.24).

Among people without dementia, loneliness was associated with poorer executive function, lower total cerebral volume, and greater white matter injury, the researchers wrote in Neurology.

The study provides Class I evidence that loneliness increases the 10-year risk of developing dementia. “This magnifies the population health implications of observed trends in the growing prevalence of loneliness,” Salinas said.

“These findings not only establish the link between loneliness and dementia risk much more firmly, but also have implications for how we think about risk factors for dementia, the relevance of basic loneliness screening in assessing individuals at greater risk, and how there is a potential to underestimate this risk in lonely adults, especially if they don’t have any known genetic risk factors like the APOE4 allele,” Salinas told MedPage Today.

“Future studies need to clarify the underlying biological pathways involved, but there is much individuals can do now to help address loneliness in themselves, their friends and families, and their communities,” he added.

Health risks associated with social isolation and loneliness shifted to the forefront during the COVID-19 pandemic, but the biology of loneliness has been investigated for many years.

“The body treats loneliness as a state of threat and responds by activating defensive systems like the sympathetic nervous system, which in turn prompts the immune system to enhance inflammation,” observed Steve Cole, PhD, of the University of California Los Angeles, who wasn’t part of the study. “That’s one pathway by which social isolation could accelerate the progression of Alzheimer’s disease and other inflammation-related chronic diseases,” he told MedPage Today.

“However, it’s difficult to sort out the mechanisms involved in correlational studies such as this,” Cole pointed out. “We also know that inflammation alters brain function and social motivation, raising the possibility that early Alzheimer’s-related biological processes might actually promote loneliness and social isolation.”

“Regardless of the causal directions involved, this study adds to the growing body of evidence that social process and cognitive health are deeply linked and may open novel opportunities for maintaining cognitive health as we age,” Cole said.

Using prospectively collected data from the population-based Framingham Study, Salinas and colleagues assessed 2,308 participants who were dementia-free at baseline with an average age of 73. More than half (56%) were women and 80% of the cohort did not have an APOE4 allele.

Loneliness was recorded at baseline using the Center for Epidemiologic Studies Depression Scale (CES-D) and was defined as feeling lonely 3 or more days in the past week. Models were adjusted for age, sex, and educational level.

A total of 144 people (6%) were lonely. Over 10-year follow-up, incident dementia occurred in 14% of all participants and in 22% of lonely people.

There was no significant association between loneliness and dementia in people 80 or older, but lonely people who were younger — ages 60 to 79 — were more than twice as likely to develop dementia (adjusted HR 2.27, 95% CI 1.32-3.91).

In a second sample of 1,875 dementia-free, stroke-free Framingham Study participants with a mean age of 62, loneliness was associated with poorer cognition in the executive function domain. Of 1,611 people in this sample who had imaging, lonely participants had total cerebral volumes that were 0.25 standard deviation units (SDU) lower and white matter hyperintensity volumes that were 0.28 SDU greater.

These findings suggest that loneliness may be involved in the earliest stages of Alzheimer’s or dementia neuropathogenesis, Salinas and colleagues noted.

Like most Framingham Study cohorts, the ones used in this analysis included mainly people who were white. The possibility of reverse causality in this study cannot be ruled out, Salinas’ group acknowledged. Loneliness was slightly more prevalent among the oldest participants, but “it remains possible that ‘depression without loneliness’ had a more influential role than ‘loneliness without depression’ in this age group,” the researchers wrote.

Tirofiban: A Questionable Addition to Stroke Thrombectomy


Glycoprotein IIb/IIIa receptor inhibitor does show promise for one subgroup in randomized trial

The glycoprotein IIb/IIIa receptor inhibitor tirofiban (Aggrastat) generally failed to improve clinical outcomes for stroke patients undergoing endovascular therapy (EVT), according to the Chinese RESCUE BT trial.

People randomized to tirofiban or placebo had a similar distribution of modified Rankin scale (mRS) scores at 90 days, with a median score of 3 in both groups (adjusted common OR 1.09, 95% CI 0.87-1.37). Nor was there any difference in secondary outcomes such as mRS 0-1 or mRS 0-2, reported Raul Nogueira, MD, of University of Pittsburgh School of Medicine.

Safety events did not favor the tirofiban group, which tended toward excesses in symptomatic intracerebral hemorrhage (9.7% vs 6.4%, P=0.06), any cranial hemorrhage (34.9% vs 28.0%, P=0.02), and 90-day mortality (18.1% vs 16.9%, P=0.62).

Yet subgroup analysis delivered a somewhat redeeming data point for the antiplatelet in the form of an apparent benefit for stroke patients with large artery atherosclerosis in particular (adjusted common OR 1.43, 95% CI 1.02-2.00), Nogueira said in a presentation at the American Stroke Association International Stroke Conference.

Whether tirofiban really improves EVT outcomes in this subgroup will be further investigated, he stated.

The glycoprotein IIb/IIIa receptor inhibitor is highly selective and features high affinity, reversibility of inhibition, rapid onset of action, and a short half-life, according to Nogueira. The rationale for its use in stroke is that tirofiban may help with potential thrombotic complications and re-occlusion after stroke thrombectomy.

Tirofiban has long been FDA approved as a safe and effective treatment for acute coronary syndrome (ACS). Notably, the tirofiban bolus used in that setting is more concentrated than the one tested for stroke by Nogueira’s group.

RESCUE BT was a phase III randomized double-blind trial conducted at 55 hospitals in China.

Eligible participants were patients presenting with acute ischemic stroke within 24 hours of time last known well (median age 67; 41.2% women).

All were undergoing EVT and were not receiving IV thrombolysis. Patients had to have a large vessel occlusion (LVO) in the internal carotid artery or middle cerebral artery segments M1/M2, an NIH Stroke Scale score of 30 or below, and score 6 or higher on ASPECTS imaging at baseline. People on dual antiplatelet therapy within 1 week of stroke were excluded.

Nogueira’s group had 950 individuals randomized to tirofiban (IV bolus 10 μg/kg before EVT followed by continuous infusion for 24 hours) or placebo.

Patients got antiplatelets at hour 20 after starting their assigned study drug.

Tirofiban and placebo groups were well-balanced except there was a greater preponderance of large artery atherosclerosis in the latter (49.1% vs 42.6%).

Beyond tirofiban, Nogueira and his Chinese collaborators are also studying the effects of methylprednisolone after EVT in the RESCUE BT2 study and IV tenecteplase bridging in BRIDGE-TNK.

Midlife Cognition Tied to Gut Microbiota


But not many specific microbial features were strongly linked with cognitive function


Gut microbiota was linked with midlife cognition, researchers found.

In a large population-based sample, β-diversity, a measure of gut microbial community composition, was significantly associated with cognitive scores in a cross-sectional analysis of middle-age CARDIA participants, reported Katie Meyer, ScD, of University of North Carolina at Chapel Hill, and co-authors.

Several specific genera also were tied to one or more measures of cognitive function, the researchers wrote in JAMA Network Open.

“There has been a lot of research on the gut microbiota in animal models, as well as in small, often clinic-based samples,” Meyer told MedPage Today. “Our study provides data supporting findings from other studies, but in a more representative sample.”

“In addition, the age of CARDIA participants — 48 to 60 — illustrates that these associations can be observed before appreciable cognitive decline,” Meyer pointed out. “The study of cognitive function in middle-aged adults is valuable because it can provide clues to early declines.”

Research about the gut-liver-brain axis has shown connections between the digestive system and dementia or Alzheimer’s disease, though whether relationships are causal is not clear.

Several studies have shown associations between gut microbes and neurological outcomes, including cognition. While mechanisms have not been fully established, there’s growing support for a role in microbiota-generated short-chain fatty acids, Meyer and colleagues noted.

The researchers analyzed data collected from the prospective Coronary Artery Risk Development in Young Adults (CARDIA) cohort in four U.S. metropolitan centers from 2015 to 2016. They sequenced stool DNA and analyzed gut microbial measures, including β-diversity (between-person), α-diversity (within-person), and taxonomy.

They used six clinic-administered tests to assess cognitive status — the Montreal Cognitive Assessment (MoCA), Digit Symbol Substitution Test (DSST), Rey-Auditory Verbal Learning Test (RAVLT), timed Stroop test, category fluency, and letter fluency — and derived a global measure from these scores.

Microbiome and cognitive data were available for 597 CARDIA participants who had a mean age of 55. Overall, 44.7% were men, and 45.2% were Black.

Most findings for α-diversity and cognition were not significant. Multivariate analysis of variance tests for β-diversity were statistically significant for nearly all cognition measures, except letter fluency.

In fully adjusted models, after accounting for demographic variables, health behaviors, and clinical covariates:

  • Barnesiella was positively associated with the global measure (β 0.16, 95% CI 0.08-0.24), DSST (β 1.18, 95% CI 0.35-2.00), and category fluency (β 0.59, 95% CI 0.31-0.87)
  • Lachnospiraceae FCS020 group was positively associated with DSST (β 2.67, 95% CI 1.10-4.23)
  • Akkermansia was positively associated with DSST (β 1.28, 95% CI 0.39-2.17)
  • Sutterella was negatively associated with MoCA (β −0.27, 95% CI −0.44 to −0.11)

“Findings from our genera-specific analysis are consistent with proposed pathways through production of the short-chain fatty acid butyrate, many members of which are within class Clostridia,” Meyer and co-authors wrote. “In animal models, administration of butyrate has been shown to be protective against vascular dementia and cognitive impairment, as well as against metabolic risk factors for cognitive decline and dementia.”

Longitudinal studies are needed to see how declining health itself may influence the gut microbial community, the researchers noted. Results seen in midlife may not translate to mild cognitive impairment or later disease states, they added.

“It’s important to recognize that we are still learning about how to characterize the role of this dynamic ecological community and delineate mechanistic pathways,” Meyer said.

“This is reflected in our findings,” she added. “The strongest results from our study were from a multivariate statistical analysis that can be considered a test of the overall community, and we were not able to identify many specific microbial features that were strongly related to cognitive function.”

FDA Staff Dumps Cold Water on Anti-PD-1 Candidate for NSCLC


Agency advisors to consider whether Chinese trial data alone are sufficient to recommend approval


Despite positive trial results, it looks like an uphill path to FDA approval for the investigational checkpoint inhibitor sintilimab as a treatment for non-small cell lung cancer (NSCLC), suggested agency briefing documents released ahead of an advisory meeting.

On Thursday, the FDA’s Oncologic Drugs Advisory Committee will discuss whether data that come solely from a Chinese clinical trial are sufficient to support a U.S. marketing application for sintilimab, an anti-PD-1 monoclonal antibody. But FDA staff cited other major hurdles to the drug’s approval, including the “lesser” endpoint used in the phase III trial supporting the application, along with an already crowded field of PD-1/L1 inhibitors in this setting.

“Sintilimab does not fulfill an unmet need for U.S. patients with NSCLC, limiting the degree of regulatory flexibility that is warranted regarding the acceptability of this data to support FDA approval,” agency staff asserted.

Sintilimab’s developers Innovent Biologics and Eli Lilly will present results from ORIENT-11, a phase III study that tested sintilimab in combination with pemetrexed (Alimta) and platinum-based chemotherapy as an initial treatment for adults with metastatic NSCLC.

The study is an ongoing, randomized, double-blind trial that enrolled 397 patients from August 2018 to July 2019 exclusively in China. Patients received chemotherapy plus pemetrexed and either sintilimab or placebo. The trial met its primary endpoint of progression-free survival (PFS) at the interim efficacy analysis. Median PFS was 8.9 months with sintilimab compared to 5.0 months with placebo (HR 0.48, 95% CI 0.36-0.64, P<0.00001).

But FDA staff was critical of the PFS findings, calling progression-free survival “a lesser endpoint” compared with “endpoints used for approval of currently available therapies in this space,” noting that all approvals of first-line immunotherapy-based regimens for metastatic NSCLC have been based on a statistically significant overall survival improvement.

Still, other outcome measures in ORIENT-11 favored the sintilimab arm as well, including objective response rate (51.9% vs 29.8% in the placebo arm) and disease control rate (86.8% vs 75.6%). Median duration of response was not reached in the sintilimab arm versus 5.5 months in the placebo arm.

The investigators called the safety profile of sintilimab in combination with pemetrexed and platinum-based chemotherapy acceptable, with toxicities that were tolerable and generally manageable with dose interruptions and supportive care.

Yet FDA staff contended that sintilimab “offers no advantage in safety or mode of administration to the U.S. patient population.”

But in large part, agency reviewers trained their focus on the trial’s population. ORIENT-11 “raises significant questions regarding data from a single foreign country to support a U.S. approval and its generalizability to a diverse American population” as it “enrolled a patient population which lacks the racial and ethnic diversity of the U.S. population, notably with regards to currently underserved groups,” they wrote.

FDA staff also observed that the trial doesn’t adhere to the most recent and relevant guidelines for global, harmonized drug development through the strategic use of multi-regional clinical trials.

Korean Ginseng – The Root of Immortality


Korean (Panax) ginseng is one of nature’s true super herbs, otherwise known as the root of immortality or Asian/Oriental ginseng. This herb has a very positive effect on whole-body health: taken daily, it is said to balance all of the systems in the body and make you healthier in almost every way.

Korean ginseng is considered an energizer and a stress reducer. It has been used traditionally to ward off fatigue and stress, and as a general tonic to invigorate a person mentally and physically.

It is considered to be an “adaptogen”: a substance that can increase resistance to stress of any kind, mental or physical, and invigorate the user in a non-specific way.

The root must be at least six years old to be effective. The older the root, the better it is for improving your health, and the price is directly related to age: the older the root, the more it costs per ounce. Wild roots are very rare and command a great price, and some older roots have been known to go for tens of thousands of dollars.

Korean ginseng has been used to lower cholesterol, reduce stress, boost energy, treat diabetes, treat depression and sharpen the mind and memory. It is a herb that really does a lot for you.

History Of Korean Ginseng

Panax ginseng was discovered and first used around five thousand years ago in the mountains of Manchuria, China. First used as a food, it was soon discovered that people eating it experienced an increase in strength and a better resistance to stresses of all kinds. The root is fleshy, multi-branched and man-shaped. It has become a symbol of overall health to the Chinese people.

According to Chinese legend, emperor Shen Nung discovered ginseng around 2700 BC. The Chinese considered him to be the father of herbal therapy. It was Shen Nung who said that the ginseng root can be used to calm the mind, bring the soul into harmony, eliminate fears and clarify thinking. Only the emperors could use ginseng at that time, as it was considered too special for the common man.

It’s considered an “ultimate tonic”. Numerous books have been written about ginseng, and it is probably the best-known herb in the world. The extreme popularity of ginseng eventually led to its over-harvesting, to the point that wild ginseng was at one time more valuable per ounce than gold!

The good news about the powerful herb soon spread throughout the world and ginseng was traded with other cultures. Korea started to harvest the root around the 1900s and now supplies the herb to many people in the world today.

It’s believed to be helpful in preventing illnesses like cancer and arthritis, and can also lower high blood sugar. It’s a great remedy for people suffering from fatigue.

This type of ginseng really shines as an immune booster, helping prevent ailments caused by viruses and bacteria, and can be taken at the onset of the common cold to reduce its severity and duration.

Korean ginseng relaxes the lungs and prevents constriction of the airways. It can be taken as a remedy for asthma and other types of lung problems.

Many students take this kind of ginseng while studying and taking tests as it is believed to sharpen concentration and memory and make learning new subjects easier.

For many years, Korean ginseng has been used to boost fertility in men and increase sperm count by activating specific hormones in the body. It improves blood circulation in the body and helps one recover from illness more quickly.

Korean ginseng is useful for aging adults as it is thought to prevent senility. Its strengthening and energizing effects are also important for keeping the body active well into old age.

Nutritional Composition

Korean ginseng contains vitamins A, E and B12, as well as sulfur, calcium, iron, manganese, thiamin, riboflavin, tin, niacin, phosphorus, sodium, potassium and magnesium.

Korean ginseng also contains many saponins, called ginsenosides and panaxosides. Ginsenosides have been studied extensively and have numerous actions, including stimulating the immune system, balancing blood sugar, inhibiting tumor growth, stabilizing blood pressure, stimulating bone marrow production, detoxifying the liver, and many other tonic effects. Ginseng also contains many other active substances and nutritional components.

The body’s glands, such as the adrenal glands, secrete hormones such as adrenaline, and other glands secrete their own important hormones. Ginseng regulates all these glands and puts them in better working order. They in turn control the body’s organs, helping them to stay balanced and work properly.

Precautions and Side Effects

Korean ginseng should not be taken while pregnant or nursing without the consent of your doctor. Avoid large doses or prolonged use if you have high blood pressure. Korean ginseng can warm the body and is best taken in the colder months. American and Siberian ginseng are considered cooling and are better suited to the warmer months. High doses of Korean ginseng can sometimes cause irritability.

In ancient Asian spiritual practices, Yang is associated with the sun and warmth, and Yin with the cold. Korean ginseng is considered a warming Yang herb, which is why it is recommended to be consumed more in the winter months than at warmer times of the year.

The recommended dosage of this herb is 2 grams a day, not a lot (about 1/2 a teaspoon), which makes it quite cheap to add to your diet. With 100 grams (roughly a 50-day supply at that dose) priced at around £10 (or $17), there are a lot of reasons to give the ‘root of immortality’ a chance at helping you as nature intended.
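For anyone who wants to sanity-check those numbers, here is a minimal back-of-the-envelope sketch in Python. The pack size, dose and price are simply the example figures quoted above, used as illustrative assumptions rather than dosing or buying advice:

# Rough supply-and-cost check using the example figures from this article.
# These values are illustrative assumptions, not recommendations.
pack_grams = 100          # pack size quoted above
dose_grams_per_day = 2    # roughly half a teaspoon of powdered root
pack_price_gbp = 10.00    # approximate pack price quoted above (about $17)

days_of_supply = pack_grams / dose_grams_per_day     # 100 / 2 = 50 days
cost_per_day_gbp = pack_price_gbp / days_of_supply   # 10 / 50 = £0.20 per day

print(f"Days of supply: {days_of_supply:.0f}")
print(f"Cost per day: £{cost_per_day_gbp:.2f}")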


The Many Origins of Depression


What is depression?

Is depression a flat mood, an inability to participate in ‘normal life activities,’ or unexplained bouts of sadness? In spite of its singular clinical classification, depression looks different for each person. As Leo Tolstoy noted in his famous novel Anna Karenina, “All happy families are alike; each unhappy family is unhappy in its own way.”

While all happy families aren’t necessarily alike, this adage speaks truth in terms of depression. Each person’s depressive symptoms – remember, depression is a symptom, not a disease – depend on their unique circumstances, bodily health, emotional history, and held beliefs. As the serotonin model of depression continues to lose its hold on mainstream psychiatry, a theory of depression as evolutionary mismatch has emerged. In this theory, depression is the result of modern living; we did not evolve in the context of environmental toxins, isolated living, and near-constant stress. Some argue that depression is a response to this mismatch, also called paleo-deficit disorder, and that depression is simply a message from our bodies trying to protect us from the madness of the modern world.

However, even the evolutionary mismatch theory of depression relies on the dangerous assumption that all depression is the same: that depression is one disease, with one origin and a universal set of symptoms. Anyone who has been affected by depression will challenge this assumption. Depression can be caused by a constellation of factors that cause chronic inflammation – inflammatory foods, medications like the birth control pill, reduced sunlight exposure, and loneliness, to name a few – and manifest differently in different people. Some of the symptoms that qualify a person for a diagnosis of depression seem downright paradoxical: increased and decreased appetite, insomnia or fatigue, motor agitation or impairment. Even in one person, different depressive symptoms can appear at different times.

A recent scientific review article entitled ‘Depression subtyping based on evolutionary psychiatry: Proximate mechanisms and ultimate functions’ attempts to re-classify depression into twelve subtypes.1 For each of these subtypes, researchers propose different causes for depressive symptoms, as well as potential reasons that these subtypes evolved and purposes they serve. In this framework, depression may be (1) a beneficial adaptation that effectively addresses a specific problem, (2) an adaptation that does not solve the problem, (3) a byproduct of other adaptations, or (4) a general pathological state that serves no purpose and is harmful.

The proposed twelve subtypes of depression

Twelve Causes of Depression, Explained by Scientists

In infection-induced depression, symptoms result from underlying inflammation. This classification is supported by studies showing that anti-inflammatory agents reduce symptoms of depression.2 Further, the ‘sickness behavior’ of chronic inflammation, including social withdrawal, might worsen depression.

Long-term stress activates the immune system, leading to chronic inflammation that creates depressive symptoms. Why would stress activate the immune system? For a good reason, actually – in our evolutionary history, stress meant a higher chance of being wounded, and our immune systems ramped up to protect from infections that could result from those wounds. But nowadays, stress is rarely caused by true danger. Instead, stress comes from working long hours (against circadian rhythms), feeling pressured to meet deadlines, and financial worries.3 The response of inflammation to stress seems to be an evolutionary mismatch; the immune response that served us for centuries is no longer beneficial.

In the ancestral world, loneliness literally meant death. If you were separated from the tribe, you were vulnerable to predators and other forces of nature. Loneliness is a powerful and protective message that impels us to seek the company of others, which was crucial to survival for many generations. While loneliness is admittedly less dangerous now, this fear remains imprinted on us and leads to loneliness-induced depression.

People who have experienced significantly traumatic events are more likely to be diagnosed with depression, which researchers call trauma-induced depression. In fact, one study of almost 700 randomly-selected patients with depression found that 36% of them were also diagnosed with post-traumatic stress disorder (PTSD),4 and a large meta-analysis of 57 studies revealed that the comorbidity of depression and PTSD was 52%.5 Like those suffering from loneliness, people with PTSD show elevated levels of pro-inflammatory markers.6

Depressive symptoms can result from conflicts in modern hierarchies, such as the workplace, social groups, and families. Humans, like other social animals, establish hierarchies, and those at the top enjoy many benefits. We therefore all want to occupy a comfortable position in the hierarchy so that our needs are met. If we don’t reach our desired place in the hierarchy, our self-esteem suffers.

Hierarchy conflicts, such as unemployment,7 bullying,8 and striving for unreachable career goals9 are all associated with depression.

Grief is a common driver of depression diagnoses. Up to 20% of people who lose a loved one and are grieving are saddled with the label of depression. Even in animals, losing a mate, sibling, or offspring leads to depressive symptoms.10

Similarly, romantic rejection can cause depressive symptoms. Researchers found that after two months, 40% of people who had been left by their romantic partners showed symptoms of clinical depression.11 The sadness following a breakup may indicate true love and disappointment, and these feelings might also help make more aligned choices in future romantic relationships.

Six months after childbirth, 10-15% of women are diagnosed with postpartum depression. Symptoms of postpartum depression include crying, hopelessness, anger, and loss of interest in the new baby. Many studies indicate that mothers who feel they are receiving inadequate childcare support from the father or their family are more likely to be diagnosed with postpartum depression.12 That is, a mother’s feelings of overwhelm, tiredness, and depletion are often categorized as postpartum depression. It has been hypothesized that the symptoms of postpartum depression may serve as a signal that the mother requires more support.

Seasonal Affective Disorder (SAD), also called seasonal depression, is a mood disorder that strikes a person at the same time each year, usually in the winter. A person diagnosed with SAD exhibits general fatigue, decreased libido, and increased appetite for starchy foods. SAD is more frequent in people with evening chronotypes, and light therapy can help resolve the symptoms.

Chemically-induced depression is a subtype of depression that results from substance abuse, such as alcohol or cocaine, or from the side effects of medications like benzodiazepines. Yes, a side effect of anti-anxiety and antidepressant medications may be more depression. This type of depression appears to resolve when people stop ingesting the drugs or alcohol.13 Furthermore, as many people who feel sad self-medicate with alcohol, alcohol abuse may confound other drivers of depressive symptoms.

Interestingly, evidence is piling up that environmental toxicants, such as heavy metals, neurotoxic compounds, plastics, and pesticides, may cause depressive symptoms.14,15

Being diagnosed with a disease like Alzheimer’s, migraine, or cancer increases the risk of also being diagnosed with depression. In fact, almost two-thirds of women who suffer from breast cancer are also diagnosed with depression.16 Of course, a diagnosis of cancer is traumatic and causes many types of anxiety, ranging from financial to emotional, and cancer treatments may cause further injury that adds to the stress burden.

Overall, depression is a meaningless label until you find its personal meaning.

This peer-reviewed article presents 12 research-backed possibilities that could be root cause drivers of depressive symptoms – and there are likely more than twelve. Scientific evidence continues to show that depression is a sign of imbalance, not an inherited genetic condition that you are powerless to change. Imbalances can be caused by inflammatory foods, toxins, medications, life events like trauma, and stress – and taking a one-size-fits-all antidepressant is like turning off the smoke alarm and ignoring the fire. Release the fear and move into curiosity. Commit to leaning into your symptoms, realizing that they’re only messages; reduce your toxic exposures, turn down the noise, and explore the root cause of these symptoms for true healing.

References: