AI-Generated ‘Deepfakes’ Are Wildly Impressive—And Dangerous. Here’s How to Spot Them


The deepfake game is moving so quickly that looking for the usual signs (like an unsettling pair of eyes or hands with six fingers) isn’t your best defense mechanism.


Back in the spring, an Instagram video surfaced of Britney Spears dancing. Nothing out of the ordinary there. But then online sleuths noticed that when the pop star swept her arms up over her head, her face seemed to change in an instant, as if she’d wiped off a mask. Unsurprisingly, the footage blew up and was swiftly deleted from the account.

After the dust settled, there were two schools of thought on the video: one staunchly claiming the video was a deepfake—or a piece of synthetic media that’s been altered in a deceptive way using a specific type of artificial intelligence—and the other claiming it was a simple Instagram filter that started glitching as Spears moved.


The Spears incident illustrates just how dangerous the internet can be when you take images at face value. That’s a grave mistake, as the internet is teeming with deepfakes. Compared to the same period in 2022, there have been three times as many deepfake videos and eight times as many voice deepfakes posted online in 2023, according to DeepMedia, a company working with the U.S. Department of Defense, the United Nations, and global tech companies to detect deepfakes.

So how does this trickery work? How can we figure out which videos of Britney Spears are real and which have been doctored?

What Is a Deepfake?

Deepfakes trace back to 2014, when Ian Goodfellow, then a machine learning Ph.D. student at the University of Montréal, invented the technique that underpins them: a class of machine learning models called generative adversarial networks, or GANs. (Today, he’s a research scientist at DeepMind, an AI research lab run by Google.)

Machine learning is a subfield of artificial intelligence focused on creating statistical algorithms that can perform tasks without explicit instructions.

GANs help algorithms move beyond the simple task of classifying data into the arena of creating data—in this case, deceptive media. They work by pitting two neural networks against each other: one generates images, while the other tries to judge whether those images are real or fake, and each network sharpens the other. Using as little as one image, a tried-and-tested GAN can create a video clip of, say, Richard Nixon saying something he never actually said. (Yes, this has already been made.)
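To make that adversarial setup concrete, here is a minimal, illustrative sketch of a GAN training loop in PyTorch. It is not the pipeline of any particular deepfake tool; the tiny network sizes, the random stand-in data, and the batch size are assumptions chosen only to keep the example short.

```python
# Minimal GAN sketch: a generator learns to produce fake samples that a
# discriminator can no longer tell apart from real ones.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # toy sizes, chosen only for illustration

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)      # stand-in for a batch of real images
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Real deepfake systems use far larger convolutional networks and face-specific training data, but the competitive dynamic is the same.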

As AI has progressed, these dupes have become more convincing and easier to create. Deepfakers once needed to grind away in software like Adobe Photoshop or After Effects, editing each video frame by frame in an arduous, time-consuming process. Today, deep neural networks have advanced to help GANs operate more accurately, and large public databases and deep learning methods are widely available through free, accessible tools online. As a result, it’s far cheaper to create deepfakes: as recently as late 2022, a voice clone cost about $10,000 in server and AI-training costs, but now you can create deepfakes with off-the-shelf software for just a few dollars, according to DeepMedia.

“It’s democratizing access to tools that have historically been in the hands of the few and now are in the hands of many,” Hany Farid, Ph.D., a University of California, Berkeley professor of computer science and electrical engineering, tells Popular Mechanics.

How to Spot Deepfakes

Tons of online resources list telltale signs to look for when scrutinizing footage you suspect has been tampered with to create a deepfake. We’ve seen this previously with AI image generators struggling to accurately replicate hands, often giving people extra fingers that were sometimes viciously contorted. (That issue has since improved, with image generators like Midjourney now able to depict human hands accurately.) But whether these detection tips come from Reddit or the MIT Media Lab, the visual artifacts in question might be gone by the time they’ve been widely spotted.

“Here’s the problem: I can tell you a few things, but six months from now, that advice is going to be useless,” Farid says. The deepfake game is moving so quickly that looking for mistakes isn’t your best defense mechanism.

“I can tell you a few things, but six months from now, that advice is going to be useless.”

“The best way to protect yourself against fraud and disinformation online is the old-fashioned way, which is to think about where you’re getting your information,” Farid explains. Take most of the content you absorb from social media with a grain of salt; well-trusted editorial outlets like the New York Times, NPR, the Washington Post, and the BBC are far less likely to knowingly publish a deepfake—let alone take one at face value. Most outlets have a process for verifying, to the best of their ability, that videos are real.

One of those methods is a simple reverse image search—effective for both videos and images—to verify footage and photographs that are sent in. This fairly straightforward process lets you find where images or videos came from, and where else they’ve been used (or possibly plagiarized). It’s one of the most basic forms of digital forensics you can use to spot-check content quickly.
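A rough illustration of the same spot-checking idea: a perceptual hash lets you check whether two images are near-duplicates even after resizing or recompression. The sketch below assumes the third-party Pillow and imagehash packages and hypothetical file names; it is a quick screening trick, not a full forensic workflow.

```python
# Quick near-duplicate check with a perceptual hash (pip install pillow imagehash).
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("claimed_original.jpg"))  # hypothetical paths
suspect = imagehash.phash(Image.open("viral_repost.jpg"))

# Hamming distance between the two 64-bit hashes; small values mean the images
# are visually near-identical, large values mean they differ substantially.
distance = original - suspect
print(f"hash distance: {distance}")
if distance <= 8:  # threshold is a rule of thumb, not a standard
    print("Likely the same underlying image (possibly recompressed or resized).")
else:
    print("Images differ noticeably; worth tracing each one's origin separately.")
```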

Digital forensics is a branch of forensic science that focuses on identifying, acquiring, processing, analyzing, and reporting on data that’s stored electronically.

Creating Deepfakes

We’re obviously not going to tell you how to generate deepfakes of your own, but the creation process is still an important part of understanding them. Generating a convincing video still takes considerable time, processing power, and money. Most consumer-level computers won’t have nearly enough processing power to crank out deepfakes quickly enough to be effective. Deepfakes can take weeks, sometimes months to dial in perfectly.

Photo deepfakes are considerably easier—and faster—to create. AI image generators like Dall-E, Stable Diffusion, and Midjourney are incredibly close to producing truly photorealistic images from simple text prompts. “And so it’s just a matter of time now, which we can probably measure in months, before these cross through what we call the uncanny valley,” Farid says. Crossing that “uncanny valley” would mean the software can create images indistinguishable from reality. We’re not quite there yet, but we’re incredibly close.

Audio dupes are even closer to the uncanny valley, with the ability to create clips using short samples of someone’s voice—even just a few seconds’ worth. Farid tells us that it won’t be long before we live in a world where hackers can fake someone’s voice in real-time during a phone call.

Deepfakes in the News

The whole Britney Spears debacle is proof that deepfakes can have real and legitimate consequences. Seven months ago, the Spears footage led to claims that she was dead. Some reportedly went as far as calling the police to conduct wellness checks on her—an obvious invasion of her privacy. The stakes get even higher when deepfakes are used as political weaponry.

Bad actors have already been caught using AI image generators to create deepfakes of “shock and awe” scenes from the ongoing Gaza conflict that simply never happened. Both sides have been using deepfakes as a tool to sway the narrative of such an emotionally charged moment in history.

In his November 2022 research paper, “Protecting World Leaders Against Deep Fakes,” published in the Proceedings of the National Academy of Sciences of the United States of America, Farid mentions a particularly compelling example from comedian and filmmaker Jordan Peele: a modified video of one of Obama’s presidential addresses in which the audio has been altered and the lips synced to match. Below, we’ve embedded a more politically neutral video demonstrating how the University of Washington created another convincing deepfake of former President Barack Obama.


The key takeaway is that we’ll be able to reverse engineer deepfakes … once we fully understand how they’re created.

What’s Next?

We’ve learned that digital forensics can detect deepfakes in theory, but it isn’t scalable enough to churn through the firehose of deepfakes flooding social media platforms. “They’re not designed really to work at that scale, at the scale of 500 hours of video uploaded to YouTube every minute,” Farid says.

Some of the most promising work involves systems that add a digital tag to original content as you create it on your device. “The device will determine who you are, where you are, when you were there, and what you recorded,” he says. This would give social media platforms a quick and easy way to distinguish authentic content from fakes.
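The core idea behind such provenance tagging can be sketched in a few lines: a device signs a hash of the file plus its capture metadata, and anyone holding the public key can later verify that neither has changed. The snippet below is only a toy sketch of that idea, assuming the third-party `cryptography` package; it is not any real device’s or platform’s scheme, and the metadata fields are placeholders.

```python
# Toy content-provenance sketch: sign a hash of the media file plus capture
# metadata at record time, so tampering with either can be detected later.
# Assumes the third-party `cryptography` package; not any real device's scheme.
import hashlib, json
from cryptography.hazmat.primitives.asymmetric import ed25519

def make_claim(media_bytes: bytes, who: str, where: str, when: str) -> bytes:
    return json.dumps({
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "who": who, "where": where, "when": when,
    }, sort_keys=True).encode()

device_key = ed25519.Ed25519PrivateKey.generate()   # would live in secure hardware
media = b"...raw video bytes..."                    # placeholder content
claim = make_claim(media, who="camera-owner", where="37.77,-122.42", when="2023-11-01T12:00Z")
signature = device_key.sign(claim)

# A platform holding the device's public key can check the claim is untouched.
public_key = device_key.public_key()
public_key.verify(signature, claim)  # raises InvalidSignature if anything changed
print("claim verified:", json.loads(claim))
```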

On the forensic side, Farid has plenty of tools to detect deepfakes after they’ve been generated. Some of them are AI-based, and others use physics-based analysis. “We know that real images and synthetic images are constructed differently,” Farid says. “We have AI models that are trained on hundreds and hundreds of thousands of real images and fake images looking for statistical patterns.”
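As a schematic of the AI-based side of that work (not Farid’s actual models), a binary classifier can be trained on labeled real and synthetic images to pick up on statistical differences. The sketch below assumes PyTorch and torchvision and a hypothetical `dataset/` folder containing `real/` and `fake/` subdirectories; production detectors are vastly larger and trained on far more data.

```python
# Schematic real-vs-synthetic image classifier (not any production forensic tool).
# Assumes a dataset/ folder with real/ and fake/ subdirectories of images.
import torch
import torch.nn as nn
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((128, 128)), transforms.ToTensor()])
data = datasets.ImageFolder("dataset", transform=tfm)  # labels: 0 = fake, 1 = real (alphabetical)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = nn.Sequential(  # deliberately tiny; real detectors are far larger
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for images, labels in loader:
        logits = model(images)
        loss = loss_fn(logits, labels)
        opt.zero_grad(); loss.backward(); opt.step()
```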

So, the proper solution in the future is a combination of digital forensic tools and digital fingerprints to mark original content. In the meantime, the best way to avoid being fooled is to use your head. If something seems off, it probably is.

The Truth About Sentient AI: Could Machines Ever Really Think or Feel?


“We’re talking about more than just code; we’re talking about the ability of a machine to think and to feel, along with having morality and spirituality,” a scientist tells us.



Amid the surge of interest in large language model bots like ChatGPT, an Oxford philosopher recently claimed that artificial intelligence has shown traces of sentience. We tend to treat AI reaching the singularity as an all-or-nothing milestone, but Nick Bostrom, Ph.D., argues that we should look at it as more of a sliding scale. “If you admit that it’s not an all-or-nothing thing … some of these [AI] assistants might plausibly be candidates for having some degree of sentience,” he told The New York Times.

To make sense of Bostrom’s claim, we need to understand what sentience is and how it differs from consciousness in the context of AI. The two phenomena are closely related and were discussed in philosophy long before artificial intelligence entered the picture, so it’s no accident that sentience and consciousness are often conflated.

Plain and simple, all sentient beings are conscious beings, but not all conscious beings are sentient. But what does that actually mean?

Consciousness

Consciousness is your own awareness that you exist. It’s what makes you a thinking, sentient being—separating you from bacteria, archaea, protists, fungi, plants, and certain animals. As an example, consciousness allows your brain to make sense of things in your environment—think of it as how we learn by doing. American psychologist William James explains consciousness as a continuously moving, shifting, and unbroken stream—hence the term “stream of consciousness.”

Sentience

Star Trek: The Next Generation looks at sentience as consciousness, self-awareness, and intelligence—and that was actually pretty spot on. Sentience is the innate human ability to experience feelings and sensations without association or interpretation. “We’re talking about more than just code; we’re talking about the ability of a machine to think and to feel, along with having morality and spirituality,” Ishaani Priyadarshini, a Cybersecurity Ph.D. candidate from the University of Delaware, tells Popular Mechanics.

💡 AI is very clever and able to mimic sentience, but it will never actually become sentient itself.

Philosophical Difficulties

The very idea of consciousness has been heavily contested in philosophy for centuries. The 17th-century philosopher René Descartes famously wrote, “I think, therefore I am.” A simple statement on the surface, but it was the result of his search for a statement that couldn’t be doubted. Think about it: he couldn’t doubt his own existence, because he was the one doing the doubting in the first place.

Multiple theories address the biological basis of consciousness, but there’s still little agreement on which should be taken as gospel. The two main schools of thought differ on whether consciousness is the result of neurons firing in our brains or whether it exists entirely independently of us. Meanwhile, much of the work done to identify consciousness in AI systems merely checks whether they can think and perceive the way we do—with the Turing Test serving as the unofficial industry standard.

While we have reason to believe AI can exhibit conscious behaviors, it doesn’t experience consciousness—or sentience, for that matter—the way we do. Priyadarshini says AI relies heavily on mimicry and data-driven decision-making, meaning it could theoretically be trained to display leadership skills: it could feign the business acumen needed to work through difficult business decisions, for example. This fake-it-till-you-make-it strategy makes it incredibly difficult to judge whether AI is truly conscious or sentient.

Can We Test For Sentience or Consciousness?


Many look at the Turing Test as the first standardized evaluation for discovering consciousness and sentience in computers. While it has been highly influential, it’s also been widely criticized.

In 1950, Alan Turing created the Turing Test—initially known as the Imitation Game—to determine whether computing “machines” could exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Human evaluators engage in blind, text-based conversations with a human and a computer. The computer passes the test if its conversational skill dupes the evaluator into being unable to reliably distinguish it from the human participant; by Turing’s standard, that counts as human-level intelligent behavior.

The Turing Test has returned to the spotlight with AI models like ChatGPT that are tailor-made to replicate human speech. We’ve seen conflicting claims about whether ChatGPT has actually passed the Turing Test, but its abilities remain apparent; for perspective, the famed AI model has passed the Bar exam, the SAT, and even select Chartered Financial Analyst (CFA) exams. That’s all well and good, but the fact still remains that many experts believe we need an updated test to evaluate this latest AI tech—and that we are possibly looking at AI completely wrong.

Alternatives to the Turing Test

Many experts have said it’s time to create a new Turing Test that provides a more realistic measure of AI’s capabilities. For instance, Mustafa Suleyman’s recent book, The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma, proposes a new benchmark and argues that our understanding of AI needs to change. The book describes a misplaced narrative about AI’s ability to match or surpass the intelligence of a human being—sometimes referred to as artificial general intelligence.

Rather, Suleyman believes in what he calls artificial capable intelligence (ACI), which refers to programs that can complete tasks with little human interaction. His next-generation Turing Test asks AI to build a business game plan that could turn $100,000 of seed money into $1 million. The plan would center on e-commerce: putting together a blueprint for a product and a strategy for selling it on platforms like Alibaba, Amazon, or Walmart. AI systems can’t currently pass this theoretical test, but that hasn’t stopped wannabe entrepreneurs from asking ChatGPT to dream up the next great business idea. Regardless, sentience remains a moving target.

Suleyman writes in his book that he doesn’t really care about what AI can say. He cares about what it can do. And we think that really says it all.

What Happens If AI Becomes Sentient?

We often see a fair amount of doom and gloom associated with AI systems becoming sentient and reaching the point of singularity—defined as the moment machine intelligence becomes equal to or surpasses that of humans. Some trace early warning signs back to 1997, when IBM’s Deep Blue supercomputer beat Garry Kasparov in a chess match.

In reality, our biggest challenge with AI reaching singularity is eliminating bias while programming these systems. I always revisit Priyadarshini’s 21st-century version of the Trolley Problem: a driverless car approaches an intersection, five pedestrians jump into the road, and there’s little time to react. Swerving out of the way would save the pedestrians, but the resulting crash would kill the car’s occupant. The Trolley Problem is a moral dilemma that weighs the greater good against the sacrifices required to achieve it.

AI is currently nothing more than decision-making based on rules and parameters, so what happens when it has to make ethical decisions? We don’t know. Beyond the confines of the Trolley Problem, Bostrom notes that as AI gains room to learn and grow, there’s a chance these large language models will develop consciousness, but the resulting capabilities are still unknown. We don’t actually know what sentient AI would be capable of, because we’re not superintelligent ourselves.

Closing the global gap in adolescent mental health


Effective and sustainable interventions to address the global burden of mental disorders in children and adolescents require evidence-based research that fully acknowledges the social, cultural and economic challenges.

People experience substantial physical, social and psychological transformations during adolescence that are linked to susceptibility to both positive influences and negative influences. Adolescents in low- to middle-income countries (LMICs) are particularly vulnerable to adverse and traumatic experiences, as they often must deal with negative socioeconomic, cultural, political and environmental circumstances.

At the end of 2023, the World Health Organization (WHO) and UNICEF published their first psychological intervention, called Early Adolescent Skills for Emotions (EASE)1, for young adolescents affected by distress. The WHO–UNICEF EASE intervention addresses a major unmet need for new approaches to the prevention or treatment of mental health conditions in the global adolescent population. Although considerable research already exists on the effectiveness of psychological interventions for young people, these data are almost entirely from high-resource settings and high-intensity interventions delivered by mental health professionals2. Given that 90% of the world’s 1.2 billion adolescents live in LMICs3, more evidence is needed to show how these interventions can be implemented in these settings, in which healthy development is often threatened by rapid social and economic change, exposure to conflict, increased urbanization, the widening gap between rich and poor, and gender disparities.

Major challenges limit the capacity to effectively deliver mental health interventions in LMICs. Governments and donors in these regions have not made the psychosocial well-being and development of children and adolescents a priority, and investment has often been inadequate and misplaced. This has resulted in a lack of specialists, with the average number of psychiatrists who specialize in treating children and adolescents being less than 0.1 per 100,000 in LMICs3, which is 100 times less than the median in the United States4. The lack of investment also limits the capacity of frontline workers, such as community-based workers, who are not equipped to address the toll of mental health across multiple sectors, including primary healthcare, education, social protection and child protection. Even when interventions are available, stigma and discrimination against children and adolescents with mental health conditions and their families act as a barrier to accessing treatment options5. Children and adolescents with mental health conditions also face the risk of human rights violations, including unequal access to education and health services, unnecessary separation from caregivers, institutionalization, exposure to violence and neglect5. Finally, comprehensive age- and sex-disaggregated data on child and adolescent mental health and development are lacking in most LMICs, where only 2% of the mental health data available is on children and adolescents3. Without data, it is difficult to know which interventions should be scaled up to give more young people access to the support they need.

Priorities must focus on closing the research gaps in epidemiology, intervention and implementation approaches to improve child and adolescent mental health in low-resource settings. Epidemiological studies using standardized methods and International Classification of Diseases criteria need to be done to determine the extent or severity of untreated mental disorders in adolescents living in LMICs, as has been done for adults6. Interventions must be designed to be delivered at low cost by non-specialists, and at scale, such as lay-counselor-delivered interventions for adolescent mental health problems in schools in low-income areas in India7. Collaborations with local stakeholders, community leaders and people with lived experiences are needed in order to design interventions that are culturally acceptable and address the challenges of the community. Finally, a bigger focus on implementation research is needed to better understand the feasibility, acceptance and cost of an intervention within the setting before implementation. For example, a recent implementation trial demonstrated the feasibility and acceptance of a newly developed internet-based application in helping to close communication gaps for Farsi- or Arabic-speaking refugees receiving inpatient treatment for depression, anxiety disorder or post-traumatic stress disorder in Germany8. These implementation studies should also look at the effectiveness of interventions across Sustainable Development Goal outcomes beyond health, such as school achievement, employment and gender equity.

Studies indicate that local data have a decisive role in modifying professional practice in health care. A large survey found that healthcare providers in LMICs believe that research done and published in their own country is more likely to change their clinical practice than research generated in high-income countries9. Despite this, only about 10% of randomized controlled clinical trials of mental health interventions in children and adolescents are conducted in LMICs, and the vast majority focus on psychopharmacological interventions10. The lack of high-quality studies assessing psychosocial or combined treatments for childhood mental health conditions is especially relevant in LMICs, as these interventions require culture-specific evidence.

The switch to delivering mental health services or interventions through digital platforms has the potential to make health, education and social service systems more effective, efficient and equitable, and expand access to children and adolescents who have typically not been reached by conventional means. Reports show that an increasing number of people in low-resourced settings are finding ways to access the internet, particularly through mobile devices. Pilot studies have highlighted the feasibility and acceptance of video-conferencing-based platforms used by psychiatrists for diagnosis or follow-up care for people with depression and other mental disorders in Somaliland, South Africa and India11. A guided, internet-delivered cognitive behavioral treatment for anxiety and depression yielded high remission rates in resource-constrained settings in two Latin American countries12, and online-video-game-based interventions are promising strategies for supporting depression treatment in young female adolescents13. These technologies may be able to address some of the major challenges in LMICs, including a lack of frontline workers and/or specialists, and the reluctance of some to seek services because of stigma, long travel distances or out-of-pocket expenses.

Failure to adequately meet and protect the mental health and psychosocial well-being needs of children and adolescents in LMICs now could put an entire generation at risk, with profound social and economic consequences over the long term.

Identifying the tumor immune microenvironment-associated prognostic genes for prostate cancer


Abstract

Purpose

This study aimed to explore novel tumor immune microenvironment (TIME)-associated biomarkers in prostate adenocarcinoma (PRAD).

Methods

PRAD RNA-sequencing data were obtained from the UCSC Xena database as the training dataset. The ESTIMATE package was used to evaluate stromal, immune, and tumor purity scores. Differentially expressed genes (DEGs) related to the TIME were screened using the immune and stromal scores. Gene functions were analyzed using DAVID. The LASSO method was used to screen prognostic TIME-related genes. Kaplan–Meier curves were used to evaluate the prognosis of samples. The correlation between the screened genes and immune cell infiltration was explored using the Tumor IMmune Estimation Resource (TIMER). The GSE70768 dataset from the Gene Expression Omnibus was used to validate the expression of the screened genes.
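As a hedged illustration of the Kaplan–Meier step described above (not the authors’ actual code, which relied on R packages such as ESTIMATE and limma), survival can be compared between high- and low-expression groups defined by a median split on one candidate gene. The Python sketch below uses the lifelines package; the input file and column names are assumptions.

```python
# Illustrative Kaplan-Meier comparison for one candidate gene (e.g. METTL7B).
# Assumes a CSV with columns: expression, time (days), event (1 = death/progression).
# Not the authors' pipeline, which used R-based tools such as ESTIMATE and limma.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("prad_clinical_expression.csv")          # hypothetical input file
high = df[df["expression"] > df["expression"].median()]   # median split: high expression
low = df[df["expression"] <= df["expression"].median()]   # median split: low expression

kmf = KaplanMeierFitter()
kmf.fit(low["time"], event_observed=low["event"], label="low expression")
ax = kmf.plot_survival_function()
kmf.fit(high["time"], event_observed=high["event"], label="high expression")
kmf.plot_survival_function(ax=ax)

# Log-rank test for a difference between the two survival curves.
result = logrank_test(low["time"], high["time"],
                      event_observed_A=low["event"], event_observed_B=high["event"])
print("log-rank p-value:", result.p_value)
```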

Results

The ESTIMATE results revealed that high immune, stromal, and ESTIMATE scores and low tumor purity had better prognoses. Function analysis indicated that DEGs are involved in the cytokine–cytokine receptor interaction signaling pathway. In TIME-related DEGs, METTL7B, HOXB8, and TREM1 were closely related to the prognosis. Samples with low expression levels of METTL7B, HOXB8, and TREM1 had better survival times. Similarly, both the validation dataset and qRT-PCR suggested that METTL7B, HOXB8, and TREM1 were significantly decreased. The three genes showed a positive correlation with immune infiltration.

Conclusions

This study identified three TIME-related genes, namely METTL7B, HOXB8, and TREM1, which correlated with the prognosis of patients with PRAD. Targeting these TIME-related genes might have important clinical implications when making decisions about immunotherapy in PRAD.

1 Introduction

Prostate adenocarcinoma (PRAD) remains a leading cause of cancer-related mortality among men in the United States [1]. Despite advances in clinical care, mortality rates remain high, indicating a need for a better understanding of the factors influencing PRAD prognosis and treatment response [2, 3].

Recent evidence suggests that PRAD prognosis is heavily dependent on the tumor microenvironment [4]. The tumor immune microenvironment (TIME), composed of extracellular matrix, stromal cells, and other tumor-associated cells, can modulate the tumor’s response to therapy and influence tumor progression [5]. The TIME has been reported to profoundly influence the growth and metastasis of cancer. It affects prognosis, tumor growth, and treatment response through a variety of mechanisms [6]. The composition of the tumor microenvironment can affect prognosis by influencing the proliferation, metastasis, and drug resistance of cancer cells [7]. Factors like the abundance of immune cells, angiogenesis (new blood vessel formation), and cytokines (signaling proteins) released within the microenvironment can either inhibit or promote the growth of cancer cells [8]. These signals can also have a large impact on how a patient responds to treatment; for example, certain treatments may be targeted more directly at immunoprivileged environments, and certain drugs tested in preclinical trials may not reach the tumor due to an immunosuppressive microenvironment [9]. In terms of tumor growth, the TIME affects the rate of metastasis and provides signals for cell growth, movement, and survival [10]. The TIME can also be a source of genetic mutations, which can lead to the selection of drug-resistant tumor cells [11]. Additionally, the presence of certain immune cells can influence the growth and progression of tumors [12]. Finally, the microenvironment can also influence the response to treatment: factors like hypoxia (low oxygen) or the immune cell composition can affect the effectiveness of certain therapies [13], and certain signaling pathways in the microenvironment can be targeted by drugs to suppress tumor growth and improve outcomes [14]. Overall, the tumor microenvironment is an essential component of tumor biology and can affect the prognosis, growth, and treatment response of cancer.

In this study, we downloaded the PRAD RNA-sequencing (RNA-seq) data from the UCSC Xena database. Then, the ESTIMATE method was employed to analyze the immune and stromal scores and tumor purity of PRAD samples. In addition, we analyzed the correlation between TIME and clinical information, from which we obtained the novel TIME-related prognostic genes for PRAD.

Discussion

Multiple factors, stages, and genes are involved in the occurrence and development of PRAD, and the TIME is an important factor among them. Recently, immunotherapy has emerged as a novel treatment for PRAD, although clinical outcomes depend on characteristics of these malignant tumors, such as hormone dependence, low tumor mutation burden, and an immunosuppressive microenvironment. Moreover, previous studies have found that the TIME correlates with the prognosis of patients with PRAD [27]. Thus, further exploration of the TIME in PRAD is important to help clinicians choose treatments and predict prognosis.

The results of this study revealed that PRAD samples with high immune and stromal scores had a better prognosis, and those with high tumor purity had a worse prognosis. In addition, we collected the OS, RFS, and DFS of samples to examine the correlation between immune and stromal scores and survival time using KM curves. The results obtained were similar to the ESTIMATE results. These observations were consistent with the results of previous studies. For example, Chen et al. [28] suggested that stromal, immune, and ESTIMATE scores closely correlated with the OS of patients with PRAD. In addition, similar results were obtained in multiple cancer types, such as breast cancer, bladder cancer, and lung adenocarcinoma, indicating that the stromal, immune, and ESTIMATE scores and tumor purity in the TIME play a significant role in immunotherapy [29,30,31]. Xiang et al. [32] likewise indicated that the stromal, immune, and ESTIMATE scores and tumor purity in the microenvironment were associated with the TIME. Thus, our study screened TIME-related DEGs by comparing stromal and immune scores, and 1229 TIME-related DEGs were subsequently identified using the limma package. GO biological process results indicated the involvement of the DEGs in the immune response. The immune response regulates the development of PRAD tumors and plays an important role in decisions about immunotherapy [33, 34]; the DEGs might therefore be novel biomarkers for treating PRAD. In addition, KEGG results indicated that the DEGs were related to the cytokine–cytokine receptor interaction pathway. Previous studies have found that this pathway often involves immune-related genes implicated in different cancers, such as renal cell carcinoma and hepatocellular carcinoma [35, 36], so it might be an important factor in immunotherapy for PRAD. Further experiments must be performed to understand the mechanism of immunotherapy in PRAD.

Previous studies have indicated that the TIME is related to the prognosis of patients with PRAD. In this study, three prognostic DEGs, namely METTL7B, HOXB8, and TREM1, were identified. KM curves were used to evaluate the correlation between the three DEGs and patient prognosis, suggesting that patients with low expression levels of METTL7B, HOXB8, and TREM1 had good OS, RFS, and DFS. The results were validated using an external cohort, which yielded the same results as the TCGA dataset. Our findings are in line with previous studies that used external cohorts to validate the prognostic value of biomarkers in PRAD [37,38,39], and this validation further supports the results of this study. This novel risk score model based on TIME hub genes provides a new theoretical basis for prognosis assessment in PRAD patients and could be applied in future clinical management. A prospective study of clinical cohorts recruiting PRAD patients at different stages would help validate this risk score model: the expression of METTL7B, HOXB8, and TREM1 would be examined each month, follow-up would be performed to observe the prognosis of PRAD patients, and KM curves and survival analysis would be carried out to assess the correlation between the risk score model and prognosis. Such a study would need to run for 5 years or longer to yield convincing results.

Meanwhile, significant differences in METTL7B, HOXB8, and TREM1 were found between control and tumor samples at the mRNA level. The three genes might therefore be novel TIME-related biomarkers for PRAD. METTL7B, an alkyl thiol methyltransferase, can metabolize hydrogen sulfide (H2S) [40], and H2S has been found to participate in the epithelial–mesenchymal transition and in tumor migration and invasion [41]. A recent study found that METTL7B expression positively correlated with immunosuppressive cells, suggesting that it might play a significant role in modulating the TIME [42]. In our analysis, METTL7B expression was positively associated with CD4 + T cells and dendritic cells. Together, these results indicate that METTL7B could be used to predict the TIME in PRAD. Moreover, Redecke et al. [43] reported that HOXB8-transfected mouse bone marrow cells acquire unlimited proliferative capacity, enabling investigations of immune cell differentiation and function; this study found that HOXB8 is closely correlated with CD4 + T cells. In addition, Zhao et al. [44] pointed out that high expression levels of TREM1 increased the infiltration of regulatory T cells and reduced the infiltration of CD8 + T cells. Similarly, this study found that TREM1 expression could shape the TIME, including neutrophils and dendritic cells. Previous studies have suggested that CD4 + T cells, CD8 + T cells, neutrophils, and dendritic cells are associated with impaired proliferation, cytokine production, and migratory capacity of effector T cells [45]. Meng et al. [46] first characterized the infiltration of immunocytes in PRAD via the CIBERSORT algorithm and showed that M2 macrophages were related to gene markers that could predict the prognosis of PRAD patients. These results are consistent with our finding that METTL7B, HOXB8, and TREM1 were positively correlated with M2 macrophages. Regulating the expression levels of METTL7B, HOXB8, and TREM1 may therefore have remarkable clinical applications in enhancing immunotherapy, which has shown good prospects in treating cancer. We will continue to focus on genes related to the tumor microenvironment of PRAD; further exploration of these genes should help bring immunotherapy to patients with PRAD sooner. More experiments, such as mouse studies, molecular biology research, and clinical testing, need to be performed to validate the results of this study.

5 Conclusions

This study explored the expression levels of three TIME-related genes, METTL7B, HOXB8, and TREM1, which correlated with the prognosis of patients with PRAD. Moreover, targeting these TIME-related genes might have important clinical implications when making decisions about immunotherapy in PRAD.

When all else fails: Treating hyperhidrosis with endoscopic thoracic sympathectomy


Key takeaways:

  • Endoscopic thoracic sympathectomy (ETS) is often avoided by dermatologists due to its risks.
  • In these interviews, a patient recounts his journey with ETS and a surgeon discusses the merits of this surgery.

It was just another day at school when sophomore Cole Villaflor and his classmates were asked to write on-demand essays in class. As he approached this seemingly easy assignment, his hands began to sweat, dampening the page. Trying to ignore it, Villaflor continued to write, but his perspiration caused the ink to smear and the page to tear. After struggling through his essay, Cole, embarrassed and defeated, handed his paper to the teacher and explained the reason for its battered presentation: hyperhidrosis.

Cole’s treatment journey

During elementary school, Villafor was diagnosed with palmar and plantar hyperhidrosis, and decided to seek help for his condition due to the large, negative impact that hyperhidrosis had on his life.

“It made it uncomfortable to go to school and shake hands with others because my palms were dripping with sweat,” Villafor told Healio. “It was difficult to do schoolwork as my hands would dampen the page and tear through it and … it made gripping a basketball difficult, as it became slippery. This made me fearful of social interactions and made me anxious to participate in activities I loved.”

Although he tried many treatments to block or lessen the sweating of his hands, including prescribed and over-the-counter antiperspirants such as Drysol (aluminum chloride hexahydrate, Person & Covey), glycopyrrolate pills, Qbrexza wipes (glycopyrronium, Journey Medical) and Carpe lotion (aluminum sesquichlorohydrate, Carpe), nothing worked.

“The products I tried either showed no results or only minimally lessened the volume of sweat and some made my hands swell more,” Villafor said. “My parents also spent money to do a panel of bloodwork to rule any other issues out.”

Eventually Cole turned to Botox, receiving 30 to 40 injections in each palm every 4 to 6 months.

“These painful injections showed little results and didn’t provide the relief I was looking for,” Villafor explained after enduring 4 years of failed treatments. “So, after trying just about every solution I could find, I set my sights to surgery.”

Endoscopic thoracic sympathectomy surgery process

Endoscopic thoracic sympathectomy (ETS) is a procedure that is performed to correct hyperhidrosis by interrupting the transmission of nerve signals from the spinal column to the sweat glands.

Michael Levy, MD, PhD, chief of pediatric neurosurgery at Rady Children’s Hospital-San Diego and professor at UC San Diego School of Medicine, along with Timothy Fairbanks, MD, pediatric surgeon at Rady Children’s Hospital, were the surgeons who performed Villafor’s procedure to correct his hyperhidrosis.

In an exclusive interview with Healio, Levy explained every detail of how the procedure works.

First, the patient is brought into the operating room, and the anesthesiologist uses a special tube to intubate the patient. This tube allows air to be directed into each lung separately, so that one lung can be deflated at a time.

“This must be done because the sympathetic chain travels perpendicular to the ribs and cannot be accessed otherwise,” Levy explained.

Next, three small incisions are made in the mid axillary line which allow the surgeons to safely place the endoscope, visualize and retract the lung as it is deflated and expose the sympathetic chain, which is located over the ribs at the third and fourth thoracic levels.

After initially identifying the first and second thoracic vertebrae, the surgeons mobilize the nerve in between to ensure that no attachments remain. Smaller nerve fibers may connect the chain at different levels. If these fibers are not separated, the procedure will not be successful.

“Any attachments can lead to further sweating,” Levy explained. “We operate a little lower than most surgeons. Some will operate as high as the second cervical level, but that’s riskier. Operating at the third and fourth vertebrae means that we aren’t taking any chances with regard to creating a Horner’s syndrome as a complication.”

After this is complete, the lung is reinflated, the endoscope is withdrawn to make sure the lung has expanded, and extra air is removed. This process is then repeated on the other side.

Safety concerns

While many dermatologists do not recommend this procedure due to the safety concerns surrounding the deflation and reinflation of the lungs, Levy recommends it to patients with hyperhidrosis because “it is a minimally invasive endoscopic outpatient surgery.”

According to Levy, the process of partially deflating the lung to move it off the sympathetic chain takes 7 to 10 minutes, after which the lung is reinflated. Many safety precautions are taken to ensure that no complications arise, Levy said, including a chest X-ray after surgery to confirm that the lungs are completely reinflated. Additionally, complications related to the lungs are minimal given the usual lack of pulmonary problems in children and adolescents.

The surgery is normally completed in 1 day, and less than 5% of patients require an overnight stay due to possible complications, including infection, which can be treated with oral antibiotics, or a lack of lung reinflation.

“Fortunately, most of the children and adolescents we operate on have no medical problems, so the lungs refill easily,” he said. “We haven’t had any infections or failures so far and no other complications.”

These safety findings are supported by other studies. In a retrospective study by Huang et al published in Frontiers in Surgery in 2023, of 109 patients that underwent ETS, 13 developed a slight pneumothorax resulting in less than a 30% unilateral pulmonary compression. During reexamination 1 month later, gas absorption was observed with chest radiographs despite no special treatment being pursued. In fact, no serious complication occurred in any of the patients.

Dermatologists may also be concerned with the risk for excessive compensatory sweating — a form of rebound sweating that occurs on other parts of the body that were not previously affected — that may follow ETS.

In a 2022 study published in Surgical Endoscopy, Woo et al conducted a survey-based study of 231 patients who underwent ETS for palmar and/or axillary hyperhidrosis. Of these patients, 86.1% experienced compensatory sweating following their surgery; however, 94% expressed satisfaction with the surgery. In fact, when the authors compared satisfaction ratings of the compensatory sweating group vs. the non-compensatory sweating group, there was no difference.

According to another 2018 study, published in the Journal of Thoracic Disease, Weng and colleagues reviewed data from nearly 30 randomized controlled trials, cohort studies and case series and found that ETS performed at the fourth thoracic level was linked to a satisfaction rate of 94% to 100%. Regarding compensatory sweating, incidence ranged from 0% to 88%, but less than 1% of patients experienced the severe form, indicating that this procedure is safe, effective and "could be recommended as one of the standard treatment methods" for primary palmar hyperhidrosis, according to the researchers.

In Levy’s experience, all patients who were chosen appropriately were satisfied with the outcomes, even in the presence of compensatory sweating.

“Overall, the procedure is a permanent — and the best — way to ensure you get a complete resolution of the hyperhidrosis,” Levy said.

Giving ETS a chance

It is no secret that many dermatologists are quite wary of this surgery. In fact, Villafor experienced this aversion firsthand during his treatment journey.

“After receiving Botox, I consulted with a dermatologist who told me that surgery for hyperhidrosis was outdated and ‘archaic,’” Villafor recounted. “I then went to another dermatologist for a second opinion who told me the same thing, claiming that it is harmful and not a viable solution.”

“Hearing this initially discouraged me from ever getting a permanent solution to my problems,” he added.

After conducting his own research, Villafor found Levy and Fairbanks at Rady Children’s Hospital.

“I sat down with them and they showed me that the dermatologists’ perception of the surgery was not accurate,” he said.

According to Villafor, Levy and Fairbanks described the surgery as a minimally invasive outpatient procedure that achieves high success rates. After Cole was confirmed as a viable candidate for the surgery, he received the procedure — and it has only changed his life for the better.

“Post-surgery, my life has been greatly improved,” Cole said. “My hands now sweat less than or equal to that of a normal person and I no longer have to fear the previously uncomfortable situations I had to bear.”

Is ETS worth it?

“Hyperhidrosis greatly impacted my daily life and the activities I participated in,” Villafor said, echoing the struggle of many other patients, especially those with palmar hyperhidrosis, since that form of the condition interferes with the constant handling of objects.

For example, musicians may not be able to pursue their musical passions due to an inability to hold the instrument, writers must wear gloves simply to grip a pencil, or, in the case of Villafor, athletes that use their hands may give up their sporting ambitions.

In fact, in the study conducted by Woo et al, 16% of the patients that underwent ETS cited “substantial inconvenience related to their jobs” as the reason for having the surgery.

Furthermore, the stigma surrounding this condition can cause sufferers to isolate themselves from social interactions for fear of having to shake hands. Many even revamp their entire wardrobe to accommodate their condition, sticking to dark or baggy clothing made of material that repels sweat.

It is clear that those suffering from hyperhidrosis deserve effective treatment options, and although Botox injections, antiperspirants and the multitude of other treatments that Villafor tried did not work for him, they may work for others.

“I do believe that there are many good options out there that others should try in order to treat their hyperhidrosis as it may work for them,” he said. “But ETS provided me the relief that I was looking for that I couldn’t find in other products.”

So, while many dermatologists may continue to disregard ETS as an “archaic” and unsafe way to resolve hyperhidrosis, to those suffering from the stigma and life-changing effects of this condition, the surgery may be worth it.

“I am extremely grateful to everyone that had a role in my surgery,” Villafor said, “and looking back, I would have definitely done it again.”

Multimodal neuro-nanotechnology: Challenging the existing paradigm in glioblastoma therapy


Abstract

Integrating multimodal neuro- and nanotechnology-enabled precision immunotherapies with extant systemic immunotherapies may finally provide a significant breakthrough for combatting glioblastoma (GBM). The potency of this approach lies in its ability to train the immune system to efficiently identify and eradicate cancer cells, thereby creating anti-tumor immune memory while minimizing multi-mechanistic immune suppression. A critical aspect of these therapies is the controlled, spatiotemporal delivery of structurally defined nanotherapeutics into the GBM tumor microenvironment (TME). Architectures such as spherical nucleic acids or poly(beta-amino ester)/dendrimer-based nanoparticles have shown promising results in preclinical models due to their multivalency and abilities to activate antigen-presenting cells and prime antigen-specific T cells. These nanostructures also permit systematic variation to optimize their distribution, TME accumulation, cellular uptake, and overall immunostimulatory effects. Delving deeper into the relationships between nanotherapeutic structures and their performance will accelerate nano-drug development and pave the way for the rapid clinical translation of advanced nanomedicines. In addition, the efficacy of nanotechnology-based immunotherapies may be enhanced when integrated with emerging precision surgical techniques, such as laser interstitial thermal therapy, and when combined with systemic immunotherapies, particularly inhibitors of immune-mediated checkpoints and immunosuppressive adenosine signaling. In this perspective, we highlight the potential of emerging treatment modalities, combining advances in biomedical engineering and neurotechnology development with existing immunotherapies to overcome treatment resistance and transform the management of GBM. We conclude with a call to action for researchers to leverage these technologies and accelerate their translation into the clinic.

The pharmacogenetics of tacrolimus in renal transplant patients: association with tremors, new-onset diabetes and other clinical events


Abstract

Our study is the first to investigate the effect of SNPs in CYP3A5, CYP3A4, ABCB1 and POR genes on the incidence of tremors, nephrotoxicity, and diabetes mellitus. A total of 223 renal transplant patients receiving tacrolimus and mycophenolate mofetil (MMF) were recruited. Both adult and pediatric patients participated in the study. Genotyping was performed using PROFLEX-PCR followed by RFLP. MPA and tacrolimus plasma concentrations were measured by immunoassay. The AUC0-12h of MMF was estimated by a Bayesian method. We found a statistically significant association between the CYP3A5*3 and CYP3A4*1B genotypes and tacrolimus exposure. We found a lower occurrence of nephrotoxicity (p = 0.03), tremor (p = 0.01), and new-onset diabetes (p = 0.002) associated with the CYP3A5*1 allele. The CYP3A4*1B allele was significantly associated with a lower occurrence of new-onset diabetes (p = 0.026). The CYP3A5*1 allele was significantly associated with an increased risk of acute and chronic rejection (p = 0.03 and p < 0.001, respectively). Our results support the usefulness of tacrolimus pharmacokinetics in pre-kidney transplant assessments.

Sleep, Circadian Rhythm, and Mental Health


Summary: A new study highlights the critical link between sleep, circadian rhythms, and psychiatric disorders, suggesting that disturbances in sleep and internal body clocks can trigger or exacerbate mental health issues. The research underscores the prevalence of sleep-circadian disturbances across all psychiatric disorders, pointing to the need for holistic treatments that address these factors.

The review emphasizes the potential of new therapeutic approaches, such as light therapy and cognitive behavioral therapy for insomnia (CBT-I), to improve mental health outcomes. This work, incorporating insights from an international team, marks a step forward in understanding and treating psychiatric conditions by focusing on sleep and circadian science.

Key Facts:

  1. Prevalence of Sleep-Circadian Disturbances: Sleep and circadian rhythm disturbances are commonly found across psychiatric disorders, with significant impacts on conditions like insomnia, bipolar disorder, and early psychosis.
  2. Potential Mechanisms: The review explores mechanisms such as genetic predispositions, exposure to light, and changes in neuroplasticity that contribute to the link between sleep-circadian disturbances and psychiatric disorders.
  3. Therapeutic Approaches: Highlighting the effectiveness of treatments like light therapy and CBT-I, the review suggests targeting sleep and circadian factors could lead to innovative treatments for psychiatric conditions.

Source: University of Southampton

Problems with our sleep and internal body clock can trigger or worsen a range of psychiatric disorders, according to a new review of recent research evidence.

The review, published today [19 February] in Proceedings of the National Academy of Sciences (PNAS), suggests gaining a better understanding of the relationship between sleep, circadian rhythms and mental health could unlock new holistic treatments to alleviate mental health problems.


“Sleep-circadian disturbances are the rule, rather than the exception, across every category of psychiatric disorders,” says Dr Sarah L. Chellappa from the University of Southampton, senior author of the review. “Sleep disturbances, such as insomnia, are well understood in the development and maintenance of psychiatric disorders, but our understanding of circadian disturbances lags behind.

“It is important to understand how these factors interact so we can develop and apply sleep-circadian interventions that benefit the sleep and mental health symptoms of patients.”

An international team of researchers from the University of Southampton, King’s College London, Stanford University and other institutions explored recent evidence on sleep and circadian factors, focusing on adolescents and young adults with psychiatric disorders. This is a time when people are most at risk of developing mental health disorders and when disruptions to sleep and circadian rhythms are likely to occur.

Insomnia is more common in people with mental health disorders than in the general population – during remission, acute episodes and especially in early psychosis, where difficulty falling and staying asleep affects over half of individuals.

Around a quarter to a third of people with mood disorders have both insomnia and hypersomnia, where patients find it hard to sleep at night, but are sleepier in the daytime. Similar proportions of people with psychosis experience this combination of sleep disorders.

Meanwhile, the few studies looking at circadian rhythm sleep-wake disorders (CRSWD) suggest that 32 per cent of patients with bipolar disorder go to sleep and wake later than usual (a condition called Delayed Sleep-Wake Phase Disorder). Body clock processes (such as endogenous cortisol rhythms) have been reported to run seven hours ahead during manic episodes and four to five hours behind during the depressive phase. Timing is normalised upon successful treatment.

What are the mechanisms?

The researchers examined the possible mechanisms behind sleep-circadian disturbances in psychiatric disorders. During adolescence, physiological changes in how we sleep combine with behavioural changes, such as staying up later, getting less sleep on school nights and sleeping in on weekends.

Dr Nicholas Meyer, from King’s College London, who co-led the review, said: “This variability in the duration and timing of sleep can lead to a misalignment between our body clock and our sleep-wake rhythms, which can increase the risk of sleep disturbances and adverse mental health outcomes.”

Researchers also looked at the role of genes, exposure to light, neuroplasticity and other possible factors. Those with a genetic predisposition towards a reduced change in activity levels between rest and wake phases are more likely to experience depression, mood instability, and neuroticism. Population-level surveys show self-reported time outdoors was associated with a lower probability of mood disorder. Sleep is thought to play a key role in how the brain forms new neural connections and processes emotional memories.

New treatments

Dr Renske Lok, from Stanford University, who co-led the review said: “Targeting sleep and circadian risk factors presents the opportunity to develop new preventative measures and therapies. Some of these are population-level considerations, such as the timing of school and work days, or changes in the built environment to optimise light exposure. Others are personalised interventions tailored to individual circadian parameters.”

Cognitive Behavioural Therapy for Insomnia (CBT-I) has been shown to reduce anxiety and depressive symptoms, as well as trauma symptoms in people experiencing PTSD.

In unipolar and bipolar depression, light therapy (delivered on rising in the morning) was effective compared with a placebo. Using it in combination with medication was also more effective than using medication alone. Other findings suggest light is effective in treating perinatal depression.

The timing of medication, meals and exercise could also impact circadian phases. Taking melatonin in the evening can help people with Delayed Sleep-Wake Phase Disorder to shift their body clock forward towards a more conventional sleep pattern and may have beneficial effects in comorbid psychiatric disorders. Nightshift work can adversely affect mental health but eating in the daytime rather than during the night could help, with research showing daytime eating prevents mood impairment.

The review also points to innovative multicomponent interventions, such as the Transdiagnostic Intervention for Sleep and Circadian Dysfunction (TranS-C). This combines modules that address different aspects of sleep and circadian rhythms into a sleep health framework that applies to a range of mental health disorders.

Dr Chellappa said: “Collectively, research into mental health is poised to take advantage of extraordinary advances in sleep and circadian science and translate these into improved understanding and treatment of psychiatric disorders.”

Dopamine’s Role in Movement Explored


Summary: A new study explains how dopamine influences movement sequences, offering hope for Parkinson’s disease (PD) therapies. Researchers observed that dopamine not only motivates movement but also controls the length and lateralization of actions, with different neurons activating for movement initiation and reward reception.

Through innovative experiments involving genetically modified mice, the team discovered that dopamine’s effect on movement is side-specific, enhancing actions on the side of the body opposite to where the neurons are active.

These findings underscore dopamine’s complex role in movement and its potential for developing targeted treatments for PD, focusing on the restoration of specific motor functions.

Key Facts:

  1. Dopamine and Movement Sequences: Dopamine signals directly influence the length and initiation of movement sequences, suggesting a nuanced role beyond general motivation.
  2. Lateralization of Dopamine’s Effects: The study reveals that dopamine’s impact on movement is contralateral, meaning it specifically enhances movements on the opposite side of the body from where the dopamine neurons are active.
  3. Potential for Targeted PD Therapies: Understanding the distinct roles of movement-related and reward-related dopamine neurons opens new avenues for creating PD treatments that address specific movement impairments.

Source: Champalimaud Centre for the Unknown

Imagine the act of walking. It’s something most able-bodied people do without a second thought. Yet it is actually a complex process involving various neurological and physiological systems. PD is a condition where the brain slowly loses specific cells, called dopamine neurons, resulting in reduced strength and speed of movements.

However, there’s another important aspect that gets affected: the length of actions. Someone with PD might not only move more slowly but also take fewer steps in a walking sequence or bout before stopping.

This study shows that dopamine signals directly affect the length of movement sequences, taking us a step closer to unlocking new therapeutic targets for enhancing motor function in PD.


“Dopamine is most closely associated with reward and pleasure, and is often referred to as the ‘feel-good’ neurotransmitter”, points out Marcelo Mendonça, the study’s first author. “But, for dopamine-deficient individuals with PD, it’s typically the movement impairments that most impact their quality of life. One aspect that has always interested us is the concept of lateralisation.

“In PD, symptoms manifest asymmetrically, often beginning on one side of the body before the other. With this study, we wanted to explore the theory that dopamine cells do more than just motivate us to move: they specifically enhance movements on the opposite side of our body”.

Shedding Light on the Brain

To this end, the researchers developed a novel behavioural task, which required freely moving mice to use one paw at a time to press a lever in order to obtain a reward (a drop of sugar water). To understand what was happening in the brain during this task, the researchers used one-photon imaging, similar to giving the mice a tiny, wearable microscope.

This microscope was aimed at the Substantia nigra pars compacta (SNc), a dopamine-rich region deep within the brain that is significantly impacted in PD, allowing the scientists to see the activity of brain cells in real-time.

They genetically engineered these mice so that their dopamine neurons would light up when active, using a special protein that glows under the microscope. This meant that every time a mouse was about to move its paw or succeeded in getting a reward, the scientists could see which neurons were lighting up and getting excited about the action or the reward.
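For readers curious about how fluorescence data like this is typically turned into an activity signal, here is a minimal Python sketch of a standard ΔF/F calculation and event alignment. The frame rate, window sizes and function names are illustrative assumptions, not details from the study.

```python
import numpy as np

def delta_f_over_f(trace, fs=20.0, baseline_percentile=10, window_s=30.0):
    """Convert a raw fluorescence trace to dF/F using a rolling-percentile baseline.

    trace: 1D array of raw fluorescence for one neuron (hypothetical data).
    fs: imaging frame rate in Hz (assumed value, not taken from the study).
    """
    win = int(window_s * fs)
    baseline = np.array([
        np.percentile(trace[max(0, i - win): i + 1], baseline_percentile)
        for i in range(len(trace))
    ])
    return (trace - baseline) / baseline

def event_aligned_activity(dff, event_frames, pre=10, post=40):
    """Average dF/F in a window around each behavioural event (e.g., lever presses)."""
    snippets = [dff[f - pre: f + post] for f in event_frames
                if f - pre >= 0 and f + post <= len(dff)]
    return np.mean(snippets, axis=0)  # mean event-aligned trace
```

Averaging event-aligned traces separately for movement onsets and reward deliveries is one simple way to see which neurons “light up” for the action and which for the reward.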

Observing these glowing neurons, the discoveries were, quite literally, illuminating. “There were two types of dopamine neurons mixed together in the same area of the brain”, notes Mendonça. “Some neurons became active when the mouse was about to move, while others lit up when the mouse got its reward. But what really caught our attention was how these neurons reacted depending on which paw the mouse used”.

How Dopamine Chooses Sides

The team noticed that the neurons excited by movement lit up more when the mouse used the paw opposite to the brain side being observed. For example, if they were looking at the right side of the brain, the neurons were more active when the mouse used its left paw, and vice versa. Digging deeper, the scientists found that the activity of these movement-related neurons not only signalled the start of a movement but also seemed to encode, or represent, the length of the movement sequences (the number of lever presses).

Mendonça elaborates, “The more presses the mouse was about to make with the paw opposite the brain side we were observing, the more active these neurons became. For example, neurons on the right side of the brain became more excited when the mouse used its left paw to press the lever more often.

“But when the mouse pressed the lever more with its right paw, these neurons didn’t show the same increase in excitement. In other words, these neurons care not just about whether the mouse moves, but also about how much it moves, and on which side of the body”.
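A minimal sketch of the kind of analysis implied here, assuming hypothetical per-trial variables (mean pre-movement dF/F, number of lever presses, and which paw was used relative to the imaged hemisphere), would correlate activity with upcoming sequence length separately for each side. This illustrates the idea only; it is not the authors' actual pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

def activity_vs_sequence_length(pre_move_activity, n_presses, paw_side):
    """Correlate pre-movement dF/F with upcoming sequence length, per paw.

    pre_move_activity: mean dF/F just before each sequence starts (hypothetical).
    n_presses: number of lever presses in each sequence.
    paw_side: 'contra' or 'ipsi' for each trial, relative to the imaged hemisphere.
    """
    results = {}
    for side in ("contra", "ipsi"):
        mask = np.asarray(paw_side) == side
        r, p = pearsonr(np.asarray(pre_move_activity)[mask],
                        np.asarray(n_presses)[mask])
        results[side] = (r, p)
    return results

# A positive correlation on 'contra' trials but not 'ipsi' trials would match
# the lateralised sequence-length coding described above.
```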

To study how losing dopamine affects movement, the researchers used a neurotoxin to selectively reduce dopamine-producing cells on one side of a mouse’s brain. This method mimics conditions like PD, where dopamine levels drop and movement becomes difficult. By doing this, they could see how less dopamine changes the way mice press a lever with either paw.

They discovered that reducing dopamine on one side led to fewer lever presses with the paw on the opposite side, while the paw on the same side remained unaffected. This provided further evidence for the side-specific influence of dopamine on movement.

Implications and Future Directions

Rui Costa, the study’s senior author, picks up the story, “Our findings suggest that movement-related dopamine neurons do more than just provide general motivation to move – they can modulate the length of a sequence of movements in a contralateral limb, for example. In contrast, the activity of reward-related dopamine neurons is more universal, and doesn’t favour one side over the other. This reveals a more complex role of dopamine neurons in movement than previously thought”.

Costa reflects, “The different symptoms observed in PD patients could be perhaps related to which dopamine neurons are lost—for instance, those more linked to movement or to reward. This could potentially lead to management strategies for the disease that are more tailored to the type of dopamine neurons that are lost, especially now that we know there are different types of genetically defined dopamine neurons in the brain”.


Abstract

Dopamine neuron activity encodes the length of upcoming contralateral movement sequences

Highlights

  • Developed a freely moving task where mice learn individual forelimb sequences
  • Movement-modulated DANs encode the length of contralateral movement sequences
  • The activity of reward-modulated DANs is not lateralized
  • Dopamine depletion impaired contralateral, but not ipsilateral, sequence length

Summary

Dopaminergic neurons (DANs) in the substantia nigra pars compacta (SNc) have been related to movement speed, and loss of these neurons leads to bradykinesia in Parkinson’s disease (PD). However, other aspects of movement vigor are also affected in PD; for example, movement sequences are typically shorter.

Yet the relationship between the activity of DANs and the length of movement sequences is unknown. We imaged activity of SNc DANs in mice trained in a freely moving operant task, which relies on individual forelimb sequences.

We uncovered a similar proportion of SNc DANs increasing their activity before either ipsilateral or contralateral sequences. However, the magnitude of this activity was higher for contralateral actions and was related to contralateral but not ipsilateral sequence length.

In contrast, the activity of reward-modulated DANs, largely distinct from those modulated by movement, was not lateralized. Finally, unilateral dopamine depletion impaired contralateral, but not ipsilateral, sequence length.

These results indicate that movement-initiation DANs encode more than a general motivation signal and invigorate aspects of contralateral movements.

AI Determines Sex of Person From Brain Scans


Summary: Researchers developed an artificial intelligence model that determines the sex of individuals from brain scans with more than 90% accuracy. The finding supports the view that significant sex differences in brain organization exist, helping to resolve a long-standing controversy.

The AI model focused on dynamic MRI scans, identifying specific brain networks—such as the default mode, striatum, and limbic networks—as critical in distinguishing male from female brains.

This research not only deepens our understanding of brain development and aging but also opens new avenues for addressing sex-specific vulnerabilities in psychiatric and neurological disorders.

Key Facts:

  1. High Accuracy in Sex Determination: The AI model’s ability to distinguish between male and female brain scans with more than 90% accuracy highlights intrinsic sex differences in brain organization.
  2. Key Brain Networks Identified: Explainable AI tools identified the default mode network, striatum, and limbic network as crucial areas the model analyzed to determine the sex of the brain scans, underscoring their roles in cognitive functions and behaviors.
  3. Potential for Personalized Medicine: The findings suggest that acknowledging sex differences in brain organization is vital for developing targeted treatments for neuropsychiatric conditions, paving the way for personalized medicine approaches.

Source: Stanford

A new study by Stanford Medicine investigators unveils a new artificial intelligence model that was more than 90% successful at determining whether scans of brain activity came from a woman or a man.

The findings, to be published Feb. 19 in the Proceedings of the National Academy of Sciences, help resolve a long-term controversy about whether reliable sex differences exist in the human brain and suggest that understanding these differences may be critical to addressing neuropsychiatric conditions that affect women and men differently.


“A key motivation for this study is that sex plays a crucial role in human brain development, in aging, and in the manifestation of psychiatric and neurological disorders,” said Vinod Menon, PhD, professor of psychiatry and behavioral sciences and director of the Stanford Cognitive and Systems Neuroscience Laboratory.

“Identifying consistent and replicable sex differences in the healthy adult brain is a critical step toward a deeper understanding of sex-specific vulnerabilities in psychiatric and neurological disorders.”

Menon is the study’s senior author. The lead authors are senior research scientist Srikanth Ryali, PhD, and academic staff researcher Yuan Zhang, PhD.

“Hotspots” that most helped the model distinguish male brains from female ones include the default mode network, a brain system that helps us process self-referential information, and the striatum and limbic network, which are involved in learning and how we respond to rewards.

The investigators noted that this work does not weigh in on whether sex-related differences arise early in life or may be driven by hormonal differences or the different societal circumstances that men and women may be more likely to encounter.

Uncovering brain differences

The extent to which a person’s sex affects how their brain is organized and operates has long been a point of dispute among scientists. While we know the sex chromosomes we are born with help determine the cocktail of hormones our brains are exposed to — particularly during early development, puberty and aging — researchers have long struggled to connect sex to concrete differences in the human brain.

Brain structures tend to look much the same in men and women, and previous research examining how brain regions work together has also largely failed to turn up consistent brain indicators of sex.

In their current study, Menon and his team took advantage of recent advances in artificial intelligence, as well as access to multiple large datasets, to pursue a more powerful analysis than has previously been employed.

First, they created a deep neural network model, which learns to classify brain imaging data: As the researchers showed brain scans to the model and told it that it was looking at a male or female brain, the model started to “notice” what subtle patterns could help it tell the difference.

This model demonstrated superior performance compared with those in previous studies, in part because it used a deep neural network that analyzes dynamic MRI scans. This approach captures the intricate interplay among different brain regions. When the researchers tested the model on around 1,500 brain scans, it could almost always tell if the scan came from a woman or a man.
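The published model is not reproduced here, but as a rough sketch of the general approach (a neural network trained to classify sex from regional fMRI time series), a toy PyTorch example might look like the following. The architecture, region count and stand-in data are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SexClassifier(nn.Module):
    """Toy 1D-CNN over regional fMRI time series (regions x timepoints).

    Illustrative only: the published model's architecture and preprocessing differ.
    """
    def __init__(self, n_regions=246, n_timepoints=200):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_regions, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),      # pool over the time dimension
        )
        self.classifier = nn.Linear(64, 2)  # two classes: female / male

    def forward(self, x):                  # x: (batch, regions, timepoints)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

model = SexClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One hypothetical training step on random stand-in data.
x = torch.randn(8, 246, 200)       # 8 scans, 246 regions, 200 timepoints (assumed sizes)
y = torch.randint(0, 2, (8,))      # 0 = female, 1 = male (labels are arbitrary here)
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

The design choice the article emphasizes is that the model sees dynamic, time-resolved activity rather than a single static connectivity matrix; the time dimension in this sketch stands in for that.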

The model’s success suggests that detectable sex differences do exist in the brain but just haven’t been picked up reliably before. The fact that it worked so well across different datasets, including brain scans from multiple sites in the U.S. and Europe, makes the findings especially convincing, as it controls for many of the confounds that can plague studies of this kind.

“This is a very strong piece of evidence that sex is a robust determinant of human brain organization,” Menon said.

Making predictions

Until recently, a model like the one Menon’s team employed would help researchers sort brains into different groups but wouldn’t provide information about how the sorting happened. Today, however, researchers have access to a tool called “explainable AI,” which can sift through vast amounts of data to explain how a model’s decisions are made.

Using explainable AI, Menon and his team identified the brain networks that were most important to the model’s judgment of whether a brain scan came from a man or a woman. They found the model was most often looking to the default mode network, striatum, and the limbic network to make the call.
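As a simplified stand-in for the explainable-AI step, a gradient-based saliency map can be computed directly with autograd. This is an illustrative sketch of feature attribution in general, not the specific attribution method used in the study; it assumes a model like the toy classifier sketched above.

```python
import torch

def saliency_per_region(model, scan, target_class):
    """Simple input-gradient saliency: how strongly each brain region's signal
    influences the model's score for the target class.

    scan: tensor of shape (regions, timepoints); model as in the sketch above.
    """
    x = scan.unsqueeze(0).clone().requires_grad_(True)  # add batch dimension
    score = model(x)[0, target_class]
    score.backward()
    # Aggregate gradient magnitude over time to get one importance value per region.
    return x.grad.abs().mean(dim=-1).squeeze(0)

# Regions with the largest values would be the model's "hotspots" for that scan.
```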

The team then wondered if they could create another model that could predict how well participants would do on certain cognitive tasks based on functional brain features that differ between women and men.

They developed sex-specific models of cognitive abilities: One model effectively predicted cognitive performance in men but not women, and another in women but not men. The findings indicate that functional brain characteristics varying between sexes have significant behavioral implications.
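As a hedged illustration of what sex-specific prediction models could look like in practice, the sketch below fits one regression per sex on hypothetical brain features and cognitive scores. It is not the modelling approach used in the paper, just a minimal example of the idea.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def sex_specific_cognition_models(features, scores, sex_labels):
    """Fit one ridge-regression model per sex, predicting cognitive scores
    from functional brain features (all inputs hypothetical).
    """
    results = {}
    for sex in ("female", "male"):
        mask = np.asarray(sex_labels) == sex
        model = Ridge(alpha=1.0)
        # Cross-validated R^2 within one sex; applying the fitted model to the
        # other sex (not shown) would test whether the features generalize.
        r2 = cross_val_score(model, features[mask], scores[mask],
                             cv=5, scoring="r2")
        results[sex] = r2.mean()
    return results
```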

“These models worked really well because we successfully separated brain patterns between sexes,” Menon said. “That tells me that overlooking sex differences in brain organization could lead us to miss key factors underlying neuropsychiatric disorders.”

While the team applied their deep neural network model to questions about sex differences, Menon says the model can be applied to answer questions regarding how just about any aspect of brain connectivity might relate to any kind of cognitive ability or behavior. He and his team plan to make their model publicly available for any researcher to use.

“Our AI models have very broad applicability,” Menon said. “A researcher could use our models to look for brain differences linked to learning impairments or social functioning differences, for instance — aspects we are keen to understand better to aid individuals in adapting to and surmounting these challenges.”