Using AI to improve Alzheimer’s treatment through the ‘gut-brain axis’


Cleveland Clinic researchers are using artificial intelligence to uncover the link between the gut microbiome and Alzheimer’s disease.

Previous studies have shown that Alzheimer’s disease patients experience changes in their gut bacteria as the disease develops. The newly published Cell Reports study outlines a computational method to determine how bacterial byproducts called metabolites interact with receptors on cells and contribute to Alzheimer’s disease.

Feixiong Cheng, Ph.D., inaugural director of the Cleveland Clinic Genome Center, worked in close collaboration with the Lou Ruvo Center for Brain Health and the Center for Microbiome and Human Health (CMHH). The study ranks metabolites and receptors by the likelihood they will interact with each other, and the likelihood that the pair will influence Alzheimer’s disease. The data provide one of the most comprehensive roadmaps to studying metabolite-associated diseases to date.

Bacteria release metabolites into our systems as they break down the food we eat for energy. The metabolites then interact with and influence cells, fueling cellular processes that can be helpful or detrimental to health. In addition to Alzheimer’s disease, researchers have connected metabolites to heart disease, infertility, cancers, autoimmune disorders and allergies.

Preventing harmful interactions between metabolites and our cells could help fight disease. Researchers are working to develop drugs that promote or block specific metabolites from connecting with receptors on the cell surface. Progress with this approach is slow because of the sheer amount of information needed to identify a target receptor.

“Gut metabolites are the key to many physiological processes in our bodies, and for every key there is a lock for human health and disease,” said Dr. Cheng, Staff in Genomic Medicine. “The problem is that we have tens of thousands of receptors and thousands of metabolites in our system, so manually figuring out which key goes into which lock has been slow and costly. That’s why we decided to use AI.”

Dr. Cheng’s team tested whether well-characterized gut metabolites with existing safety profiles could, if broadly applied, offer effective prevention or even intervention approaches for Alzheimer’s disease and other complex diseases.

Study first author and Cheng Lab postdoctoral fellow Yunguang Qiu, Ph.D., spearheaded a team that included J. Mark Brown, Ph.D., Director of Research, CMHH; James Leverenz, MD, Director of the Cleveland Clinic Lou Ruvo Center for Brain Health and Director of the Cleveland Alzheimer’s Disease Research Center; and neuropsychologist Jessica Caldwell, Ph.D., ABPP/CN, Director of the Women’s Alzheimer’s Movement Prevention Center at Cleveland Clinic Nevada.

The team used a form of AI called machine learning to analyze over 1.09 million potential metabolite-receptor pairs and predict the likelihood that each interaction contributed to Alzheimer’s disease.

The analyses integrated:

  • genetic and proteomic data from human and preclinical Alzheimer’s disease studies
  • the shapes of different receptors (protein structures) and metabolites
  • how different metabolites affect patient-derived brain cells
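
In outline, the pair-scoring step described above is a supervised ranking problem: featurize each candidate metabolite-receptor pair, predict an interaction likelihood, and rank the pairs. The Python sketch below is a hedged toy version; the features, labels, and model choice are invented placeholders, not the study's actual pipeline.

```python
# Toy sketch of metabolite-receptor pair scoring.
# Features and labels are synthetic stand-ins, not the study's data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
n_pairs = 5000

# Invented per-pair features: e.g., a structural docking score, a
# co-expression measure, and a genetic-association score.
X = rng.normal(size=(n_pairs, 3))
# Synthetic "known interaction" labels correlated with the features.
y = (X.sum(axis=1) + rng.normal(scale=1.5, size=n_pairs) > 1).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Score new candidate pairs and rank them for experimental follow-up,
# mirroring how top-ranked pairs (such as agmatine-CA3R) would be
# prioritized for testing in patient-derived cells.
candidates = rng.normal(size=(10, 3))
scores = model.predict_proba(candidates)[:, 1]
for i in np.argsort(-scores)[:3]:
    print(f"pair {i}: predicted interaction likelihood {scores[i]:.2f}")
```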

The team investigated the metabolite-receptor pairs with the highest likelihood of influencing Alzheimer’s disease in brain cells derived from patients with Alzheimer’s disease.

One molecule they focused on was a protective metabolite called agmatine, thought to shield brain cells from inflammation and associated damage. The study found that agmatine was most likely to interact with a receptor called CA3R in Alzheimer’s disease.

Treating Alzheimer’s-affected neurons with agmatine directly reduced CA3R levels, indicating that the metabolite and receptor influence each other. Neurons treated with agmatine also had lower levels of phosphorylated tau proteins, a marker for Alzheimer’s disease.

Dr. Cheng says these experiments demonstrate how his team’s AI algorithms can pave the way for new research avenues into many diseases beyond Alzheimer’s.

“We specifically focused on Alzheimer’s disease, but metabolite-receptor interactions play a role in almost every disease that involves gut microbes,” he said. “We hope that our methods can provide a framework to progress the entire field of metabolite-associated diseases and human health.”

Now, Dr. Cheng and his team are further developing and applying these AI technologies to study how interactions between genetic and environmental factors (including food and gut metabolites) affect human health and disease, including Alzheimer’s disease and other complex diseases.

Will AI rescue us from a world without working antibiotics?


The golden age of antibiotic discovery from the ’50s and ’60s lies far behind us, and antibiotic resistance is slowly becoming an enormous problem. Will artificial intelligence be able to save us from a world without working antibiotics?

Antibiotic resistance is slowly becoming an enormous problem in the medical world. Will artificial intelligence be able to save us from a world without working antibiotics?

The golden age of antibiotic discovery from the ’50s and ’60s lies far behind us. Back then, countless new classes of antibiotics were discovered in rapid succession. Sadly, though, hardly any new types of antibiotics have been found since then, while antibiotic resistance has slowly become an immense problem.

Without the development of new and stronger antibiotics, an era where small injuries and ordinary infections can kill, and where complex procedures such as chemotherapy and surgery become too risky, is a genuine possibility. The World Health Organization has even stated that antibiotic resistance endangers the world as we know it, as it can lead to huge epidemics if we are not careful.

Taking all of the above into consideration, it becomes clear how critical the recent discovery of a potent antibiotic by MIT researchers actually is. They identified the new antibiotic using artificial intelligence, machine learning to be precise. The drug was able to kill many of the most problematic disease-causing bacteria in laboratory testing. Incredibly enough, it was even able to kill off several strains that are resistant to all known current antibiotics.

The scientists used a computer program designed to pick out possible antibiotics that eliminate bacteria using mechanisms that differ from those of currently existing drugs. Amazingly, the program is able to scan over a hundred million chemical compounds per day!
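
The published screen used a deep neural network over molecular structures; purely as an illustration of the general workflow (train on labeled compounds, then rank large candidate sets), here is a hedged Python sketch using Morgan fingerprints and a random forest. The SMILES strings and activity labels are made up, and this stand-in model is far simpler than the one MIT used.

```python
# Illustrative sketch of ML-based antibiotic screening: train on
# labeled molecules, then rank unseen candidates by predicted activity.
# Toy data; the MIT study used a graph neural network, not this model.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles):
    """SMILES string -> 2048-bit Morgan fingerprint as a numpy array."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    arr = np.zeros((2048,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Made-up training set: 1 = inhibited bacterial growth in an assay.
train = {"CCO": 0, "c1ccccc1O": 1, "CC(=O)Oc1ccccc1C(=O)O": 1, "CCN": 0}
X = np.array([featurize(s) for s in train])
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, list(train.values()))

# Rank candidates; a real screen sweeps millions of compounds this way.
candidates = ["CCCCO", "c1ccncc1"]
probs = model.predict_proba(np.array([featurize(s) for s in candidates]))[:, 1]
for smi, p in sorted(zip(candidates, probs), key=lambda t: -t[1]):
    print(f"{smi}: predicted activity {p:.2f}")
```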

MIT scientists employed a machine-learning algorithm to detect a drug called halicin. It is capable of eliminating many strains of bacteria. Halicin (top row) prevented the development of antibiotic resistance in E. coli, while ciprofloxacin (bottom row) did not.

James Collins, one of the researchers and an MIT professor of medical engineering and science, stated that their goal was to create a platform that would enable them to harness the power of artificial intelligence to usher in a new age of antibiotic drug development. Their approach uncovered a marvelous molecule that is arguably one of the most potent antibiotics ever discovered.

Amazingly, the antibiotic mentioned above was not the only antibiotic the researchers found. It turns out they identified several other promising antibiotic candidates as well, which they plan to test further in the future. As if that isn’t enough, the computer model is also believed to be capable of designing new drugs based on what it has learned about the chemical structures that enable drugs to eliminate bacteria.

AI now beats humans at basic tasks — new benchmarks are needed, says major report


Stanford University’s 2024 AI Index charts the meteoric rise of artificial-intelligence tools.

A woman plays Go with an AI-powered robot developed by the firm SenseTime, based in Hong Kong.

Artificial intelligence (AI) systems, such as the chatbot ChatGPT, have become so advanced that they now very nearly match or exceed human performance in tasks including reading comprehension, image classification and competition-level mathematics, according to a new report (see ‘Speedy advances’). Rapid progress in the development of these systems also means that many common benchmarks and tests for assessing them are quickly becoming obsolete.

These are just a few of the top-line findings from the Artificial Intelligence Index Report 2024, which was published on 15 April by the Institute for Human-Centered Artificial Intelligence at Stanford University in California. The report charts the meteoric progress in machine-learning systems over the past decade.

In particular, the report says, new ways of assessing AI — for example, evaluating their performance on complex tasks, such as abstraction and reasoning — are increasingly necessary. “A decade ago, benchmarks would serve the community for 5–10 years,” whereas now they often become irrelevant in just a few years, says Nestor Maslej, a social scientist at Stanford and editor-in-chief of the AI Index. “The pace of gain has been startlingly rapid.”

Speedy advances: Line chart showing the performance of AI systems on certain benchmark tests compared to humans since 2012.
Source: Artificial Intelligence Index Report 2024.

Stanford’s annual AI Index, first published in 2017, is compiled by a group of academic and industry specialists to assess the field’s technical capabilities, costs, ethics and more — with an eye towards informing researchers, policymakers and the public. This year’s report, which is more than 400 pages long and was copy-edited and tightened with the aid of AI tools, notes that AI-related regulation in the United States is sharply rising. But the lack of standardized assessments for responsible use of AI makes it difficult to compare systems in terms of the risks that they pose.

The rising use of AI in science is also highlighted in this year’s edition: for the first time, it dedicates an entire chapter to science applications, highlighting projects including Graph Networks for Materials Exploration (GNoME), a project from Google DeepMind that aims to help chemists discover materials, and GraphCast, another DeepMind tool, which does rapid weather forecasting.

Growing up

The current AI boom — built on neural networks and machine-learning algorithms — dates back to the early 2010s. The field has since rapidly expanded. For example, the number of AI coding projects on GitHub, a common platform for sharing code, increased from about 800 in 2011 to 1.8 million last year. And journal publications about AI roughly tripled over this period, the report says.

Much of the cutting-edge work on AI is being done in industry: that sector produced 51 notable machine-learning systems last year, whereas academic researchers contributed 15. “Academic work is shifting to analysing the models coming out of companies — doing a deeper dive into their weaknesses,” says Raymond Mooney, director of the AI Lab at the University of Texas at Austin, who wasn’t involved in the report.

That includes developing tougher tests to assess the visual, mathematical and even moral-reasoning capabilities of large language models (LLMs), which power chatbots. One of the latest tests is the Graduate-Level Google-Proof Q&A Benchmark (GPQA)1, developed last year by a team including machine-learning researcher David Rein at New York University.

The GPQA, consisting of more than 400 multiple-choice questions, is tough: PhD-level scholars could correctly answer questions in their field 65% of the time. The same scholars, when attempting to answer questions outside their field, scored only 34%, despite having access to the Internet during the test (randomly selecting answers would yield a score of 25%). As of last year, AI systems scored about 30–40%. This year, Rein says, Claude 3 — the latest chatbot released by AI company Anthropic, based in San Francisco, California — scored about 60%. “The rate of progress is pretty shocking to a lot of people, me included,” Rein adds. “It’s quite difficult to make a benchmark that survives for more than a few years.”
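
Under the hood, benchmarks such as GPQA are scored as plain multiple-choice accuracy, which is also why random guessing lands at 25% on four-option questions. Below is a minimal, hedged sketch of such a scoring loop; the questions and the `ask_model` function are invented placeholders, not the actual GPQA harness.

```python
# Minimal multiple-choice scoring loop. ask_model() is a hypothetical
# stand-in for a call to an LLM; here it just guesses at random,
# which converges to the 25% chance baseline on four options.
import random

def ask_model(question, options):
    return random.choice("ABCD")  # placeholder for a real model call

items = [
    {"q": "Toy question 1", "options": ["..."] * 4, "answer": "C"},
    {"q": "Toy question 2", "options": ["..."] * 4, "answer": "A"},
] * 500  # repeat so the chance baseline is visible

correct = sum(ask_model(i["q"], i["options"]) == i["answer"] for i in items)
print(f"accuracy: {correct / len(items):.1%}")  # ~25%
```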

Cost of business

As performance skyrockets, so do costs. GPT-4 — the LLM that powers ChatGPT and that was released in March 2023 by San Francisco-based firm OpenAI — reportedly cost US$78 million to train. Google’s chatbot Gemini Ultra, launched in December, cost $191 million. Many people are concerned about the energy use of these systems, as well as the amount of water needed to cool the data centres that help to run them2. “These systems are impressive, but they’re also very inefficient,” Maslej says.

Costs and energy use for AI models are high in large part because one of the main ways to make current systems better is to make them bigger. This means training them on ever-larger stocks of text and images. The AI Index notes that some researchers now worry about running out of training data. Last year, according to the report, the non-profit research institute Epoch projected that we might exhaust supplies of high-quality language data as soon as this year. (However, the institute’s most recent analysis suggests that 2028 is a better estimate.)

Ethical concerns about how AI is built and used are also mounting. “People are way more nervous about AI than ever before, both in the United States and across the globe,” says Maslej, who sees signs of a growing international divide. “There are now some countries very excited about AI, and others that are very pessimistic.”

In the United States, the report notes a steep rise in regulatory interest. In 2016, there was just one US regulation that mentioned AI; last year, there were 25. “After 2022, there’s a massive spike in the number of AI-related bills that have been proposed” by policymakers, Maslej says.

Regulatory action is increasingly focused on promoting responsible AI use. Although benchmarks are emerging that can score metrics such as an AI tool’s truthfulness, bias and even likability, not everyone is using the same models, Maslej says, which makes cross-comparisons hard. “This is a really important topic,” he says. “We need to bring the community together on this.”

The Dark Side of AI in Mental Health


— High demand for AI training data may increase unethical practices in collecting patient data

With the rise in patient-facing psychiatric chatbots powered by artificial intelligence (AI), the potential need for patient mental health data could drive a boom in cash-for-data scams, according to mental health experts.

A recent example of controversial data collection appeared on Craigslist when a company called Therapy For All allegedly posted an advertisement offering money for recording therapy sessions without any additional information about how the recordings would be used.

The company’s advertisement and website had already been taken down by the time it was highlighted by a mental health influencer on TikTok. However, archived screenshots of the website revealed the company was seeking recorded therapy sessions “to better understand the format, topics, and treatment associated with modern mental healthcare.”

Their stated goal was “to ultimately provide mental healthcare to more people at a lower cost,” according to the defunct website.

In service of that goal, the company was offering $50 for each recording of a therapy session of at least 45 minutes with clear audio of both the patient and their therapist. The company requested that the patients withhold their names to keep the recordings anonymous.

The website stated that the company was committed to providing “top-quality therapy services” for individuals, and that the recordings would be used by its research team “to learn more about approaches to mental healthcare.”

There were no further details about how the company planned to use those recordings, and the company did not respond to requests from MedPage Today to clarify its business model.

However, experts suggested this is just one example of an unexpected incentive created by the growth of AI in mental healthcare.

John Torous, MD, director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston, told MedPage Today that misuse of patient data related to AI models is an “extremely legitimate concern,” because large language models are only as good as their training data.

“Their chief weakness is they need vast amounts of data to truly be good,” Torous said. “Whoever has the best data will likely have the most practical or — dare I say — the best model.”

He added that high-quality patient data is likely going to be the limiting resource for developing AI-powered tools related to mental healthcare, which will increase the demand and, therefore, the value of this kind of data.

“This is the oil that’s going to power healthcare AI,” Torous added.

“They need to have millions, if not billions, of examples to train on,” he added. “This is gonna become a bigger and bigger trend.”

Torous highlighted that mental healthcare technology companies have already been caught crossing this line with unethical use of patient-facing AI tools.

For example, in early 2023, nonprofit mental health platform Koko announced that it used OpenAI’s GPT-3 to experiment with online mental health counseling for roughly 4,000 people without their informed consent. The announcement, which came from CEO Rob Morris’ X (formerly Twitter) account, highlighted the lack of understanding around ethical concerns related to patient consent from these companies, Torous said.

Another example, he noted, came when users of the text message-based mental health support tool Crisis Text Line learned that the company was sharing their data with a for-profit AI sister company called Loris.ai. Eventually, the company ended the relationship after substantial backlash from its users.

While concerns around patient data persist, there are also notable clinical implications for patient care and safety, according to Jacob Ballon, MD, MPH, of Stanford University in California.

“I would not want someone to do AI therapy on its own,” he told MedPage Today, adding that people seek out psychotherapy to help with complex, sometimes life-threatening, mental health conditions. “These are serious things that people are dealing with and to leave that to an unregulated, unmonitored chatbot is irresponsible and ultimately dangerous.”

Ballon added that he doesn’t think AI models are capable of producing the nuanced expertise needed to help individual patients address their unique mental health concerns. Even if a company could train their AI chatbot on enough high-quality patient data, it would not be able to appreciate the complexity of each patient, he noted.

Despite those concerns, Torous thinks there will be growth in companies attempting to train AI models on patient data, whether it is collected ethically or not.

“There’s probably going to be this whole world where I wonder if patients are going to be pressured or cajoled or convinced to give up their [personal health data],” he said, predicting that the market for patient mental health data will only continue to grow in the coming years.

Don’t Worry, AI Isn’t Taking Over the World


In his new book, Dennis Yi Tenen presents AI as a matter of collaborative labor history.

Dennis Yi Tenen is a long-time affiliate of Columbia’s Data Science Institute and a fellow at the Berkman Klein Center for Internet & Society at Harvard.

Literary Theory for Robots, the new book by Dennis Yi Tenen, an associate professor of English and Comparative Literature, explores the history of modern machine intelligence, taking readers on a journey that includes medieval Arabic philosophy, visions of a universal language, Hollywood fiction factories, and missile defense systems trained on Russian folktales. In his reflection on the shared pasts of literature and computer science, Tenen, a former Microsoft engineer, provides crucial context for recent developments in AI.

Tenen, whose research happens at the intersection of people, text, and technology, maintains that intelligence expressed through technology should not be mistaken for a magical genie, capable of self-directed thought or action. Rather, he perceives AI as the mechanics of collaborative work. Something as simple as a spell-checker or a grammar-correction tool, embedded in every word processor, represents the culmination of a shared human effort, spanning centuries.

Smart tools, like dictionaries and grammar books, have always accompanied the acts of writing, thinking, and communicating. That these paper machines are now automated does not bring them to life. By blending history, technology, and philosophy, Literary Theory for Robots presents AI as a matter of labor history, recognizing the long-standing cooperation between authors and engineers.

Tenen discusses the book with Columbia News, as well as what he’s reading and teaching now, and who he’d invite to join him for pizza.

How did this book come about?

My research often begins with naive questions about my immediate working environment. When those features came out, I wondered: How do our phones or email programs learn to complete our sentences so well? That sense of wonder took me down the rabbit hole of machine-assisted composition and the many related tools that make it possible.

Do you think all the worry about AI taking over the world is excessive?

The term AI is often used as a synonym for technology in general. Allow me to paraphrase then: Do I worry about technology taking over the world? Not really and all the time. Humans have walked hand-in-hand with technology for millennia. Artifice defines human intelligence. It also constantly poses challenges to our well-being.

What do you think the future of AI will be?

If we look at the history of authorship and text generation, a subset of this vague AI assemblage, you see a clear trend toward intimacy. By this, I mean that years (decades, centuries) ago, finding the right word or fact would easily take days. They were literally far away! A visit to the archive could involve a trip around the world.

Over time, the Gutenberg press (invented in the 15th century) placed an encyclopedia in every home, within a hand’s reach. The search engine brought it to our fingertips, ever closer to the eye. Quicker response can only involve more direct communication, by which information systems would be entwined with the human body biologically. Hold the phone in your palm, inches away from your face, and think how much closer thought could get to that screen—a neuron’s pulse away from the word. 

What books have you read lately, and what’s next on your reading list?

On my desk right now are: Doormen by Columbia’s own Peter Bearman and Body & Soul: Notebooks of an Apprentice Boxer by Loïc Wacquant. Both books present lucid ethnographies of everyday life. I’m reading heaps of new material in preparation for a new class in the fall on conspiracy theory, rumor, and disinformation. Related to that, I am also reading everything from folk tales to Zora Neale Hurston‘s Mules and Men, various collections of urban myths, and the sociology of computational propaganda.

What are you teaching this semester?

Contemporary Civilization as always, alongside Literature in the Age of AI. Conspiracy Theory, as I mentioned earlier, and possibly a class on self-help literature are coming up in the fall.

Which three academics/scholars, dead or alive, would you invite to a dinner party, and why?

That’s so hard! But since Hurston’s name came up, I often imagine her, Franz Boas, and John Dewey out for pizza, perhaps? I’d love to be invited.

Artificial Superintelligence Could Arrive by 2027


We may not have reached artificial general intelligence (AGI) yet, but as one of the leading experts in the theoretical field claims, it may get here sooner rather than later.

During his closing remarks at this year’s Beneficial AGI Summit in Panama, computer scientist and haberdashery enthusiast Ben Goertzel said that although people most likely won’t build human-level or superhuman AI until 2029 or 2030, there’s a chance it could happen as soon as 2027.

After that, the SingularityNET founder said, AGI could then evolve rapidly into artificial superintelligence (ASI), which he defines as an AI with all the combined knowledge of human civilization.

“No one has created human-level artificial general intelligence yet; nobody has a solid knowledge of when we’re going to get there,” Goertzel told the conference audience. “I mean, there are known unknowns and probably unknown unknowns.”

“On the other hand, to me it seems quite plausible we could get to human-level AGI within, let’s say, the next three to eight years,” he added.

To be fair, Goertzel is far from alone in attempting to predict when AGI will be achieved.

Last fall, for instance, Google DeepMind co-founder Shane Legg reiterated his more than decade-old prediction that there’s a 50/50 chance that humans invent AGI by the year 2028. In a tweet from May of last year, “AI godfather” and ex-Googler Geoffrey Hinton said he now predicts, “without much confidence,” that AGI is five to 20 years away.

Best known as the creator of Sophia the humanoid robot, Goertzel has long theorized about the date of the so-called “singularity,” or the point at which AI reaches human-level intelligence and subsequently surpasses it.

Until the past few years, AGI, as Goertzel and his cohort describe it, seemed like a pipe dream, but with the large language model (LLM) advances made by OpenAI since it thrust ChatGPT upon the world in late 2022, that possibility seems ever closer — although he’s quick to point out that LLMs by themselves are not what’s going to lead to AGI.

“My own view is once you get to human-level AGI, within a few years you could get a radically superhuman AGI — unless the AGI threatens to throttle its own development out of its own conservatism,” the AI pioneer added. “I think once an AGI can introspect its own mind, then it can do engineering and science at a human or superhuman level.”

“It should be able to make a smarter AGI, then an even smarter AGI, then an intelligence explosion,” he added, presumably referring to the singularity.

Naturally, there are a lot of caveats to what Goertzel is preaching, not the least of which is that, by human standards, even a superhuman AI would not have a “mind” the way we do. Then there’s the assumption that the evolution of the technology would continue down a linear pathway, as if in a vacuum from the rest of human society and the harms we bring to the planet.

All the same, it’s a compelling theory — and given how rapidly AI has progressed just in the past few years alone, his comments shouldn’t be entirely discredited.

Preparing for AI in clinical laboratories


At some point this decade, AI tools will become ubiquitous within clinical laboratories. AI has the potential to increase the accuracy of laboratory testing and improve the quality and efficiency of testing labs’ operations and service.

Clinical laboratorians must prepare to help lead this initiative, for their knowledge will be the key to successful implementation. They need to learn how AI algorithms are developed and validated, how to justify and analyze impact from the perspective of clinical laboratory medicine, and how to implement them to best benefit the patient and the hospital. Several scientific sessions at the 2023 Association for Diagnostics and Laboratory Medicine (formerly the AACC) Annual Scientific Meeting focused on this topic. 

AI is in its infancy in medical lab development, and this was very apparent in the technical exhibition of the ADLM/AACC meeting. David McClintock, MD, Chair of the Division of Computational Pathology and AI in the Department of Laboratory Medicine and Pathology at Mayo Clinic, pointed out that only 30 of the 941 exhibitors at the meeting included the terms ‘artificial intelligence’ (AI) and/or ‘machine learning’ (ML) in their product/company descriptions on the AACC exhibitor website. Ten companies included ‘analytics’ in their description, but only four were separate analytics-based companies selling clinical lab AI/ML software. ‘This is an emerging space, just as radiology PACS was 30 years ago,’ McClintock said. ‘Now is the time to learn about it or perhaps even start developing models that can benefit your lab.’

Uses of AI/ML in the lab

There are numerous ways that both simple and complex AI tools can aid a clinical laboratory. These include: 

  • Automated spectroscopic data analysis and disease detection; multivariate analysis of disease conditions; test interpretation; 
  • Digital image analysis for microbiology, haematopathology, immunology, and forensics; 
  • Data entry automation for specific tasks and processes; 
  • Creating standardised reports for lab test results and automated entry into the Laboratory Information System (LIS); 
  • Minimising laboratory testing for inappropriate test orders, predicting test results from other available data on patient chart, and reducing redundancy and duplication of lab tests, based on prior type and date of tests already performed; 
  • Data analytics for laboratory operations planning, such as predicting volume workflow, employee staffing requirements, etc.; 
  • Identifying and alerting for abnormal test results; 
  • Auto-verification of test results for quality control.
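
As a concrete illustration of the last item above, auto-verification logic is often expressed as explicit rules applied to each result. The Python sketch below is a hedged toy version: the analytes, limits, delta-check threshold, and function names are invented placeholders, not clinical recommendations, and real deployments follow site-validated rules and published guidance.

```python
# Toy sketch of rule-based auto-verification for a single lab result.
# All ranges, thresholds, and names are illustrative placeholders.
from dataclasses import dataclass
from typing import Optional

# Hypothetical auto-release limits per analyte.
LIMITS = {
    "potassium": (2.8, 6.2),     # mmol/L
    "glucose":   (40.0, 450.0),  # mg/dL
}
DELTA_THRESHOLD = 0.25  # flag a >25% change vs. the prior result

@dataclass
class Result:
    analyte: str
    value: float
    prior_value: Optional[float] = None

def auto_verify(r: Result) -> str:
    low, high = LIMITS[r.analyte]
    # Rule 1: the value must fall inside the auto-release interval.
    if not low <= r.value <= high:
        return "hold for manual review: outside auto-release limits"
    # Rule 2: delta check against the patient's previous result.
    if r.prior_value is not None:
        if abs(r.value - r.prior_value) / r.prior_value > DELTA_THRESHOLD:
            return "hold for manual review: failed delta check"
    return "auto-release to LIS"

print(auto_verify(Result("potassium", 4.1, prior_value=4.0)))  # auto-release
print(auto_verify(Result("potassium", 6.8)))                   # hold
```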

Automating spectral analysis for kidney stones and fecal analysis

Mayo Clinic spent six years developing an AI model to automate the spectral analysis of stones passed by patients. It is based on the classification of 708 unique kidney stone types. In the first 90 days of implementation, commencing April 2023, 20,000 stones were reviewed, 40% of which were newly able to be bulk auto-released to patients’ medical records.

Before the model was implemented, the conventional workflow began with cleaning and drying the stone, after which it was ground into a fine powder and manually analyzed with FTIR spectroscopy. A technician manually entered the results into the LIS, followed by a second technician reviewing the interpretation. Only then were results uploaded to a patient’s electronic health record (EHR).

Clinical kidney stone workflow with machine learning (© ADLM)

AI tool to automate multiple processes

After evaluating how AI could improve workflow, reduce costs, and increase efficiency, the Kidney Stone lab at Mayo Clinic Rochester, in conjunction with an innovative AI team from Mayo Clinic Florida, created an AI tool to automate multiple processes following FTIR spectra analysis. The AI model was trained on 70,000 kidney stone spectra and validated with 16,491 kidney stone spectra. Quality assurance required 81,517 kidney stone spectra. 

‘This is a lot of data, which took a lot of work, and a lot of computing time,’ commented McClintock. ‘But now the process is automated and is achieving our expectations. If a stone is not complex, the AI system classifies it and then the information is automatically entered into the LIS and subsequently released to the patient’s EHR. When it identifies a stone as complex, the results produced are manually flagged for review by a technician. The lab is now saving a lot of time, which equates to tangible cost savings and opportunities for laboratory staff work reduction/redirection.’ 
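
In spirit, the routing McClintock describes is confidence-gated classification: auto-release predictions the model is sure of, and flag the rest as complex. The sketch below is a hedged toy version with synthetic "spectra"; the model, threshold, and data are placeholders rather than Mayo Clinic's implementation.

```python
# Toy sketch of confidence-gated spectrum classification: confident
# predictions auto-release to the LIS; uncertain ones go to a technician.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_per_class, n_bins = 100, 50

# Placeholder "FTIR spectra": three made-up stone classes whose mean
# spectra differ; real data would be preprocessed absorbance values.
means = rng.normal(size=(3, n_bins))
X = np.vstack([m + 0.5 * rng.normal(size=(n_per_class, n_bins)) for m in means])
y = np.repeat([0, 1, 2], n_per_class)  # 0..2 = toy stone types

clf = LogisticRegression(max_iter=1000).fit(X, y)

def route(spectrum, threshold=0.95):
    """Auto-release confident calls; flag ambiguous spectra for review."""
    probs = clf.predict_proba(spectrum.reshape(1, -1))[0]
    if probs.max() >= threshold:
        return f"auto-release: stone type {probs.argmax()} (p={probs.max():.2f})"
    return "flag as complex for technician review"

print(route(means[1]))                         # clean spectrum: confident
print(route(0.5 * means[0] + 0.5 * means[2]))  # mixed, "complex" spectrum
```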

Other applications of AI in clinical labs also exist, such as commercially developed AI tools in clinical microbiology to detect fecal ova and parasites (O&P). For most labs, up to 95% of O&P cases can be negative, and thus the process of reading slides can be monotonous. The investigational AI-assisted screening tool (Techcyte, Orem, UT) uses a convolutional neural network to identify and count parasite cysts and trophozoites, yeast, and red and white blood cells, and groups them by class. Techcyte claims the tool is five times more sensitive than manual examination, with a sensitivity of 98.9%. It produces findings within 30 seconds instead of the average five minutes, automatically uploading negative findings to the LIS. Positive samples are flagged for technologist review and assessment. At Mayo Clinic, this test has just been implemented, with positive initial impressions from laboratory staff, who can now review slides remotely, improving the employee experience.

What clinical lab managers need to think about

‘Don’t get enamoured with AI for your lab,’ McClintock cautioned. ‘Always remember a clinical lab’s primary objective: to deliver the right information to the right person at the right place and right time in the right way. No system today can integrate all potential outputs of AI tools,’ he emphasized. ‘LIS, EHR, and middleware solutions take considerable effort to integrate with any AI tool without encumbering the pathologist, laboratorian, or clinician. For starters, you need to think about data governance, data pipelines, regulatory guidelines, ethical review, custom programming and coding, computational computing power, either locally or in the cloud, cybersecurity, and risk management. 

‘Don’t forget the cost and availability of AI maintenance, support and quality control, which all require new IT support skills. There are also AI-specific issues, such as algorithm drift. And then, in the end, will the AI tool save you enough money, or at some point be reimbursable, so that it can pay for itself?’ 

Experts recommend that lab managers focus on generating clinical evidence for AI benefits and understand the barriers and challenges to implementation when they select a tool. New training on AI is essential. In general, experts recommend implementing the new tool while keeping the existing system functioning, to give practitioners time to learn and get comfortable with AI while maintaining the status quo as back-up.

McClintock also described a new framework for the clinical AI life cycle, from idea generation to final validation, go-live, and system maintenance. He encouraged attendees to adopt a similar AI life cycle and embrace the potential of AI in their labs.

Co-presenter Christopher Lee Williams, MD, Assistant Professor of Pathology at the University of Oklahoma Health Sciences Center, concurred. ‘Will AI be replacing staff in labs?’ he asked rhetorically. ‘Probably not. The practice of laboratory medicine has been constantly evolving, often due to increasing automation. I think that AI will be another tool in our tool bag, to aid in efficiency and quality control. With a steady increase in an aging population in the United States, we are going to need all the help we can get.’

Profiles: 

David McClintock, MD, is the Chair of the Division of Computational Pathology and Artificial Intelligence within the Department of Laboratory Medicine and Pathology at Mayo Clinic in Rochester, Minnesota, USA. His primary clinical interests include clinical informatics, laboratory workflow optimisation, digital pathology implementation, analytics, and clinical ML/AI model deployment. His research interests include understanding the role and effects of digital pathology within the clinical laboratories and the use of AI and ML for improved diagnostics, more efficient workflows, and better patient outcomes. 

Christopher Lee Williams, MD, is the Director of Informatics in the Department of Pathology at the University of Oklahoma Health Sciences Center in Oklahoma City, where he also serves as an Assistant Professor. Dr Williams’ current research interests include how to operationalise AI tools for analysis and reporting in the laboratory setting, and in optimizing UI/UX design for laboratory workflows.

AI Reveals Brain Oscillations for Memory and Disease


Summary: A recent study showcases a significant leap in the study of brain oscillations, particularly ripples, which are crucial for memory organization and are affected in disorders like epilepsy and Alzheimer’s. Researchers have developed a toolbox of AI models trained on rodent EEG data to automate and enhance the detection of these oscillations, proving their efficacy on data from non-human primates.

This breakthrough, stemming from a collaborative hackathon, offers over a hundred optimized machine learning models, including support vector machines and convolutional neural networks, freely available to the scientific community. This development opens new avenues in neurotechnology applications, especially in diagnosing and understanding neurological disorders.

Key Facts:

  1. AI-Driven Innovation: The study introduces a toolbox of AI models capable of detecting brain ripples, key in memory organization and neurological diseases.
  2. Cross-Species Application: Initially trained on rodent data, these models have been successfully tested on non-human primate EEG data, indicating potential for human application.
  3. Open-Source Contribution: Over a hundred machine learning models from the project are now openly available for research use and further development, demonstrating the collaborative spirit of the scientific community.

Source: CSIC

The study of brain oscillations has advanced our understanding of brain function. Ripples are a type of fast oscillations underlying the organization of memories. They are affected in neurological disorders such as epilepsy and Alzheimer’s.

For this reason, they are considered an electroencephalographic (EEG) biomarker. However, ripples exhibit various waveforms and properties that can be missed by standard spectral methods.

Recently, the neuroscience community called for the need to better automate, harmonize, and improve the detection of ripples across a range of tasks and species. In the study, the authors used recordings obtained in laboratory mice to train a toolbox of machine learning models.

“We have tested the ability of these models using data from non-human primates that were collected at Vanderbilt University (Nashville, USA) by Saman Abbaspoor and lab leader Kari Hoffman as part of the Brain Initiative.

“We found that it is possible to use rodent EEG data to train AI algorithms that can be applied to data from primates and possibly humans, provided the same type of recording techniques are used,” De la Prida explains.

The model toolbox emerged as a result of a hackathon, which resulted in a short list for the best detection models. These architectures were then harmonized and optimized by the authors who now provide all codes and data openly to the research community.

Models include some of the best-known supervised learning architectures, such as support vector machines, decision trees, and convolutional neural networks.
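
As a hedged illustration of what one such supervised detector might look like (synthetic signals and features; the authors' real models and data are released separately), the sketch below classifies short LFP windows with a support vector machine based on ripple-band power.

```python
# Toy sketch of window-based ripple detection with an SVM.
# Synthetic signals stand in for real hippocampal LFP recordings.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
fs = 1250   # sampling rate (Hz), a common choice for LFP
win = 64    # samples per window (~50 ms)

def make_window(has_ripple):
    """Noise window, optionally with a ~150 Hz ripple burst added."""
    t = np.arange(win) / fs
    x = rng.normal(size=win)
    if has_ripple:
        x += 2.0 * np.sin(2 * np.pi * 150 * t) * np.hanning(win)
    return x

labels = rng.integers(0, 2, size=400)
X = np.array([make_window(l) for l in labels])

def features(x):
    """Simple spectral features: ripple-band power and its fraction."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(win, d=1 / fs)
    band = spec[(freqs > 100) & (freqs < 250)].sum()
    return [band, band / spec.sum()]

F = np.array([features(x) for x in X])
clf = SVC(probability=True).fit(F, labels)
print("training accuracy:", clf.score(F, labels))
```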

“We have identified more than one hundred possible models from the different architectures that are now available for application or retraining by other researchers,” commented Andrea Navas Olivé and Adrián Rubio, the first authors of the work.

“This bank of AI models will provide new applications in the field of neurotechnologies and can be useful for detection and analysis of high-frequency oscillations in pathologies such as epilepsy, where they are considered clinical markers,” concludes De la Prida, who is part of the CSIC’s AI-HUB connection aimed at advancing the use of AI and its applications.


Abstract

A machine learning toolbox for the analysis of sharp-wave ripples reveals common waveform features across species

The study of sharp-wave ripples has advanced our understanding of memory function, and their alteration in neurological conditions such as epilepsy is considered a biomarker of dysfunction.

Sharp-wave ripples exhibit diverse waveforms and properties that cannot be fully characterized by spectral methods alone.

Here, we describe a toolbox of machine-learning models for automatic detection and analysis of these events.

The machine-learning architectures, which resulted from a crowdsourced hackathon, are able to capture a wealth of ripple features recorded in the dorsal hippocampus of mice across awake and sleep conditions. When applied to data from the macaque hippocampus, these models are able to generalize detection and reveal shared properties across species.

We hereby provide a user-friendly open-source toolbox for model use and extension, which can help to accelerate and standardize analysis of sharp-wave ripples, lowering the threshold for its adoption in biomedical applications.

AI Outshines Humans in Creative Thinking


Summary: ChatGPT-4 was pitted against 151 human participants across three divergent thinking tests, revealing that the AI demonstrated a higher level of creativity. The tests, designed to assess the ability to generate unique solutions, showed GPT-4 providing more original and elaborate answers.

The study underscores the evolving capabilities of AI in creative domains, yet acknowledges the limitations of AI’s agency and the challenges in measuring creativity. While AI shows potential as a tool for enhancing human creativity, questions remain about its role and the future integration of AI in creative processes.

Key Facts:

  1. AI’s Creative Edge: ChatGPT-4 outperformed human participants in divergent thinking tasks, showcasing superior originality and elaboration in responses.
  2. Study’s Caveats: Despite AI’s impressive performance, researchers highlight AI’s lack of agency and the need for human interaction to activate its creative potential.
  3. Future of AI in Creativity: The findings suggest AI could serve as an inspirational tool, aiding human creativity and overcoming conceptual fixedness, yet the true extent of AI’s ability to replace human creativity is still uncertain.

Source: University of Arkansas

Score another one for artificial intelligence. In a recent study, 151 human participants were pitted against ChatGPT-4 in three tests designed to measure divergent thinking, which is considered to be an indicator of creative thought.

Divergent thinking is characterized by the ability to generate a unique solution to a question that does not have one expected solution, such as “What is the best way to avoid talking about politics with my parents?” In the study, GPT-4 provided more original and elaborate answers than the human participants.

The study, “The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks,” was published in Scientific Reports and authored by U of A Ph.D. students in psychological science Kent F. Hubert and Kim N. Awa, as well as Darya L. Zabelina, an assistant professor of psychological science at the U of A and director of the Mechanisms of Creative Cognition and Attention Lab.

The three tests utilized were the Alternative Uses Task, which asks participants to come up with creative uses for everyday objects like a rope or a fork; the Consequences Task, which invites participants to imagine possible outcomes of hypothetical situations, like “what if humans no longer needed sleep?”; and the Divergent Associations Task, which asks participants to generate 10 nouns that are as semantically distant as possible. For instance, there is not much semantic distance between “dog” and “cat” while there is a great deal between words like “cat” and “ontology.” 

Answers were evaluated for the number of responses, length of response and semantic difference between words. Ultimately, the authors found that “Overall, GPT-4 was more original and elaborate than humans on each of the divergent thinking tasks, even when controlling for fluency of responses. In other words, GPT-4 demonstrated higher creative potential across an entire battery of divergent thinking tasks.” 
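
The semantic-difference scoring mentioned above is typically computed with word embeddings: each word maps to a vector, and distance is one minus the cosine similarity of the vectors. The sketch below uses tiny made-up vectors purely to show the computation; published scoring for the Divergent Associations Task uses pretrained embeddings such as GloVe.

```python
# Toy sketch of semantic-distance scoring with word embeddings.
# These 3-d vectors are invented just to demonstrate the arithmetic;
# real scoring uses pretrained vectors (e.g., GloVe).
import numpy as np

vectors = {
    "dog":      np.array([0.9, 0.8, 0.1]),
    "cat":      np.array([0.8, 0.9, 0.1]),
    "ontology": np.array([0.1, 0.2, 0.9]),
}

def semantic_distance(w1, w2):
    """1 - cosine similarity: higher means more semantically distant."""
    a, b = vectors[w1], vectors[w2]
    cos = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos

print(f"dog vs cat:      {semantic_distance('dog', 'cat'):.2f}")       # small
print(f"cat vs ontology: {semantic_distance('cat', 'ontology'):.2f}")  # large

# A DAT-style score averages pairwise distances across the word list.
words = list(vectors)
pairs = [(a, b) for i, a in enumerate(words) for b in words[i + 1:]]
print("mean pairwise distance:",
      round(np.mean([semantic_distance(a, b) for a, b in pairs]), 2))
```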

This finding does come with some caveats. The authors state, “It is important to note that the measures used in this study are all measures of creative potential, but the involvement in creative activities or achievements are another aspect of measuring a person’s creativity.”

The purpose of the study was to examine human-level creative potential, not necessarily people who may have established creative credentials. 

Hubert and Awa further note that “AI, unlike humans, does not have agency” and is “dependent on the assistance of a human user. Therefore, the creative potential of AI is in a constant state of stagnation unless prompted.” 

Also, the researchers did not evaluate the appropriateness of GPT-4’s responses. So while the AI may have provided more responses, and more original ones, human participants may have felt constrained by the need for their responses to be grounded in the real world.

Awa also acknowledged that the human motivation to write elaborate answers may not have been high, and said there are additional questions about “how do you operationalize creativity? Can we really say that using these tests for humans is generalizable to different people? Is it assessing a broad array of creative thinking? So I think it has us critically examining what are the most popular measures of divergent thinking.”

Whether the tests are perfect measures of human creative potential is not really the point. The point is that large language models are rapidly progressing and outperforming humans in ways they have not before. Whether they are a threat to replace human creativity remains to be seen.

For now, the authors write: “Moving forward, future possibilities of AI acting as a tool of inspiration, as an aid in a person’s creative process or to overcome fixedness is promising.”


Abstract

The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks

The emergence of publicly accessible artificial intelligence (AI) large language models such as ChatGPT has given rise to global conversations on the implications of AI capabilities.

Emergent research on AI has challenged the assumption that creative potential is a uniquely human trait; thus, there seems to be a disconnect between human perception versus what AI is objectively capable of creating.

Here, we aimed to assess the creative potential of humans in comparison to AI. In the present study, human participants (N = 151) and GPT-4 provided responses for the Alternative Uses Task, Consequences Task, and Divergent Associations Task.

We found that AI was robustly more creative along each divergent thinking measurement in comparison to the human counterparts. Specifically, when controlling for fluency of responses, AI was more original and elaborate.

The present findings suggest that the current state of AI language models demonstrate higher creative potential than human respondents.

AI Helps Predict Survival across Dementia Types


Scientists at the Icahn School of Medicine at Mount Sinai and colleagues say they have harnessed the power of machine learning to identify key predictors of mortality in dementia patients. Their study “Machine learning models identify predictive features of patient mortality across dementia types,” published online in Communications Medicine, addresses critical challenges in dementia care by pinpointing patients at high risk of near-term death and reportedly uncovers the factors that drive this risk.

Unlike previous studies that focused on diagnosing dementia, this research delves into predicting patient prognosis, shedding light on mortality risks and contributing factors in various kinds of dementia.

Major cause of death

Dementia has emerged as a major cause of death in societies with increasingly aging populations. However, predicting the exact timing of death in dementia cases is challenging due to the variable progression of cognitive decline affecting the body’s normal functions, say the researchers.

“We developed machine-learning models predicting dementia patient mortality at four different survival thresholds using a dataset of 45,275 unique participants and 163,782 visit records from the U.S. National Alzheimer’s Coordinating Center (NACC). We built multi-factorial XGBoost models using a small set of mortality predictors and conducted stratified analyses with dementia type-specific models,” write the investigators.

“Our models achieved an area under the receiver operating characteristic curve (AUC-ROC) of over 0.82 utilizing nine parsimonious features for all 1-, 3-, 5-, and 10-year thresholds. The trained models mainly consisted of dementia-related predictors such as specific neuropsychological tests and were minimally affected by other age-related causes of death, e.g., stroke and cardiovascular conditions.

“Notably, stratified analyses revealed shared and distinct predictors of mortality across eight dementia types. Unsupervised clustering of mortality predictors grouped vascular dementia with depression and Lewy body dementia with frontotemporal lobar dementia.

“This study demonstrates the feasibility of flagging dementia patients at risk of mortality for personalized clinical management. Parsimonious machine-learning models can be used to predict dementia patient mortality with a limited set of clinical features, and dementia type-specific models can be applied to heterogeneous dementia patient populations.”
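
As a hedged sketch of the setup described in the quote above, one can train a gradient-boosted classifier for a fixed survival threshold and evaluate it with AUC-ROC. Everything below is synthetic and simplified; the features, coefficients, and data are invented stand-ins, not the NACC dataset or the authors' models.

```python
# Toy sketch of a threshold-based mortality classifier evaluated by
# AUC-ROC, as in the study. Data here are synthetic placeholders.
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Invented features loosely echoing a parsimonious clinical set:
# age, a neuropsychological test score, and a functional rating.
age = rng.normal(75, 8, n)
neuro_test = rng.normal(20, 6, n)  # lower = worse performance
functional = rng.normal(5, 2, n)
X = np.column_stack([age, neuro_test, functional])

# Synthetic label: death within the threshold (e.g., 3 years), with
# risk driven mainly by the cognitive score, echoing the paper's
# finding that dementia-related predictors dominate.
logit = 0.03 * (age - 75) - 0.25 * (neuro_test - 20) + 0.1 * functional
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X_tr, y_tr)
print("AUC-ROC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```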

Credit: Zhang & Song et al, Communications Medicine

“Our findings are significant as they illustrate the potential of machine learning models to accurately anticipate mortality risk in dementia patients over varying timeframes,” said corresponding author Kuan-lin Huang, PhD, assistant professor of genetics and genomic sciences at Icahn Mount Sinai. “By pinpointing a concise set of clinical features, including performance on neuropsychological and other available testing, our models empower health care providers to make more informed decisions about patient care, potentially leading to more tailored and timely interventions.”

The study also found that neuropsychological test results were a better predictor of mortality risk in dementia patients than age-related factors such as cancer and heart disease, underscoring dementia’s significant role in mortality among those with neurodegenerative conditions.

“The implications of our research extend beyond clinical practice, as it underscores the value of machine learning in unraveling the complexities of diseases like dementia,” continued Huang.

“This study lays the groundwork for future investigations into predictive modeling in dementia care. However, while machine learning holds great promise for improving dementia care, it’s important to remember that these models aren’t crystal balls for individual outcomes. Many factors, both personal and medical, shape a patient’s journey.”

Next, the research team plans to refine their models by incorporating treatment effects and genetic data and exploring advanced deep-learning techniques for even more precise predictions.

Given the aging population, dementia has emerged as an increasingly pressing public health concern, ranking as the seventh leading cause of death and the fourth most burdensome disease or injury in the United States in 2016, based on years of life lost. As of 2022, Alzheimer’s and other dementias cost an estimated $1 trillion annually, impacting approximately 6.5 million Americans and 57.4 million people worldwide, with projections suggesting a tripling by 2050.