Can this AI Tool Predict Your Death? Maybe, But Don’t Panic


Amid the machine-learning boom, model developers have built an all-purpose digital oracle from a trove of big data

It may sound like fantasy or fiction, but people predict the future all the time. Real-world fortune tellers—we call them actuaries and meteorologists—have successfully used computer models for years. And today’s accelerating advances in machine learning are quickly upgrading their digital crystal balls. Now a new artificial intelligence system that treats human lives like language may be able to competently guess whether you’ll die within a certain period, among other life details, according to a recent study in Nature Computational Science.

The study team developed a machine-learning model called life2vec that can make general predictions about the details and course of people’s lives, such as forecasts related to death, international moves and personality traits. The model draws from data on millions of residents of Denmark, including details about birth dates, sex, employment, location and use of the country’s universal health care system. By the study’s metrics, the new model was more than 78 percent accurate at predicting mortality in the research population over a four-year period, and it significantly outperformed other predictive methods such as an actuarial table and various machine-learning tools. In a separate test, life2vec also predicted whether people would move out of Denmark over the same period with about 73 percent accuracy, per one study metric. The researchers further used life2vec to predict people’s self-reported responses to a personality questionnaire, and they found promising early signs that the model could connect personality traits with life events.

The study demonstrates an exciting new approach to predicting and analyzing the trajectory of people’s lives, says Matthew Salganik, a professor of sociology at Princeton University, who researches computational social science and authored the book Bit by Bit: Social Research in the Digital Age. The life2vec developers “use a very different style that, as far as I know, no one has used before,” he says.

The new tool works in a peculiar way. There are lots of different types of machine-learning models that have different underlying architectures and are understood to be useful for different purposes. For example, there are models that help robots interpret camera inputs and others that help computers spit out images. Life2vec is based on the same type of architecture that underlies popular AI chatbots such as OpenAI’s ChatGPT and Google’s Bard. Specifically, the new predictive model is closest to BERT, a language model introduced by Google in 2018. “We took a principle that has been developed for language modeling … and apply it to some really, really, really interesting sequence data about human beings,” says study author Sune Lehmann, a professor of networks and complexity science at the Technical University of Denmark.

Given a chain of information, usually in the form of written text, these models make predictions by translating inputs into mathematical vectors and acting like a turbocharged autocomplete process that fills in the next section in accordance with learned patterns.

To get a language processing tool to make predictions about people’s futures, Lehmann and his colleagues processed individuals’ data into unique time lines composed of events such as salary changes and hospitalizations—with specific events represented as digital “tokens” that the computer could recognize. Because the training data capture so much about people and the model architecture is so flexible, the researchers suggest life2vec offers a foundation that could be easily tweaked and fine-tuned to make predictions about many still-unexplored aspects of human life.
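
To make that idea concrete, here is a minimal, purely illustrative sketch of how life events might be turned into a token sequence for a BERT-style model. The token names, event categories and masking setup below are invented for this example; they are not life2vec’s actual vocabulary or training code.

```python
# Illustrative sketch only: token names and event categories are invented,
# not taken from life2vec.

# A synthetic person's records, ordered in time.
timeline = [
    {"year": 2010, "event": "JOB_carpenter", "income_decile": 4},
    {"year": 2014, "event": "MOVE_copenhagen"},
    {"year": 2016, "event": "HOSPITAL_fracture"},
    {"year": 2019, "event": "JOB_foreman", "income_decile": 6},
]

def to_tokens(timeline):
    """Flatten structured life-event records into one token sequence."""
    tokens = ["[CLS]"]
    for record in timeline:
        tokens.append(f"YEAR_{record['year']}")
        tokens.append(record["event"])
        if "income_decile" in record:
            tokens.append(f"INCOME_D{record['income_decile']}")
    tokens.append("[SEP]")
    return tokens

tokens = to_tokens(timeline)
vocab = {tok: idx for idx, tok in enumerate(sorted(set(tokens)))}
token_ids = [vocab[t] for t in tokens]
print(tokens)
print(token_ids)

# A BERT-style model would first be trained to fill in masked tokens across
# millions of such sequences, then fine-tuned on a label such as
# "survived the next four years."
```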

Lehmann says medical professionals have already contacted him to ask for help in developing health-related versions of life2vec—including one that could help illuminate population-level risk factors for rare diseases, for example. He hopes to use the tool to detect previously unknown relationships between the world and human life outcomes, potentially exploring questions such as “How do your relationships impact your quality of life?” and “What are the most important factors in determining salary or early death?” The tool could also tease out hidden societal biases, such as unexpected links between a person’s professional advancement and their age or country of origin.

For now, though, there are some serious limitations. Lehmann notes that the model’s data are specific to Denmark. And many gaps remain in the information that was used. Though extensive, it doesn’t capture everything relevant to a person’s mortality risk or life trajectory, and Lehmann points out that some groups of people are less likely to have extensive health and employment records.

One of the biggest caveats is that the study’s accuracy measures aren’t necessarily robust. They’re more proof of concept than they are proof that life2vec can correctly predict if a given person is going to die in a given time period, multiple sources say.

Looking at the study’s statistical analyses, Christina Silcox, research director for digital health at the Duke-Margolis Center for Health Policy, says she wouldn’t put too much stock in life2vec’s individual four-year mortality predictions. “I would not quit my job and go to the Bahamas based on this,” she says, noting that this isn’t a critique of Lehmann and his co-authors’ methods so much as an intrinsic limitation of the field of life outcome prediction.

It’s difficult to know the best way to assess the accuracy of a tool like this because there’s nothing else quite comparable out there, Salganik says. Individual mortality is especially hard to evaluate because, though everybody eventually dies, most young and middle-aged people survive from year to year. Death is a relatively uncommon occurrence among the under-65 age cohort covered in the study. If you simply guess that everyone in a group of people between the ages of 35 and 65 living in Denmark (the study population) will survive from year to year, you’ve already got a pretty accurate death forecast. Life2vec did perform significantly better than that null guess, according to the study, but Salganik says it’s hard to determine exactly how well it does relative to reality.
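
A toy calculation shows why a raw accuracy number is hard to interpret on its own. The mortality rate below is purely hypothetical, chosen only to illustrate the class-imbalance point; it is not a figure from the study.

```python
# Hypothetical numbers for illustration only (not the study's actual figures).
cohort_size = 100_000
deaths = 4_000  # assume 4% of the cohort dies during the follow-up window

# Null model: predict "survives" for every single person.
null_accuracy = (cohort_size - deaths) / cohort_size
print(f"'Everyone survives' baseline accuracy: {null_accuracy:.1%}")  # 96.0%

# Against such a lopsided baseline, raw accuracy says little by itself, which is
# why prediction studies often also report imbalance-aware metrics (for example,
# balanced accuracy or Matthews correlation) and compare against null models.
```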

Michael Ludkovski, a professor of statistics and applied probability at the University of California, Santa Barbara, agrees. “I have a hard time interpreting what the results really mean,” he says. Most of his work has been in actuarial science, or the prediction of risk, and he says the life2vec results are “speaking in a language different from how actuaries talk.” For instance, actuarial predictions assign a risk score, not a binary prediction of dead or not dead, Ludkovski says—and those risk scores account for uncertainty in a way that life2vec doesn’t.

There are also major ethical considerations, Silcox notes. A tool like this could obviously cause harm if it were misapplied. Algorithmic bias is a real risk, and “AI tools need to be very specifically tested for the problem they’re trying to solve,” she says. It would be crucial to thoroughly assess life2vec for every new use and to constantly monitor for common flaws such as data drift—in which past conditions that were reflected in training data no longer apply (after important medical advances, for instance).

The researchers acknowledge they’ve waded into fraught territory. Their study emphasizes that Denmark has strong privacy protections and antidiscrimination laws in place. Academics, government agencies and other researchers granted access to life2vec will have to ensure data aren’t leaked or used for nonscientific purposes. Using life2vec “for automated individual decision-making, profiling or accessing individual-level data … is strictly disallowed,” the authors wrote in the paper. “Part of why I feel comfortable with this is that I trust the Danish government,” Lehmann says. He would “not feel comfortable” developing such a model in the U.S., where there is no federal data privacy law.

Yet Lehmann adds that equivalently invasive and powerful machine-learning tools are likely already out there. Some of these tools even verge on the dystopian concept of “precrime” laid out in Philip K. Dick’s 1956 novella The Minority Report (and the 2002 blockbuster science-fiction film based on it). In the U.S. many courts use algorithmic tools to make sentencing decisions. Law enforcement agencies use predictive policing software to decide how to distribute officers and resources. Even the Internal Revenue Service relies on machine learning to issue audits. In all of these examples, bias and inaccuracy have been recurring problems.

In the private sphere tech companies use advanced algorithmic predictions and the incredible amounts of data about users they collect to forecast consumer behavior and maximize engagement time. But the exact details of government and corporate tools alike are kept behind closed doors.

By creating a formidable AI predictive tool that is accessible to academic researchers, Lehmann says he hopes to promote transparency and understanding in the age of prediction that’s already underway. “We can start talking about it, and we can start deciding how we want to use it: what’s possible, what’s right and what we should leave alone,” he says.

“I hope,” Lehmann adds, “that this can be part of a discussion that helps move us in the direction of utopia and away from dystopia.”

An AI Tool That Can Help Forecast Viral Variants


EVEscape predicts future viral mutations, new variants using evolutionary, biological information

At a glance:

  • New AI tool called EVEscape uses evolutionary and biological information to predict how a virus could change to escape the immune system.
  • The tool successfully predicted the most concerning new variants that occurred during the COVID-19 pandemic.
  • Researchers say the tool can help inform the development of vaccines and therapies for SARS-CoV-2 and other rapidly mutating viruses.

The COVID-19 pandemic seemed like a never-ending parade of SARS-CoV-2 variants, each equipped with new ways to evade the immune system, leaving the world bracing for what would come next.

But what if there were a way to make predictions about new viral variants before they actually emerge?

A new artificial intelligence tool named EVEscape, developed by researchers at Harvard Medical School and the University of Oxford, can do just that.

The tool has two elements: a model of evolutionary sequences that predicts changes that can occur to a virus, and detailed biological and structural information about the virus. Together, they allow EVEscape to make predictions about the variants most likely to occur as the virus evolves.

In a study published Oct. 11 in Nature, the researchers show that had it been deployed at the start of the COVID-19 pandemic, EVEscape would have predicted the most frequent mutations and identified the most concerning variants for SARS-CoV-2. The tool also made accurate predictions about other viruses, including HIV and influenza.

The researchers are now using EVEscape to look ahead at SARS-CoV-2 and predict future variants of concern; every two weeks, they release a ranking of new variants. Eventually, this information could help scientists develop more effective vaccines and therapies. The team is also broadening the work to include more viruses.

“We want to know if we can anticipate the variation in viruses and forecast new variants — because if we can, that’s going to be extremely important for designing vaccines and therapies,” said senior author Debora Marks, professor of systems biology in the Blavatnik Institute at HMS.

From EVE to EVEscape

The researchers first developed EVE, short for evolutionary model of variant effect, in a different context: gene mutations that cause human diseases. The core of EVE is a generative model that learns to predict the functionality of proteins based on large-scale evolutionary data across species.

In a previous study, EVE allowed researchers to discern disease-causing from benign mutations in genes linked to various conditions, including cancers and heart rhythm disorders.

“You can use these generative models to learn amazing things from evolutionary information — the data have hidden secrets that you can reveal,” Marks said.

As the COVID-19 pandemic hit and progressed, the world was caught off guard by SARS-CoV-2’s impressive ability to evolve. The virus kept morphing, changing its structure in ways subtle and substantial to slip past vaccines and therapies designed to defeat it.

“We underestimate the ability of things to mutate when they’re under pressure and have a large population in which to do so,” Marks said. “Viruses are flexible — it’s almost like they’ve evolved to evolve.”

Watching the pandemic unfold, Marks and her team saw an opportunity to help: They rebuilt EVE into a new tool called EVEscape for the purpose of predicting viral variants.

They took the generative model from EVE — which can predict mutations in viral proteins that won’t interfere with the virus’s function — and added biological and structural details about the virus, including information about regions most easily targeted by the immune system.
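
The passage above describes pairing a sequence model’s view of what the virus can tolerate with knowledge of what the immune system can reach. The sketch below is a minimal illustration of that kind of combination; the multiplicative form, the input names and the numbers are assumptions made for illustration, not the published EVEscape scoring formula.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def escape_score(fitness_logratio, antibody_accessibility, chem_dissimilarity):
    """Illustrative combination of per-mutation signals into one escape score.

    fitness_logratio       -- sequence-model score for how tolerable the mutation
                              is for the protein's function (higher = more tolerable)
    antibody_accessibility -- how exposed the mutated site is to antibodies (0..1)
    chem_dissimilarity     -- how much the substitution changes the residue's
                              chemistry (0..1)
    The multiplicative form and the inputs are assumptions for illustration,
    not the published EVEscape formula.
    """
    return sigmoid(fitness_logratio) * antibody_accessibility * chem_dissimilarity

# Two hypothetical spike-protein mutations:
print(escape_score(fitness_logratio=1.2, antibody_accessibility=0.9, chem_dissimilarity=0.7))
print(escape_score(fitness_logratio=-2.5, antibody_accessibility=0.9, chem_dissimilarity=0.7))
# The second mutation is heavily penalized: even if it would dodge antibodies,
# the sequence model says it likely breaks the protein's function.
```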

“We’re taking biological information about how the immune system works and layering it on our learnings from the broader evolutionary history of the virus,” explained co-lead author Nicole Thadani, a former research fellow in the Marks lab.

Such an approach, Marks emphasized, means that EVEscape has a flexible framework that can be easily adapted to any virus.

Turning back the clock

In the new study, the team turned the clock back to January 2020, just before the COVID-19 pandemic started. Then they asked EVEscape to predict what would happen with SARS-CoV-2.

“It’s as if you have a time machine. You go back to day one, and you say, I only have that data, what am I going to say is happening?” Marks said.

EVEscape predicted which SARS-CoV-2 mutations would occur during the pandemic with accuracy similar to that of experimental approaches that test the virus’s ability to bind to antibodies made by the immune system. EVEscape outperformed experimental approaches in predicting which of those mutations would be most prevalent. More importantly, EVEscape could make its predictions more quickly and efficiently than lab-based testing, since it didn’t need to wait for relevant antibodies to arise in the population and become available for testing.

Additionally, EVEscape predicted which antibody-based therapies would lose their efficacy as the pandemic progressed and the virus developed mutations to escape these treatments.

The tool was also able to sift through the tens of thousands of new SARS-CoV-2 variants produced each week and identify the ones most likely to become problematic.

“By rapidly determining the threat level of new variants, we can help inform earlier public health decisions,” said co-lead author Sarah Gurev, a graduate student in the Marks lab from the Electrical Engineering and Computer Science program at MIT.

In a final step, the team demonstrated that EVEscape could be generalized to other common viruses, including HIV and influenza.

Designing mutation-proof vaccines and therapies

The team is now applying EVEscape to SARS-CoV-2 in real time, using all of the information available to make predictions about how it might evolve next.

The researchers publish a biweekly ranking of new SARS-CoV-2 variants on their website and share this information with entities such as the World Health Organization. The complete code for EVEscape is also freely available online.

They are also testing EVEscape on understudied viruses such as Lassa and Nipah, two pathogens of pandemic potential for which relatively little information exists.

Such less-studied viruses can have a huge impact on human health across the globe, the researchers noted.

Another important application of EVEscape would be to evaluate vaccines and therapies against current and future viral variants. The ability to do so can help scientists design treatments that are able to withstand the escape mechanisms a virus acquires.

“Historically, vaccine and therapeutic design has been retrospective, slow, and tied to the exact sequences known about a given virus,” Thadani said.

Noor Youssef, a research fellow in the Marks lab, added, “We want to figure out how we can actually design vaccines and therapies that are future-proof.”

AI Tool Interprets Digital Pathology Images to Identify Patients Who Would Benefit From Immunotherapy


A new artificial intelligence (AI) tool that interprets medical images with unprecedented clarity could allow time-strapped clinicians to dedicate their attention to critical aspects of disease diagnosis and image interpretation.

The tool, called iStar (Inferring Super-Resolution Tissue Architecture), was developed by researchers at the Perelman School of Medicine at the University of Pennsylvania, who believe it can help clinicians diagnose and better treat cancers that might otherwise go undetected. The imaging technique provides both highly detailed views of individual cells and a broader look at the full spectrum of how people’s genes operate, which would allow doctors and researchers to see cancer cells that might otherwise have been virtually invisible. The tool can also be used to determine whether safe margins were achieved in cancer surgeries and to automatically annotate microscopic images, paving the way for disease diagnosis at the molecular level.

A paper on the method, led by Daiwei “David” Zhang, PhD, a research associate, and Mingyao Li, PhD, a professor of Biostatistics and Digital Pathology, was published today in Nature Biotechnology.

Li said that iStar has the ability to automatically detect critical anti-tumor immune formations called “tertiary lymphoid structures,” whose presence correlates with a patient’s likely survival and favorable response to immunotherapy, which is often given for cancer and requires high precision in patient selection. This means, Li said, that iStar could be a powerful tool for determining which patients would benefit most from immunotherapy.

iStar was developed within spatial transcriptomics, a relatively new field used to map gene activities within the space of tissues. Li and her colleagues adapted a machine learning tool called the Hierarchical Vision Transformer and trained it on standard tissue images. It breaks an image down in stages, starting small and looking for fine details, then moving up and “grasping broader tissue patterns,” according to Li. A network within iStar then takes the information extracted by the Hierarchical Vision Transformer and uses it to predict gene activities, often at near-single-cell resolution.
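
As a rough illustration of the coarse-to-fine idea described here (summarize an image patch at more than one scale, then learn a read-out from those features to gene activity), the following sketch uses random placeholder data and a simple least-squares read-out. The real iStar pipeline uses a Hierarchical Vision Transformer and is far more sophisticated; every name and number below is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def hierarchical_features(patch):
    """Toy stand-in for a hierarchical feature extractor.

    Summarizes a 16x16 image patch at a coarse scale (4x4 block averages)
    and keeps the fine-scale pixels, loosely mimicking "broad patterns plus
    fine detail." The real iStar uses a Hierarchical Vision Transformer.
    """
    coarse = patch.reshape(4, 4, 4, 4).mean(axis=(1, 3))  # broad tissue pattern
    fine = patch                                          # local detail
    return np.concatenate([coarse.ravel(), fine.ravel()])

# Placeholder training data: image patches paired with gene-expression profiles
# measured at matching spatial spots (all values random, for illustration only).
n_spots, n_genes = 200, 50
patches = rng.random((n_spots, 16, 16))
expression = rng.random((n_spots, n_genes))

X = np.stack([hierarchical_features(p) for p in patches])
W, *_ = np.linalg.lstsq(X, expression, rcond=None)  # linear read-out to gene activity

# Predict gene activity for a new, unmeasured patch, such as a finer tile of the
# same image, which is the sense in which such tools "super-resolve" expression.
new_patch = rng.random((16, 16))
predicted_activity = hierarchical_features(new_patch) @ W
print(predicted_activity.shape)  # (50,): one predicted activity value per gene
```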

“The power of iStar stems from its advanced techniques, which mirror, in reverse, how a pathologist would study a tissue sample,” Li explained. “Just as a pathologist identifies broader regions and then zooms in on detailed cellular structures, iStar can capture the overarching tissue structures and also focus on the minutiae in a tissue image.”

To test the efficacy of the tool, Li and her colleagues evaluated iStar on many different types of cancer tissue, including breast, prostate, kidney, and colorectal cancers, mixed with healthy tissues. Within these tests, iStar was able to automatically detect tumor and cancer cells that were hard to identify just by eye. Clinicians in the future may be able to pick up and diagnose more hard-to-see or hard-to-identify cancers with iStar acting as a layer of support.

In addition to the clinical possibilities presented by the iStar technique, the tool moves extremely quickly compared to other, similar AI tools. For example, when set up with the breast cancer dataset the team used, iStar finished its analysis in just nine minutes. By contrast, the best competitor AI tool took more than 32 hours to come up with a similar analysis.

That means iStar was 213 times faster.

“The implication is that iStar can be applied to a large number of samples, which is critical in large-scale biomedical studies,” Li said. “Its speed is also important for its current extensions in 3D and biobank sample prediction. In the 3D context, a tissue block may involve hundreds to thousands of serially cut tissue slices. The speed of iStar makes it possible to reconstruct this huge amount of spatial data within a short period of time.”

And the same goes for biobanks, which store thousands, if not millions, of samples. This is where Li and her colleagues are next aiming their research and extension of iStar. They hope to help researchers gain a better understanding of the microenvironments within tissues, which could provide more data for diagnostic and treatment purposes moving forward.

AI and the future of work: 5 experts on what ChatGPT, DALL-E and other AI tools mean for artists and knowledge workers


From steam power and electricity to computers and the internet, technological advancements have always disrupted labor markets, pushing out some jobs while creating others. Artificial intelligence remains something of a misnomer – the smartest computer systems still don’t actually know anything – but the technology has reached an inflection point where it’s poised to affect new classes of jobs: artists and knowledge workers.

Specifically, the emergence of large language models – AI systems that are trained on vast amounts of text – means computers can now produce human-sounding written language and convert descriptive phrases into realistic images. The Conversation asked five artificial intelligence researchers to discuss how large language models are likely to affect artists and knowledge workers. And, as our experts noted, the technology is far from perfect, which raises a host of issues – from misinformation to plagiarism – that affect human workers.



Creativity for all – but loss of skills?

Lynne Parker, Associate Vice Chancellor, University of Tennessee

Large language models are making creativity and knowledge work accessible to all. Everyone with an internet connection can now use tools like ChatGPT or DALL-E 2 to express themselves and make sense of huge stores of information by, for example, producing text summaries.

Especially notable is the depth of humanlike expertise large language models display. In just minutes, novices can create illustrations for their business presentations, generate marketing pitches, get ideas to overcome writer’s block, or generate new computer code to perform specified functions, all at a level of quality typically attributed to human experts.

These new AI tools can’t read minds, of course. A new, yet simpler, kind of human creativity is needed in the form of text prompts to get the results the human user is seeking. Through iterative prompting – an example of human-AI collaboration – the AI system generates successive rounds of outputs until the human writing the prompts is satisfied with the results. For example, the (human) winner of the recent Colorado State Fair competition in the digital artist category, who used an AI-powered tool, demonstrated creativity, but not of the sort that requires brushes and an eye for color and texture.

While there are significant benefits to opening the world of creativity and knowledge work to everyone, these new AI tools also have downsides. First, they could accelerate the loss of human skills that will remain important in the coming years, especially writing skills. Educational institutions need to craft and enforce policies on allowable uses of large language models to ensure fair play and desirable learning outcomes. Educators are preparing for a world where students have ready access to AI-powered text generators.

Second, these AI tools raise questions around intellectual property protections. While human creators are regularly inspired by existing artifacts in the world, including architecture and the writings, music and paintings of others, there are unanswered questions on the proper and fair use by large language models of copyrighted or open-source training examples. Ongoing lawsuits are now debating this issue, which may have implications for the future design and use of large language models.

As society navigates the implications of these new AI tools, the public seems ready to embrace them. The chatbot ChatGPT went viral quickly, as did the image generator DALL-E mini and others. This suggests a huge untapped potential for creativity, and underscores the importance of making creative and knowledge work accessible to all.


Potential inaccuracies, biases and plagiarism

Daniel Acuña, Associate Professor of Computer Science, University of Colorado Boulder

I am a regular user of GitHub Copilot, a tool for helping people write computer code, and I’ve spent countless hours playing with ChatGPT and similar tools for AI-generated text. In my experience, these tools are good at exploring ideas that I haven’t thought about before.

I’ve been impressed by the models’ capacity to translate my instructions into coherent text or code. They are useful for discovering new ways to improve the flow of my ideas, or creating solutions with software packages that I didn’t know existed. Once I see what these tools generate, I can evaluate their quality and edit heavily. Overall, I think they raise the bar on what is considered creative.

But I have several reservations.

One set of problems is their inaccuracies – small and big. With Copilot and ChatGPT, I am constantly looking for whether ideas are too shallow – for example, text without much substance or inefficient code, or output that is just plain wrong, such as wrong analogies or conclusions, or code that doesn’t run. If users are not critical of what these tools produce, the tools are potentially harmful.

Recently, Meta shut down its Galactica large language model for scientific text because it made up “facts” but sounded very confident. The concern was that it could pollute the internet with confident-sounding falsehoods.

Another problem is biases. Language models can learn from the data’s biases and replicate them. These biases are hard to see in text generation but very clear in image generation models. Researchers at OpenAI, creators of ChatGPT, have been relatively careful about what the model will respond to, but users routinely find ways around these guardrails.

Another problem is plagiarism. Recent research has shown that image generation tools often plagiarize the work of others. Does the same happen with ChatGPT? I believe that we don’t know. The tool might be paraphrasing its training data – an advanced form of plagiarism. Work in my lab shows that text plagiarism detection tools are far behind when it comes to detecting paraphrasing.

Plagiarism is easier to see in images than in text. Is ChatGPT paraphrasing as well?

These tools are in their infancy, given their potential. For now, I believe there are solutions to their current limitations. For example, tools could fact-check generated text against knowledge bases, use updated methods to detect and remove biases from large language models, and run results through more sophisticated plagiarism detection tools.


With humans surpassed, niche and ‘handmade’ jobs will remain

Kentaro Toyama, Professor of Community Information, University of Michigan

We human beings love to believe in our specialness, but science and technology have repeatedly proved this conviction wrong. People once thought that humans were the only animals to use tools, to form teams or to propagate culture, but science has shown that other animals do each of these things.

Meanwhile, technology has quashed, one by one, claims that cognitive tasks require a human brain. The first adding machine was invented in 1623. This past year, a computer-generated work won an art contest. I believe that the singularity – the moment when computers meet and exceed human intelligence – is on the horizon.

How will human intelligence and creativity be valued when machines become smarter and more creative than the brightest people? There will likely be a continuum. In some domains, people still value humans doing things, even if a computer can do it better. It’s been a quarter of a century since IBM’s Deep Blue beat world champion Garry Kasparov, but human chess – with all its drama – hasn’t gone away.

Cosmopolitan magazine used DALL-E 2 to produce this cover.

In other domains, human skill will seem costly and extraneous. Take illustration, for example. For the most part, readers don’t care whether the graphic accompanying a magazine article was drawn by a person or a computer – they just want it to be relevant, new and perhaps entertaining. If a computer can draw well, do readers care whether the credit line says Mary Chen or System X? Illustrators would, but readers might not even notice.

And, of course, this question isn’t black or white. Many fields will be a hybrid, where some Homo sapiens find a lucky niche, but most of the work is done by computers. Think manufacturing – much of it today is accomplished by robots, but some people oversee the machines, and there remains a market for handmade products.

If history is any guide, it’s almost certain that advances in AI will cause more jobs to vanish, that creative-class people with human-only skills will become richer but fewer in number, and that those who own creative technology will become the new mega-rich. If there’s a silver lining, it might be that when even more people are without a decent livelihood, people might muster the political will to contain runaway inequality.


Old jobs will go, new jobs will emerge

Mark Finlayson, Associate Professor of Computer Science, Florida International University

Large language models are sophisticated sequence completion machines: Give one a sequence of words (“I would like to eat an …”) and it will return likely completions (“… apple.”). Large language models like ChatGPT that have been trained on record-breaking numbers of words (trillions) have surprised many, including many AI researchers, with how realistic, extensive, flexible and context-sensitive their completions are.
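
To make the “sequence completion machine” framing concrete, here is a toy word-level completer built from a handful of sentences. A real large language model is trained on trillions of words and learns far richer, context-sensitive patterns, but the basic contract (sequence in, likely continuation out) is the same; the corpus and behavior below are purely illustrative.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model would see trillions of words.
corpus = (
    "i would like to eat an apple . "
    "i would like to eat an apple pie . "
    "i would like to eat an orange . "
    "i would like to read a book ."
).split()

# Count which word tends to follow which.
next_word = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    next_word[w1][w2] += 1

def complete(prompt, n_words=1):
    """Greedily append the most likely next word(s) to the prompt."""
    words = prompt.lower().split()
    for _ in range(n_words):
        candidates = next_word[words[-1]]
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("I would like to eat an"))  # i would like to eat an apple
```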

Like any powerful new technology that automates a skill – in this case, the generation of coherent, albeit somewhat generic, text – it will affect those who offer that skill in the marketplace. To conceive of what might happen, it is useful to recall the impact of the introduction of word processing programs in the early 1980s. Certain jobs like typist almost completely disappeared. But, on the upside, anyone with a personal computer was able to generate well-typeset documents with ease, broadly increasing productivity.

Further, new jobs and skills appeared that were previously unimagined, like the oft-included resume item MS Office. And the market for high-end document production remained, becoming much more capable, sophisticated and specialized.

I think this same pattern will almost certainly hold for large language models: There will no longer be a need for you to ask other people to draft coherent, generic text. On the other hand, large language models will enable new ways of working, and also lead to new and as yet unimagined jobs.

To see this, consider just three aspects where large language models fall short. First, it can take quite a bit of (human) cleverness to craft a prompt that gets the desired output. Minor changes in the prompt can result in a major change in the output.

Second, large language models can generate inappropriate or nonsensical output without warning.

Third, as far as AI researchers can tell, large language models have no abstract, general understanding of what is true or false, if something is right or wrong, and what is just common sense. Notably, they cannot do relatively simple math. This means that their output can unexpectedly be misleading, biased, logically faulty or just plain false.

These failings are opportunities for creative and knowledge workers. For much content creation, even for general audiences, people will still need the judgment of human creative and knowledge workers to prompt, guide, collate, curate, edit and especially augment machines’ output. Many types of specialized and highly technical language will remain out of reach of machines for the foreseeable future. And there will be new types of work – for example, those who will make a business out of fine-tuning in-house large language models to generate certain specialized types of text to serve particular markets.

In sum, although large language models certainly portend disruption for creative and knowledge workers, there are still many valuable opportunities in the offing for those willing to adapt to and integrate these powerful new tools.


Leaps in technology lead to new skills

Casey Greene, Professor of Biomedical Informatics, University of Colorado Anschutz Medical Campus

Technology changes the nature of work, and knowledge work is no different. Over the past two decades, biology and medicine have been transformed by rapidly advancing molecular characterization, such as fast, inexpensive DNA sequencing, and by the digitization of medicine in the form of apps, telemedicine and data analysis.

Some steps in technology feel larger than others. Yahoo deployed human curators to index emerging content during the dawn of the World Wide Web. The advent of algorithms that used information embedded in the linking patterns of the web to prioritize results radically altered the landscape of search, transforming how people gather information today.

The release of OpenAI’s ChatGPT indicates another leap. ChatGPT wraps a state-of-the-art large language model tuned for chat into a highly usable interface. It puts a decade of rapid progress in artificial intelligence at people’s fingertips. This tool can write passable cover letters and instruct users on addressing common problems in user-selected language styles.

Just as the skills for finding information on the internet changed with the advent of Google, the skills necessary to draw the best output from language models will center on creating prompts and prompt templates that produce desired outputs.

For the cover letter example, multiple prompts are possible. “Write a cover letter for a job” would produce a more generic output than “Write a cover letter for a position as a data entry specialist.” The user could craft even more specific prompts by pasting portions of the job description, resume and specific instructions – for example, “highlight attention to detail.”
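
A minimal sketch of what such a prompt template can look like in practice follows. The field names, wording and company are invented for illustration; the filled-in string would simply be sent to whatever chat model the user has access to.

```python
from string import Template

# Hypothetical reusable prompt template for the cover-letter example above.
COVER_LETTER_PROMPT = Template(
    "Write a cover letter for a position as a $role at $company. "
    "Relevant excerpts from the job description: $job_excerpt. "
    "Resume highlights to draw on: $resume_highlights. "
    "Specific instructions: $instructions."
)

prompt = COVER_LETTER_PROMPT.substitute(
    role="data entry specialist",
    company="Acme Corp",  # invented example company
    job_excerpt="maintain records with 99.9% accuracy; coordinate with billing",
    resume_highlights="five years of records management; advanced spreadsheet skills",
    instructions="highlight attention to detail",
)
print(prompt)  # a far more specific prompt than "Write a cover letter for a job"
```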

As with many technological advances, how people interact with the world will change in the era of widely accessible AI models. The question is whether society will use this moment to advance equity or exacerbate disparities.