The Truth About Sentient AI: Could Machines Ever Really Think or Feel?


“We’re talking about more than just code; we’re talking about the ability of a machine to think and to feel, along with having morality and spirituality,” a scientist tells us.


Amid the surge of interest in large language model bots like ChatGPT, an Oxford philosopher recently claimed that artificial intelligence has shown traces of sentience. We tend to treat machine sentience as an all-or-nothing threshold, but Nick Bostrom, Ph.D., argues we should see it as more of a sliding scale. “If you admit that it’s not an all-or-nothing thing … some of these [AI] assistants might plausibly be candidates for having some degree of sentience,” he told The New York Times.

To make sense of Bostrom’s claim, we need to understand what sentience is and how it differs from consciousness in the context of AI. The two phenomena are closely related and were debated in philosophy long before artificial intelligence entered the picture. It’s no accident, then, that sentience and consciousness are often conflated.

Plain and simple, all sentient beings are conscious beings, but not all conscious beings are sentient. But what does that actually mean?

Consciousness

Consciousness is your own awareness that you exist. It’s what makes you a thinking being, separating you from bacteria, archaea, protists, fungi, plants, and certain animals. Consciousness is what allows your brain to make sense of things in your environment; think of it as how we learn by doing. American psychologist William James described consciousness as a continuously moving, shifting, and unbroken stream, hence the term “stream of consciousness.”

Sentience

Star Trek: The Next Generation framed sentience as consciousness, self-awareness, and intelligence, and that was actually pretty spot-on. Sentience is the innate ability to experience feelings and sensations without association or interpretation. “We’re talking about more than just code; we’re talking about the ability of a machine to think and to feel, along with having morality and spirituality,” Ishaani Priyadarshini, a Ph.D. candidate in cybersecurity at the University of Delaware, tells Popular Mechanics.

💡AI is very clever and able to mimic sentience, but it never actually becomes sentient itself.

Philosophical Difficulties

The very idea of consciousness has been heavily contested in philosophy for centuries. The 17th-century philosopher René Descartes famously wrote, “I think, therefore I am.” A simple statement on the surface, but it was the result of his search for a statement that couldn’t be doubted. Think about it: he couldn’t doubt his own existence, because he was the one doing the doubting in the first place.

Multiple theories address the biological basis of consciousness, but there’s still little agreement on which should be taken as gospel. The two main schools of thought differ on whether consciousness is a result of neurons firing in our brains or whether it exists completely independently of them. Meanwhile, much of the work done to identify consciousness in AI systems merely checks whether they can think and perceive the way we do, with the Turing Test serving as the unofficial industry standard.

While we have reason to believe AI can exhibit conscious behaviors, it doesn’t experience consciousness, or sentience for that matter, the way we do. Priyadarshini says AI relies heavily on mimicry and data-driven decision-making, which means it could theoretically be trained to display leadership skills; it could feign the business acumen needed to work through difficult business decisions by crunching data, for example. This fake-it-till-you-make-it strategy makes it incredibly difficult to determine whether AI is truly conscious or sentient.

Can We Test For Sentience or Consciousness?

Many look at the Turing Test as the first standardized evaluation for discovering consciousness and sentience in computers. While it has been highly influential, it’s also been widely criticized.

In 1950, Alan Turing created the Turing Test, initially known as the Imitation Game, in an effort to discover whether computing “machines” could exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Human evaluators engage in blind, text-based conversations with both a human and a computer. The computer passes the test if its conversational skill dupes the evaluators into being unable to reliably tell it apart from the human participant, which some take as a sign of sentience.
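
For a rough sense of the test’s mechanics, here’s a minimal, purely illustrative Python sketch of the imitation game’s structure. The machine_reply and human_reply functions are hypothetical stand-ins (a real test would wire in an actual chatbot and a live human participant), and a single guess is nothing like the repeated, controlled trials a serious evaluation would require.

    import random

    # Toy sketch of the imitation game's structure; not a real evaluation.
    # machine_reply is a hypothetical stand-in for an actual chatbot.
    def machine_reply(prompt: str) -> str:
        return f"That's a fascinating question about {prompt.split()[-1]}"

    def human_reply(prompt: str) -> str:
        # Stand-in for a live human participant typing an answer.
        return input(f"(human) {prompt} > ")

    def imitation_game(questions: list[str]) -> bool:
        """The evaluator chats blindly with seats A and B, then guesses
        which seat hides the machine. Returns True if the machine fooled them."""
        machine_seat = random.choice(["A", "B"])  # blind seat assignment
        for question in questions:
            print(f"\nEvaluator asks: {question}")
            for seat in ("A", "B"):
                reply = machine_reply(question) if seat == machine_seat else human_reply(question)
                print(f"  {seat}: {reply}")
        guess = input("\nWhich seat is the machine, A or B? ").strip().upper()
        return guess != machine_seat  # the machine "passes" if the guess is wrong

    if __name__ == "__main__":
        fooled = imitation_game(["What do you dream about?", "Describe the smell of rain."])
        print("The machine passed." if fooled else "The machine was caught.")

The point of the sketch is structural: nothing in that loop measures thinking or feeling. It only measures whether an evaluator can tell the difference.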

The Turing Test has returned to the spotlight with AI models like ChatGPT that are tailor-made to replicate human speech. We’ve seen conflicting claims about whether ChatGPT has actually passed the Turing Test, but its abilities are undeniable; for perspective, the famed AI model has passed the bar exam, the SAT, and even select Chartered Financial Analyst (CFA) exams. Still, many experts believe we need an updated test to evaluate this latest AI tech, and that we may be looking at AI the wrong way entirely.

Alternatives to the Turing Test

Many experts have stated that it’s time to create a new Turing Test that provides a more realistic measure of AI’s capabilities. For instance, Mustafa Suleyman’s recent book, The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma, not only proposes a new benchmark, but also argues that our understanding of AI needs to change. The book describes a misplaced narrative around AI’s ability to match or surpass the intelligence of a human being, sometimes referred to as artificial general intelligence.

Rather, Suleyman believes in what he calls artificial capable intelligence (ACI), which refers to programs that can complete tasks with little human interaction. His next-generation Turing Test asks an AI to build a business game plan capable of turning $100,000 of seed money into $1 million. The plan would center on e-commerce: putting together a blueprint for a product and a strategy for selling it through platforms like Alibaba, Amazon, or Walmart. AI systems are currently unable to pass this theoretical test, but that hasn’t stopped wannabe entrepreneurs from asking ChatGPT to dream up the next great business idea. Regardless, sentience remains a moving target.

Suleyman writes in his book that he doesn’t really care about what AI can say. He cares about what it can do. And we think that really says it all.

What Happens If AI Becomes Sentient?

We often see a fair amount of doom and gloom associated with AI systems becoming sentient and reaching the point of singularity, defined as the moment machine intelligence equals or surpasses that of humans. Some trace the first glimpse of machines outmatching human minds to 1997, when IBM’s Deep Blue supercomputer beat Garry Kasparov at chess.

In reality, our biggest challenge with AI reaching singularity is eliminating bias while programming these systems. I always revisit Priyadarshini’s 21st-century version of the Trolley Problem: a driverless car carrying a passenger approaches an intersection when five pedestrians suddenly step into the road, leaving little time to react. Swerving out of the way would save the pedestrians, but the resulting crash would kill the passenger. The Trolley Problem is a moral dilemma that weighs what outcome is good against what sacrifices are acceptable.

AI is currently nothing more than decision-making based on rules and parameters, so what happens when it has to make ethical decisions? We don’t know. Beyond the confines of the Trolley Problem, Bostrom notes that with room for AI to learn and grow, there’s a chance these large language models will be able to develop consciousness, but the resulting capabilities are still unknown. We don’t actually know what sentient AI would be capable of doing, because we’re not superintelligent ourselves.
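
To see what “rules and parameters” really means here, consider a deliberately simplistic Python sketch; every option, number, and rule below is invented for illustration, and no real autonomous-driving system works this way. A rule-based system doesn’t deliberate; it mechanically executes whatever moral weighting its programmer wrote down, which is exactly why the Trolley Problem is so uncomfortable to encode.

    from dataclasses import dataclass

    # Toy illustration of rule-based "ethics"; all values are made up.
    @dataclass
    class Outcome:
        action: str
        pedestrian_deaths: int
        passenger_deaths: int

    def choose(outcomes: list[Outcome]) -> Outcome:
        # A naive utilitarian rule: minimize total deaths, breaking ties
        # in favor of the passenger. This one line IS the ethical decision;
        # the program just executes the judgment its programmer encoded.
        return min(
            outcomes,
            key=lambda o: (o.pedestrian_deaths + o.passenger_deaths,
                           o.passenger_deaths),
        )

    options = [
        Outcome("brake and stay in lane", pedestrian_deaths=5, passenger_deaths=0),
        Outcome("swerve into the barrier", pedestrian_deaths=0, passenger_deaths=1),
    ]
    print(choose(options).action)  # prints "swerve into the barrier"

Change the tie-breaker or weight either term differently, and the “right” answer flips. That’s bias smuggled in as a parameter, not ethics reasoned out by a machine.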