‘Forever chemicals’ have infiltrated food packaging on a wide scale


Nearly 70 “forever chemicals”, also known as PFAS, are commonly found in materials that come into contact with food, some of which have been linked to negative health outcomes

Potentially hazardous chemicals may be in food packaging

Food packaging and utensils commonly contain up to 68 “forever chemicals” that carry possible health risks, and regulators may be unaware that many of them are present.

Perfluoroalkyl and polyfluoroalkyl substances (PFAS) are a class of synthetic chemicals that are used to produce goods such as non-stick cookware and waterproof clothing. The bonds between the carbon and fluorine atoms in PFAS are so strong that it can take hundreds to thousands of years for them to break down.

Many of these chemicals have been linked to harmful health outcomes, including cancer and reproductive and immune problems.

“There are thousands of these chemicals,” says Birgit Geueke at the Food Packaging Forum organisation in Switzerland. “We wanted to get a picture of what is known about the presence of PFAS in food packaging.”

Geueke and her colleagues analysed 1312 studies carried out around the world that detailed chemicals found in materials that come into contact with food, whether during manufacturing, packaging or cooking. They then cross-referenced these chemicals against a list of known PFAS.
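
As a rough illustration of that cross-referencing step (not the team’s actual data or code), matching detected chemicals against a reference list of PFAS amounts to a simple set intersection; the chemical identifiers and names below are examples only.

```python
# Hypothetical sketch of cross-referencing detected chemicals against a PFAS list.
# The CAS registry numbers and names are illustrative, not the study's dataset.

# Chemicals reported in food-contact studies, keyed by CAS number
detected_in_food_contact = {
    "335-67-1": "perfluorooctanoic acid (PFOA)",
    "1763-23-1": "perfluorooctanesulfonic acid (PFOS)",
    "50-00-0": "formaldehyde",  # not a PFAS
}

# Reference list of known PFAS, e.g. compiled from regulatory inventories
known_pfas = {"335-67-1", "1763-23-1", "375-73-5"}

# The intersection gives the PFAS confirmed in food-contact materials
pfas_in_food_contact = {
    cas: name for cas, name in detected_in_food_contact.items() if cas in known_pfas
}

print(f"{len(pfas_in_food_contact)} of {len(detected_in_food_contact)} "
      f"detected chemicals are known PFAS")
```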

The team discovered that 68 PFAS are commonly found across materials that come into contact with food, such as packaging and cookware. Of these, 61 weren’t previously known to be present in such materials and therefore haven’t been included on regulatory lists that dictate the use of PFAS.

Just 39 of the 68 PFAS have been examined for toxicity. One of the substances that has been analysed is perfluorooctanoic acid, which is classified as possibly cancer-causing to people, based on limited evidence that it can cause testicular and kidney cancer, says Geueke.

“I think it should be the responsibility of the manufacturers to make sure that PFAS are used as little as possible,” she says. Regulators around the world are working in the right direction, she says.

For example, there was a recent proposal in the European Union to ban most PFAS. And in February, the US Food and Drug Administration announced that certain grease-proofing materials containing PFAS will no longer be sold for use in food packaging.

The Truth About Sentient AI: Could Machines Ever Really Think or Feel?


“We’re talking about more than just code; we’re talking about the ability of a machine to think and to feel, along with having morality and spirituality,” a scientist tells us.


Amid the surge of interest in large language model bots like ChatGPT, an Oxford philosopher recently claimed that artificial intelligence has shown traces of sentience. We tend to treat AI reaching the singularity as a moving goalpost, but Nick Bostrom, Ph.D., argues that we should look at it as more of a sliding scale. “If you admit that it’s not an all-or-nothing thing … some of these [AI] assistants might plausibly be candidates for having some degree of sentience,” he told The New York Times.

To make sense of Bostrom’s claim, we need to understand what sentience is and how it differs from consciousness within the confines of AI. The two phenomena are closely related and were discussed in philosophy long before artificial intelligence entered the picture. It’s no accident, then, that sentience and consciousness are often conflated.

Plain and simple, all sentient beings are conscious beings, but not all conscious beings are sentient. But what does that actually mean?

Consciousness

Consciousness is your own awareness that you exist. It’s what makes you a thinking, sentient being—separating you from bacteria, archaea, protists, fungi, plants, and certain animals. As an example, consciousness allows your brain to make sense of things in your environment—think of it as how we learn by doing. American psychologist William James explains consciousness as a continuously moving, shifting, and unbroken stream—hence the term “stream of consciousness.”

Sentience

Star Trek: The Next Generation looks at sentience as consciousness, self-awareness, and intelligence—and that was actually pretty spot on. Sentience is the innate human ability to experience feelings and sensations without association or interpretation. “We’re talking about more than just code; we’re talking about the ability of a machine to think and to feel, along with having morality and spirituality,” Ishaani Priyadarshini, a Cybersecurity Ph.D. candidate from the University of Delaware, tells Popular Mechanics.

💡 AI is very clever and able to mimic sentience, but it has never actually become sentient itself.

Philosophical Difficulties

The very idea of consciousness has been heavily contested in philosophy for centuries. The 17th-century philosopher René Descartes famously said, “I think, therefore I am.” A simple statement on the surface, but it was the result of his search for a statement that couldn’t be doubted. Think about it: he couldn’t doubt his own existence, because he was the one doing the doubting in the first place.

Multiple theories address the biological basis of consciousness, but there’s still little agreement on which should be taken as gospel. The two main schools of thought differ on whether consciousness is a result of neurons firing in our brains or whether it exists completely independently of us. Meanwhile, much of the work that’s been done to identify consciousness in AI systems merely looks at whether they can think and perceive the way we do—with the Turing Test being the unofficial industry standard.

While we have reason to believe AI can exhibit conscious behaviors, it doesn’t experience consciousness—or sentience, for that matter—in the same way that we do. Priyadarshini says AI involves a lot of mimicry and data-driven decision making, meaning it could theoretically be trained to show leadership skills. It could, for example, feign the business acumen needed to work through difficult business decisions by crunching data. AI’s current fake-it-till-you-make-it strategy makes it incredibly difficult to determine whether it’s truly conscious or sentient.

Can We Test For Sentience or Consciousness?


Many look at the Turing Test as the first standardized evaluation for discovering consciousness and sentience in computers. While it has been highly influential, it’s also been widely criticized.

In 1950, Alan Turing created the Turing Test—initially known as the Imitation Game—in an effort to discover whether computing “machines” could exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Human evaluators would engage in blind, text-based conversations with a human and a computer. The computer passes the test if its conversational virtuosity dupes the evaluator into being unable to reliably tell it apart from the human participant; by this standard, the system would be deemed sentient.
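
For readers who want the setup made concrete, here is a minimal, purely illustrative sketch of the blind evaluation loop described above; the reply functions and the single-round scoring are hypothetical stand-ins, not Turing’s full original protocol.

```python
# Minimal illustrative sketch of the imitation game: an evaluator reads two
# anonymous answers and tries to say which one came from the machine.
# The reply functions below are hypothetical stand-ins.
import random

def human_reply(prompt: str) -> str:
    # In a real run, a person would type this answer.
    return input(f"(human participant) {prompt}\n> ")

def machine_reply(prompt: str) -> str:
    # Stand-in for a chatbot; a real test would call a conversational model.
    return "That's an interesting question. I'd say it depends on the context."

def one_round(prompt: str) -> bool:
    """Return True if the machine 'passes' (the evaluator misidentifies it)."""
    participants = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(participants)  # the evaluator must not know which is which
    for i, (_, reply) in enumerate(participants, start=1):
        print(f"Answer {i}: {reply(prompt)}")
    guess = int(input("Which answer came from the machine (1 or 2)? > "))
    return participants[guess - 1][0] != "machine"
```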

The Turing Test has returned to the spotlight with AI models like ChatGPT that are tailor-made to replicate human speech. We’ve seen conflicting claims about whether ChatGPT has actually passed the Turing Test, but its abilities remain apparent; for perspective, the famed AI model has passed the Bar exam, the SAT, and even select Chartered Financial Analyst (CFA) exams. That’s all well and good, but the fact still remains that many experts believe we need an updated test to evaluate this latest AI tech—and that we are possibly looking at AI completely wrong.

Alternatives to the Turing Test

Many experts have said that it’s time to create a new Turing Test that provides a more realistic measure of AI’s capabilities. For instance, Mustafa Suleyman recently published his book, The Coming Wave: Technology, Power, and the Twenty-First Century’s Greatest Dilemma, which not only proposes a new benchmark, but also argues that our understanding of AI needs to change. The book describes a misplaced narrative about AI’s ability to match or surpass the intelligence of a human being—sometimes referred to as artificial general intelligence.

Rather, Suleyman believes in what he calls artificial capable intelligence (ACI), which refers to programs that can complete tasks with little human interaction. His next-generation Turing Test asks an AI to build a business game plan that could turn $100,000 of seed money into $1 million. The plan would center on e-commerce: putting together a blueprint for a product and how it would be sold on platforms such as Alibaba, Amazon, or Walmart. AI systems are currently unable to pass this theoretical test, but that hasn’t stopped wannabe entrepreneurs from asking ChatGPT to dream up the next great business idea. Regardless, sentience remains a moving target that we can aim for.

Suleyman writes in his book that he doesn’t really care about what AI can say. He cares about what it can do. And we think that really says it all.

What Happens If AI Becomes Sentient?

We often see a fair amount of doom and gloom associated with AI systems becoming sentient and reaching the point of singularity—the moment machine intelligence equals or surpasses that of humans. Some point to signs as early as 1997, when IBM’s Deep Blue supercomputer beat Garry Kasparov in a chess match.

In reality, our biggest challenge with AI reaching singularity is eliminating bias while programming these systems. I always revisit Priyadarshini’s 21st-century version of the Trolley Problem: a hypothetical scenario in which a driverless car carrying one occupant approaches an intersection. Five pedestrians jump into the road and there’s little time to react. Swerving out of the way would save the pedestrians, but the resulting crash would kill the occupant. The Trolley Problem is a moral dilemma that weighs a greater good against the sacrifices required to achieve it.

AI is currently nothing more than decision-making based on rules and parameters, so what happens when it has to make ethical decisions? We don’t know. Away from the confines of the Trolley Problem, Bostrom notes that, given room for AI to learn and grow, there’s a chance these large language models will be able to develop consciousness, but the resulting capabilities are still unknown. We don’t actually know what sentient AI would be capable of doing, because we’re not superintelligent ourselves.

Can AI Be Controlled?


Summary: Dr. Roman V. Yampolskiy, an AI Safety expert, warns of the unprecedented risks associated with artificial intelligence in his forthcoming book, AI: Unexplainable, Unpredictable, Uncontrollable. Through an extensive review, Yampolskiy reveals a lack of evidence proving AI can be safely controlled, pointing out the potential for AI to cause existential catastrophes.

He argues that the inherent unpredictability and advanced autonomy of AI systems pose significant challenges to ensuring their safety and alignment with human values. The book emphasizes the urgent need for increased research and development in AI safety measures to mitigate these risks, advocating for a balanced approach that prioritizes human control and understanding.

Key Facts:

  1. Dr. Yampolskiy’s review found no concrete evidence that AI can be entirely controlled, suggesting that the development of superintelligent AI could lead to outcomes as dire as human extinction.
  2. The complexity and autonomy of AI systems make it difficult to predict their decisions or ensure their actions align with human values, raising concerns over their potential to act in ways that could harm humanity.
  3. Yampolskiy proposes that minimizing AI risks requires transparent, understandable, and modifiable systems, alongside increased efforts in AI safety research.

Source: Taylor and Francis Group

There is no current evidence that AI can be controlled safely, according to an extensive review, and without proof that AI can be controlled, it should not be developed, a researcher warns.

Despite the recognition that the problem of AI control may be one of the most important problems facing humanity, it remains poorly understood, poorly defined, and poorly researched, Dr Roman V. Yampolskiy explains.


In his upcoming book, AI: Unexplainable, Unpredictable, Uncontrollable, AI Safety expert Dr Yampolskiy looks at the ways that AI has the potential to dramatically reshape society, not always to our advantage.

He explains: “We are facing an almost guaranteed event with potential to cause an existential catastrophe. No wonder many consider this to be the most important problem humanity has ever faced. The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance.”

Uncontrollable superintelligence

Dr Yampolskiy has carried out an extensive review of AI scientific literature and states he has found no proof that AI can be safely controlled – and even if there are some partial controls, they would not be enough.

He explains: “Why do so many researchers assume that AI control problem is solvable? To the best of our knowledge, there is no evidence for that, no proof. Before embarking on a quest to build a controlled AI, it is important to show that the problem is solvable.

“This, combined with statistics that show the development of AI superintelligence is an almost guaranteed event, show we should be supporting a significant AI safety effort.”

He argues our ability to produce intelligent software far outstrips our ability to control or even verify it. After a comprehensive literature review, he suggests advanced intelligent systems can never be fully controllable and so will always present a certain level of risk, regardless of the benefit they provide. He believes it should be the goal of the AI community to minimize such risk while maximizing the potential benefit.

What are the obstacles?

AI (and superintelligence) differs from other programs in its ability to learn new behaviors, adjust its performance, and act semi-autonomously in novel situations.

One issue with making AI ‘safe’ is that the possible decisions and failures of a superintelligent being, as it becomes more capable, are infinite, so there are an infinite number of safety issues. Simply predicting the issues may not be possible, and mitigating against them in security patches may not be enough.

At the same time, Yampolskiy explains, AI cannot explain what it has decided, and/or we cannot understand the explanation given, as humans are not smart enough to understand the concepts implemented. If we do not understand AI’s decisions and only have a ‘black box’, we cannot understand the problem or reduce the likelihood of future accidents.

For example, AI systems are already being tasked with making decisions in healthcare, investing, employment, banking and security, to name a few. Such systems should be able to explain how they arrived at their decisions, particularly to show that they are bias free.

Yampolskiy explains: “If we grow accustomed to accepting AI’s answers without an explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers.”

Controlling the uncontrollable

As the capability of AI increases, its autonomy also increases but our control over it decreases, Yampolskiy explains, and increased autonomy is synonymous with decreased safety.

For example, for a superintelligence to avoid acquiring inaccurate knowledge and to remove all bias inherited from its programmers, it could ignore all such knowledge and rediscover or re-prove everything from scratch, but that would also remove any pro-human bias.

“Less intelligent agents (people) can’t permanently control more intelligent agents (ASIs). This is not because we may fail to find a safe design for superintelligence in the vast space of all possible designs, it is because no such design is possible, it doesn’t exist. Superintelligence is not rebelling, it is uncontrollable to begin with,” he explains.

“Humanity is facing a choice, do we become like babies, taken care of but not in control or do we reject having a helpful guardian but remain in charge and free.”

He suggests that an equilibrium point could be found at which we sacrifice some capability in return for some control, at the cost of providing the system with a certain degree of autonomy.

Aligning human values

One control suggestion is to design a machine which precisely follows human orders, but Yampolskiy points out the potential for conflicting orders, misinterpretation or malicious use.

He explains: “Humans in control can result in contradictory or explicitly malevolent orders, while AI in control means that humans are not.”

If AI acted more as an advisor, it could bypass issues with misinterpretation of direct orders and the potential for malevolent orders, but the author argues that to be a useful advisor, AI must have its own superior values.

“Most AI safety researchers are looking for a way to align future superintelligence to values of humanity. Value-aligned AI will be biased by definition, pro-human bias, good or bad is still a bias. The paradox of value-aligned AI is that a person explicitly ordering an AI system to do something may get a “no” while the system tries to do what the person actually wants. Humanity is either protected or respected, but not both,” he explains.

Minimizing risk

To minimize the risk of AI, he says it needs to be modifiable with ‘undo’ options, limitable, transparent and easy to understand in human language.

He suggests all AI should be categorised as controllable or uncontrollable, and that nothing should be taken off the table: limited moratoriums, and even partial bans on certain types of AI technology, should be considered.

Instead of being discouraged, he says: “Rather it is a reason, for more people, to dig deeper and to increase effort, and funding for AI Safety and Security research. We may not ever get to 100% safe AI, but we can make AI safer in proportion to our efforts, which is a lot better than doing nothing. We need to use this opportunity wisely.”

Today’s AI threat: More like nuclear winter than nuclear war



Last May, hundreds of leading figures in AI research and development signed a one-sentence statement declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” While ongoing advances in AI clearly demand urgent policy responses, recent attempts to equate AI with the sudden and extreme immediate effects of launching nuclear weapons rest on a misleadingly simple analogy—one that dates back to the early days of the Cold War and ignores important later developments in how nuclear threats are understood. Instead of an all-or-nothing thermonuclear war analogy, a more productive way to approach AI is as a disruption to global systems that more closely resembles the uncertain and complex cascades of a nuclear winter.

Over the last year and a half, headline-grabbing news has fed into the hype around the awe-inspiring potential capabilities of AI. However, while public commentators brace for the rise of the machine overlords, artificial un-intelligence is already kicking off chains of widespread societal disruption. AI-powered disinformation sows distrust, social media algorithms increase polarization, and mass-produced synthetic media degrade democratic engagement while undermining our shared sense of reality.

Uncritically equating acute nuclear attack effects and AI threats risks reproducing the same kind of all-or-nothing thinking that drove some of the most dangerous dynamics of the nuclear arms race. Drawing these analogies also unduly distracts from the dramatic damage that even a comparatively “small” nuclear war or “dumb” AI can cause to today’s interconnected social, ecological, and political systems. Rather than fear a future AI apocalypse, policymakers should recognize that the world is already living through something like the early days of an AI nuclear winter and develop effective frameworks for regulation that factor in how it is disrupting political, social, and ecological systems in unpredictable ways today. Overemphasizing speculative dangers of superintelligence (systems that exceed human intelligence) jeopardizes urgently needed efforts to regulate AI with a view to the systemic impacts of actual and emerging capacities.

Nuclear risk revisited. In 1961, John F. Kennedy warned that “every man, woman and child lives under a nuclear sword of Damocles” based on the contemporary concern that global fallout from thermonuclear war could poison every living being. The Cuban Missile Crisis of October 1962 came within a hair’s breadth of bringing the sword down, elevating nuclear fear to an unprecedented pitch. That very same month, computer pioneer Irving J. Good said “the survival of man depends on the early construction of an ultraintelligent machine.” Such a machine would surpass human intelligence, and Good proposed that human beings stood poised on the cusp of unleashing a self-reinforcing artificial intelligence explosion that could transform human existence just as totally as thermonuclear war. “Ultraintelligence,” he noted, would possess the transcendent power to either solve all human problems or destroy all human life, becoming “the last invention that man need ever make.”

Over the years, this simple and compelling vision of a sudden and transformative AI apocalypse has persisted almost unchanged. Computer scientist Vernor Vinge rechristened Good’s “intelligence explosion” the singularity in the 1990s, further warning that if it cannot be averted or contained, AI could cause “the physical extinction of the human race.” Good’s misgivings finally went mainstream a half-century later with the publication of philosopher Nick Bostrom’s book Superintelligence, which warned of an impending runaway AI that could see “humanity deposed from its position as apex cogitator over the course of an hour or two”—a transformation so sudden and total that its only “precedent outside myth and religion” would be global thermonuclear war.

At the same time, while visions of extinction by AI explosion remained remarkably fixed, understandings of nuclear danger underwent a sea change. After realizing that radiological risks had been slightly overstated in the 1960s, scientists first began studying the global environmental effects of nuclear weapons in the 1970s. By the early 1980s, they started to realize that the global climatic impacts of nuclear war could be nearly as devastating as the radiological harm and required far fewer weapons to trigger. The firestorms of burning cities would fill the atmosphere with soot and particles that would block sunlight, causing surface temperatures to plummet and setting off a self-reinforcing cascade of collapses across interconnected ecological, agricultural, industrial, and social systems. Subsequent studies have confirmed that the resulting “nuclear winter” would likely kill the majority of those alive today, while even a limited exchange of several hundred warheads between India and Pakistan could still kill as many as two billion by starvation in the gloom of a milder “nuclear autumn.”

Over the decades, advances in planetwide data collection and computer modeling transformed understandings of nuclear danger, replacing mistaken certainties about universal death by fallout with a growing awareness of the uncertain consequences that would follow from cascades of environmental and social breakdown. Similarly, the last several years have seen rapidly enhancing AI capacities spread to transform whole networks of human relations—with already destabilizing political and ecological consequences. Deep fakes intended to influence voters erode trust, and digital assistants and chatbots affect humans’ capacity for cooperative behavior and empathy, while producing immense carbon footprints. Just as it would take only a tiny fraction of today’s nuclear arsenals to initiate a chain of global-scale catastrophic events, humans do not need to wait for a moment when “machines begin to set their own objectives” to experience the global, interconnected, and potentially catastrophic harms AI could cause.

Today’s AI products contribute to, and accelerate, global warming and resource scarcity, from mining minerals for computation hardware to the consumption of massive amounts of electricity and water. Notably, the environmental burden of AI gets short shrift from those worried about the technology’s existential threat, as the “Statement of AI Risk” lists AI alongside nuclear war and pandemics but does not include climate change as an existential issue. Beyond environmental harms, existing AI systems can be used for nefarious purposes, such as developing new toxins. OpenAI’s large language model interface ChatGPT has been successfully prompted to share bomb-making instructions and tricked into outlining the steps to engineer the next pandemic. Although these examples still require more human input than many realize, an AI system is reportedly generating targets in Gaza, and the race is on to deploy lethal autonomous weapons systems that could reset the balance of power in volatile regions across the globe. These examples show that it does not take an intelligence explosion to cause immense harm. The ability to leverage automation and machine efficiency to global catastrophic effect is already here.

Arms race to the bottom. More insidiously, the analogy between nuclear weapons and the infinite risk-reward calculus of an “artificial intelligence explosion” reproduces the dynamics of the arms race. There are just enough similarities between the rush for nuclear and AI superiority to encourage repeating the same mistakes, with the phrase “AI arms race” becoming a common refrain. One of the clearest similarities between these cases might be that, much as the nuclear arms race with the Soviet Union was driven by spurious bomber and missile “gaps,” some of today’s most heated arms-race rhetoric hinges on overhyping China’s prowess.

A closer inspection shows that nuclear and AI arms races differ fundamentally. While building nuclear arsenals requires accessing a finite supply of enriched fissile material, AI models consist of binary code that can be infinitely copied, rapidly deployed, and flexibly adopted. This radically transforms the scale of the proliferation hazard of AI, particularly because—in contrast to the strict governmental oversight of nuclear weapons—AI development is highly commercialized and privatized. The difference in proliferation between nuclear technology and AI matters for approaches to their governance. The former can generate both explosions and electric power, but its weaponization can be measured and monitored. Current benchmarks for AI development, by contrast, are too far removed from real-world applications’ effects to usefully assess potential harm. In contrast to nuclear technology, AI is not merely of a dual-use nature. Instead, the remarkable range of activities it can transform makes it a general-purpose, enabling technology like electricity.

Where the vast build-up of the nuclear arms race signaled each adversary’s resolve to potentially destroy the world but otherwise left it intact, the headlong race towards an artificial intelligence explosion promises to radically transform the world regardless of whether its ultimate destination is ever reached (or even proves reachable).

Neither all nor nothing. Disarmingly simple analogies between AI and immediate nuclear risks not only make for powerful rhetoric but also good marketing. Whether or not developers genuinely believe that their products pose an existential threat, framing the near-term future of AI as such has granted executives of OpenAI, Anthropic, Microsoft, and Google access to high-level policy discussions at the White House, the US Senate, and the notoriously secretive Bilderberg conference. The result has been a flurry of promises by the tech firms to police themselves as they rush to release ever-more capable AI products. By encouraging the public to fixate on how these applications might end the world, AI CEOs divert attention from the urgent need to regulate the ways in which they are already actively unraveling the social, economic, and ecological support systems of billions in their drive to outrun their rivals and maximize market share.

While tech companies are stakeholders, they should not be the loudest—let alone only—voices in discussions on AI governance. Policymakers must not be distracted by the specter of superintelligence and take action that goes beyond gathering voluntary commitments from AI developers. Existing guidance and directives are a good start, but policymakers need to push forward to develop binding and enforceable legislation addressing both current and potential AI harms. For example, the Bletchley Declaration resulting from the recent summit on AI safety held by the United Kingdom government widens the horizons of concerns. Going beyond immediate issues of data privacy, bias, and transparency, it also considers the potential effects of AI on political stability, democratic processes, and the environment. However, critics note that it remains a largely symbolic and highly elite-focused agreement without actual enforcement mechanisms.

Looking to the early nuclear era can provide valuable lessons for throttling the pace for AI superiority, but these lessons are not directly translatable. The current and future globe-spanning effects of AI can only be addressed through international cooperation, most importantly between the United States and China as the two major antagonists. While the talks between presidents Joe Biden and Xi Jinping at the Asia-Pacific Economic Cooperation summit in San Francisco in mid-November did not yield specific agreements or commitments on AI regulation from either side, both parties recognized the need for international AI governance. They also showed willingness to establish formal bilateral cooperation on the issue.

However, because the proliferation hazards of AI fundamentally differ from those of nuclear weapons, limiting the arena to those with advanced AI programs, even only initially, is short-sighted. A framework of global AI governance is only as good as its weakest-governed element, so it must be stringent and inclusive from the start. Such an effort won’t be exhausted by one international body modeled after the International Atomic Energy Agency. The general-purpose nature of AI technology calls for more than one regulatory regime of mechanisms that are bound by common principles. In addition to bilateral dialogue, policymakers should closely follow and support multilateral efforts, such as the newly established High-level Advisory Board on Artificial Intelligence at the United Nations.

To be sure, refocusing on the already-unfolding complex harms of AI does not mean being complacent about the long-term and existential risks it might pose. That humans have narrowly avoided nuclear war since the Cuban Missile Crisis does not diminish the urgency of managing today’s evolving nuclear threat. Similarly, decades of unfulfilled expectations about the imminent creation of an “ultraintelligent machine” does not prove it is impossible. Should a viable path to achieving greater-than-human intelligence ever open, it will be far better to be prepared. The best way to make ready for any such eventuality begins by directly addressing the cascades of planet-wide harms that AI applications are already causing. Every step taken to mitigate ongoing damage and redirect AI development towards goals of greater justice, sustainability, and fairness will help to create societies that are better able to grapple with the unresolved legacies of nuclear weapons and the undiscovered horizons of AI.

Can solar geoengineering save the world?


The concept of solar geoengineering—blocking the sun’s radiation to slow Earth’s warming—is no longer just the realm of science fiction. In 2023, the U.S. government and the UN released reports on the topic. Whether or not solar geoengineering can save the world is up for debate, and Tony Harding, an assistant professor in the School of Public Policy, is contributing to the conversation.

Harding is an alumnus of the School of Economics and returned to Georgia Tech after a postdoc at Harvard University. He studies the impact of innovative technology on climate change policy and governance, focusing on solar geoengineering. In the eight years he’s been researching it, Harding said it’s the scale of the conversation that’s changed the most: not what the researchers are speaking about, but who they’re speaking to.

“A lot of people in the climate policy and academic realms were hesitant to talk about solar geoengineering, and I think that’s starting to change,” Harding said. “There’s definitely wider acceptance of at least talking about it, and in that way, pathways to having spaces to talk about it and research funds are opening up.”

As the idea of solar geoengineering picks up steam, Harding invites everyone to join the conversation, starting with learning about what it is, how it works, and whether or not this once-niche proposition really can save the world.

What is solar geoengineering?

The most commonly proposed method of solar geoengineering, which also goes by names such as solar radiation modification or climate intervention, uses sulfate aerosols. When injected into the Earth’s stratosphere, they reflect a small amount of the sun’s radiation—less than 1%—and reduce Earth’s surface temperature.
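
To get a feel for the scale involved, here is a back-of-the-envelope sketch (not from Harding’s research) of a zero-dimensional energy-balance model: reflecting roughly 1 percent more incoming sunlight lowers the planet’s effective equilibrium temperature by a bit over half a degree. Real climate responses involve feedbacks this toy model ignores.

```python
# Illustrative zero-dimensional energy-balance model: how reflecting ~1% more
# sunlight changes Earth's effective (equilibrium) temperature.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # approximate planetary albedo

def effective_temperature(extra_reflection: float = 0.0) -> float:
    """Equilibrium temperature (K) when a fraction of incoming sunlight is reflected."""
    absorbed = S0 * (1 - ALBEDO) * (1 - extra_reflection) / 4  # W m^-2, globally averaged
    return (absorbed / SIGMA) ** 0.25

baseline = effective_temperature()
cooled = effective_temperature(extra_reflection=0.01)  # reflect 1% more sunlight
print(f"Effective temperature drops by about {baseline - cooled:.2f} K")
```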

This option is the most popular, and the one Harding studies, because we have natural examples, he explained. Volcanoes release sulfates when they erupt, and the largest ones are strong enough to push them into the stratosphere.

“So we have evidence from the past that if sulfate aerosols make it up to the stratosphere, there’s a cooling effect,” he said. “This natural analog gives us a bit more belief that it’s going to work at least in some of the ways we expect it to in the real world and not just on a computer.”

The other two types of solar geoengineering that researchers consider most seriously are marine cloud brightening, to reflect incoming sunlight, and cirrus cloud thinning, to let heat escape more easily. Each one has pros and cons. For example, marine cloud brightening would only occur over the deepest and darkest parts of the ocean, Harding said, “which would have a non-uniform cooling effect and could lead to certain adverse outcomes.”

Stratospheric aerosol injection has a more uniform distribution and cooling effect that more closely mirrors the pattern of warming we’re experiencing. However, it comes with its own concerns, one of which is that the cooling isn’t permanent.

“If something happened to stop the deployment of the aerosols, whether it was for political or technological reasons, we would bounce right back and experience a rapid heating that we’ve never experienced before, and could have catastrophic impacts,” Harding said.

What are the costs and benefits of solar geoengineering?

This question is where Harding’s research makes the most impact. As an economist, he examines the costs and benefits of solar geoengineering to highlight the tradeoffs involved. Harding has published articles on how solar geoengineering could impact other climate change mitigation policies, how it affects income inequality, and the value of reducing uncertainty around solar geoengineering.

Cardiologists Share The 1 Food They Never (Or Rarely) Eat


Sausage, steak, doughnuts, bacon, and deep-fried chicken. Here’s why heart experts avoid these foods.


Most of us are aware that certain habits are flat-out terrible for our hearts. Smoking? Forget about it. A sedentary lifestyle — yep, that will eventually get you.

But with diet culture still running rampant, the foods that are “good” and “bad” can feel a bit murkier. The keto diet, for example, encourages piling on the bacon. And while it may help you lose weight, something about chomping on bacon every day feels — not great.

If you’re eating with your heart health in mind (and we all should be, at least a little bit!), you can read through the American Heart Association’s diet and lifestyle recommendations. Or, if you really want to cut to the chase, you can find out which foods top cardiologists avoid 99% of the time.

While none of these foods will kill you if eaten once in a while, cardiologists say these are the foods they never, or very rarely, eat.

Chopped liver

Some of us wrinkle our noses at the thought of eating chopped liver, while others absolutely love it. If you fall into the former category, you’re in luck. Dr. Eleanor Levin, a cardiologist at Stanford University, says she never eats liver.

“Liver is a red meat that’s extremely high in fat,” Levin said. “In general, I avoid red meat because it’s very high in saturated fat and trans fats, and in addition to being bad for the heart, saturated fat can provoke osteoporosis. Liver is especially bad because it’s also the organ that filters out toxins, so any toxins are typically just sitting there. I used to eat chopped liver when I was a kid, but I haven’t since I became a cardiologist.”

Breakfast sausages

Sad, but true: Dr. Elizabeth Klodas, a cardiologist based in Minneapolis, said she avoids breakfast sausages at all costs.

“These are high in sodium (promoting higher blood pressure) and a rich source of saturated fats, which raise cholesterol readings,” Klodas said. “Plus, because we only have so much room in our stomachs, foods like breakfast sausages can displace other items that might be more health-promoting.”

Klodas noted that all processed meats, including sausages, ham and bacon, have been classified as carcinogens by the World Health Organization.

Neither the bacon nor the sausage in your breakfast sandwich is a good idea, especially first thing in the morning when your stomach is empty.

Margarine

If you’re still eating fake butter, it’s time to stop, because margarine is just flat-out bad for you.

“Margarine seems like a great idea in theory, but it turns out to be just as bad as butter,” said Dr. Harmony Reynolds, a cardiologist at NYU Langone Health. “A study found that with each tablespoon of margarine per day, people were 6% more likely to die over the median 16 years of the study. Olive oil is better, and each tablespoon of olive oil was associated with a 4% lower risk of death. With that in mind, I tell my patients to use olive oil whenever possible, even for cooking eggs, or toast. When nothing but the taste of butter will do, it’s still better to use mostly olive oil with a skinny pat of butter for flavor.”

Steak

Sorry, steak lovers, but this is another food you should probably avoid most of the time.

“I avoid really fatty red meat, like highly marbleized steak, because it’s very high in saturated fat,” Dr. Leonard Lilly, the chief of cardiology at Brigham and Women’s Faulkner Hospital, said. “Clinical studies have shown that saturated fat consumption is associated with increased risk of cardiovascular disease, cancer and diabetes.”

Lilly noted that most people can get away with eating small amounts of almost anything on rare occasions, so he’s guilty of the occasional steak.

Bacon

You were waiting for this one, right? Dr. Francoise Marvel, a cardiologist at Johns Hopkins University, said she typically avoids this salty, delicious breakfast delicacy.

“Bacon is an example of highly processed red meat that is high in saturated fat and increases the bad cholesterol — called low-density lipoprotein LDL — which is linked to increased risk of heart attack and stroke,” Marvel said. “The way bacon is processed is through ‘curing’ the pork, which usually involves adding salts, sugars and nitrates. The large amounts of salt (or sodium) used in this processing may increase blood pressure and fluid retention, causing the heart to work harder to pump blood through the body. Increased blood pressure, or hypertension, is a major risk factor for cardiovascular disease as well.”

Chemicals added to the meat, like nitrates, have been linked to cancer and other health problems, Marvel added.

“It should be noted there is a varying amount of processing and ingredients used by different bacon manufacturers,” Marvel said. “But overall, to lower the risk of cardiovascular disease and other health problems, limiting the intake of processed red meat like bacon is beneficial.”

Next time you order breakfast, Marvel suggests swapping two slices of bacon for two slices of avocado. Your heart will thank you.

Heaven on your taste buds, hell on your heart’s health.

Deep-fried chicken

Fried chicken may be a trendy menu item these days, but it isn’t great for your heart.

“The one food that I rarely eat is deep-fried chicken,” said Dr. Sanjay Maniar, a cardiologist based in Houston. “Regularly eating fried foods will increase your risk of heart disease and stroke by increasing the amount of saturated and trans fats in the body.”

These unhealthy fats can raise LDL (bad cholesterol) and lower HDL (good cholesterol) levels, which serve as the building blocks for fatty buildup (atherosclerosis) in the blood vessels of the body, Maniar said.

“You can get great flavor by adding fresh herbs and grilling or baking chicken rather than deep frying it,” Maniar said. “You’ll keep the taste, but save the calories.”

Doughnuts

Many doughnuts are fried in oils that contain trans fats, which makes them hard on your heart, according to Dr. Jayne Morgan, a cardiologist based in Atlanta.

“Trans fats raise cholesterol levels and blood sugar, contributing to Type 2 diabetes, heart disease and stroke,” Morgan said. “Trans fats are often ‘disguised’ on labels as partially hydrogenated oils, so read your labels and avoid them.”

Still, not all doughnuts are fried in oils that contain trans fats. Dunkin’, for example, fries its doughnuts in palm oil, which is free of trans-fat. Palm oil does contain saturated fat, which isn’t great for your heart when consumed in excess — so make sure you’re eating doughnuts in moderation.

Bologna

Maybe the last time you ate bologna was when you were in third grade, or maybe it’s still part of your diet. In any case, it’s probably best to skip it, according to Dr. James Udelson, chief of cardiology at Tufts Medical Center.

“In some ways, bologna is a symbol in that it incorporates many things that should generally be avoided, including highly processed meats, which are very high in salt content and associated with risk of cardiovascular disease down the line,” Udelson said. “It is important to note that the key to dietary heart health is following the American Heart Association’s recommended Mediterranean-style diet, which is high in vegetables, whole grains, fish and some lean meats, nuts and legumes.”

If you eat any of these foods once in a while, you’ll be fine. After all, who can pass up the occasional slice of bacon and a fresh doughnut? But do as these cardiologists suggest — avoid them when you can.

Deep-Sea Mining Could Yield a Nearly Limitless Supply of Rare Metals. Is It Worth the Cost?


New regulations for drilling into the seabed could come any day now, following two years of infighting between an international regulatory body, drilling companies, and scientists.

The deep ocean is a mysterious, pitch-black world populated by creatures specially adapted to the crushing pressure, the dark, and the near-freezing temperatures. Though the ocean covers 70 percent of Earth’s surface, less than one percent of the deep ocean—which begins at about 200 meters down, where light starts to dwindle—has been mapped. Frequently, when scientists explore there, they discover species we never even knew existed. But they also have discovered that this ecosystem is full of metals like manganese, nickel, cobalt, and lithium that humans use in everything from phones and electric cars to wind turbines.

Private companies and a number of countries are eager to mine these metals, though the process is generally very destructive to ecosystems. After scientists simply raked the seabed in a 1972 experiment to test the environmental impacts, the area they tested never recovered.

Yet, companies may soon get the green light to conduct deep-sea mining anyway.

The Fox in the Henhouse


Gerard Barron, chairman and CEO of The Metals Company, holds a nodule brought up from the seafloor, which he plans for his company to mine in the Clarion Clipperton Zone of the Pacific Ocean, June 2021.

The Kingston, Jamaica-based International Seabed Authority (ISA), an autonomous organization under the auspices of the United Nations, is in charge of that decision. Created in 1994, the ISA is charged with protecting the marine environment in international waters “for all mankind.” It governs about 54 percent of the ocean—the ocean that is essential to planetary health, because it generates 50 percent of the oxygen we use, absorbs 25 percent of carbon dioxide emissions, and captures 90 percent of excess heat humans create. The ISA is also in charge of issuing contracts to extract resources from international waters, and creating regulations for such extraction, which many conservationists liken to putting the fox in charge of the henhouse. It’s further charged with guarding the rights of developing nations, so that if resources are extracted, wealthy nations do not unfairly benefit from what is considered a common human heritage.

Some have accused Michael Lodge, the secretary general in charge of the ISA, of being overzealous when it comes to mining. A 2022 New York Times investigation included documents showing that since 2007, the ISA has been sharing key information with a Canadian mining company, now known as The Metals Company; it also set aside the most promising tracts for that company, including the mineral-rich Clarion-Clipperton Zone (CCZ) in the Pacific. This prompted some agency employees to quit in protest. While the ISA’s role was to protect developing nations, The Metals Company, which anticipates garnering $85 billion from the area, plans to pay only 10 percent to the developing countries sponsoring it.

That New York Times investigation was not the only article about Lodge’s apparent advocacy for mining. There have been others in publications including The Los Angeles Times and The Guardian. And a recent New York Times story reports that diplomats from ISA member nations called Lodge out for clashing with council members who wanted to slow the implementation of mining.

The ISA is governed by a 36-member rotating council that represents 169 member states, including 167 countries and the European Union. Some of these members say Lodge has stepped outside his administrative role in pushing for deep-sea mining. Popular Mechanics was unable to reach Lodge for comment.


Gerard Barron, chairman and CEO of The Metals Company, hopes that his company will be able to mine the seafloor for nickel, cobalt, and manganese in the Pacific Ocean.

Some council members—including Russia, China, South Korea, India, Britain, Poland, and Brazil—have applied and been approved for contracts to explore deep-sea mining; the ISA does not yet have authority to issue contracts to exploit the resources. (Brazil, for its part, just called for a ten-year moratorium on deep-sea mining.) But several other countries—including France, Germany, Spain, New Zealand, and Costa Rica—want a moratorium on deep-sea mining until its impacts are better understood. They are joined by the United Nations Environment Program, the World Bank, and nearly 800 scientists. Meanwhile, several companies, including Google, Volvo, and Samsung, have signed a pledge not to use materials extracted from the deep sea until it is “clearly demonstrated that such activities can be managed in a way that ensures the effective protection of the marine environment.”

Thanks to a rule written into the United Nations Convention on the Law of the Sea (UNCLOS), a set of regulations for deep-sea mining may be forced into place before the science is clear. This rule states that if a government expresses an interest in mining in international waters, the ISA has 24 months to create a regulatory framework for that activity. In 2021, the Pacific nation of Nauru—one of The Metals Company’s sponsors and a member of the ISA’s council—triggered that rule. July marks 24 months.

The Deep Sea: Mining Resource or Crucial Ecosystem?


These black polymetallic sea nodules—nickel, manganese, and cobalt-rich mineral deposits—are balls that form naturally deep under the sea.

Proponents of deep-sea mining, such as mining companies and countries interested in cashing in on rare-earth and other minerals, contend that the transition to renewables is absolutely dependent on these minerals. Rare-earth metals are used in electric car batteries, cell phones, and wind farms, and are considered essential in transitioning from fossil-fuel vehicles to electric ones. Failing to mine these metals, they say, will lead to shortages and slow the renewable transition.

They argue that tons of these metals can simply be sucked up from the abyssal plains, like the CCZ, where they now lie in the form of polymetallic nodules, which look like loose rocks resting on the bottom of the ocean. According to the United States Geological Survey, a conservative estimate of polymetallic nodules in the Clarion-Clipperton Zone manganese nodule field, the largest globally, is 21.1 billion dry tons—that is more than the known tonnage of many critical metals in reserves on land.


Pacific Ocean, showing locations of Clarion-Clipperton Zone (CCZ), the Mariana Arc, Lau Basin, and the Cook Islands.

Other metals that are expected to be mined are in cobalt-rich ferromanganese crusts and near thermal vents. About 7.5 billion dry tons of cobalt-rich ferromanganese crusts are estimated in the Pacific Ocean Prime Crust Zone. Theoretically, this could stem some of the human rights abuses commonly associated with cobalt mining.

Gerard Barron, CEO of The Metals Company, suggests that deep-sea mining will allow terrestrial mining to diminish.

“There needs to be an acceptance of the fact that we need more metals,” he tells Reuters. “Why does it make sense to destroy rainforests to mine nickel, but not extract the metal from the bottom of the ocean at a part of the planet with the least life on it?”


Catherine Weller, director of global policy for Fauna & Flora International, tells Popular Mechanics that Barron’s argument is fallacious. (The organization, founded in 1903, lists Sir David Attenborough as a vice president.) It published a widely circulated report in 2020 about the environmental damage of deep-sea mining. There’s no reason to believe that terrestrial mining will stop or even slow just because deep-sea mining commences, Weller says. And as for no life being where the mining is intended to occur, Fauna & Flora’s marine director, Sophie Benbow, says that’s far from true; the deep sea is thriving with biodiversity. In a single expedition to the Clarion-Clipperton Zone in May 2023, more than 5,000 new species were discovered, and most of them were endemic to the region and not found anywhere else on the planet.

The metallic nodules on the abyssal plains that miners want to hoover up form over millions of years in a remote part of the planet where fine sediment filters down at a rate of about a centimeter every millennium. It is home to untold thousands of species.


A Psychropotes longicauda sea cucumber is seen as it is transferred into an ethanol-filled specimen jar for scientific preservation, in a laboratory within the Natural History Museum on May 24, 2023 in London, England. Collected from the seabed using a remotely operated system, researchers have managed to collect thousands of samples of deep-sea arthropods, many of which are being seen for the first time. A new study has highlighted the extent of biodiversity in the Clarion-Clipperton Zone, the world’s largest mineral exploration region. The research has found that over 90 percent of species in one of the most likely future mining sites are currently undescribed by science, and are threatened by deep-sea mining, due to a global surge in demand for metals such as cobalt and nickel.

The ferromanganese crusts host corals, sponges, tuna, sharks, dolphins, and sea turtles. Hydrothermal vents, where ore is also found, are fissures in the seafloor from which mineral-rich water, superheated by magma beneath the crust, rises into the water column. They give us clues to the origins of life and harbor their own unique and rare ecosystems, which function largely without photosynthesis.

“Mining in those areas, we would wipe out huge numbers of species,” Benbow says, many of which are “quite funky.” “They’re so uniquely adapted to living in near-freezing temperatures, almost complete darkness, massive pressures. And there are some pretty cool creatures.” She lists the Dumbo octopus, species of snailfish with holes in their skulls to deal with the pressure, and the sea pangolin. “The species in the deep, they are unique and beautiful in their own right … each of them have their own role in that ecosystem.”

Studying these species, their adaptation strategies, and their ecosystem could also lead to discoveries in areas such as medicine and technology, Weller argues. Mining in the area, by contrast, is widely believed to mean the complete and irreversible destruction of that ecosystem, with far-reaching consequences. Disturbing an ecosystem that has lain undisturbed on the seafloor for millennia, Benbow notes, could also exacerbate climate change.

“At the bottom of the sea, there are sediments [that] have been found to be one of the most expansive and critical carbon reservoirs on the planet,” she says. Deep-sea mining would disrupt those sediments, even if a company were just “vacuuming” nodules off the seafloor. And any disturbance would spread through the water column, potentially damaging areas kilometers away.

Is Deep-Sea Mining Worth It?


The sun sets on the mining vessel Hidden Gem after a demonstration against deep-sea mining, February 8, 2022, the Netherlands.

There is also a question of whether deep-sea mining would, in fact, solve the problems it promises to solve. An article in Nautilus quoted investor and deep-sea explorer Victor Vescovo as noting that the deep sea is an extremely hostile environment for anything not built to survive there, machinery included. Between the crushing pressure, the salt water, the cold, and the darkness, equipment operating at those depths must be extraordinarily resilient. He argues that the returns from mining such an inhospitable area would not justify the investment required.

Moreover, the permanent magnets and batteries built from such minerals—while hailed as a solution to the CO2 problem—have created problems of their own. One study showed that, between 2010 and 2020, the use of permanent magnets cumulatively resulted in 32 billion tons of CO2-equivalent greenhouse gas emissions globally. (CO2-equivalent expresses the combined warming effect of all greenhouse gases as the amount of CO2 that would cause the same warming.)
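
For readers unfamiliar with the unit, a CO2-equivalent figure folds every greenhouse gas into a single number by weighting each gas by its global-warming potential (GWP). The sketch below is a minimal illustration of that arithmetic; the GWP weights are rounded IPCC values, and the emission quantities are invented for the example rather than taken from the study above.

```python
# Illustrative only: converting a mix of greenhouse gases into CO2-equivalent.
# GWP-100 weights are rounded IPCC AR5 values; the emission quantities below
# are invented for the example and are not from the permanent-magnet study.

GWP_100 = {
    "CO2": 1,    # carbon dioxide is the reference gas
    "CH4": 28,   # methane traps roughly 28x as much heat as CO2 over 100 years
    "N2O": 265,  # nitrous oxide, roughly 265x
}

def co2_equivalent(emissions_tonnes: dict) -> float:
    """Sum emissions of several gases (in tonnes) into tonnes of CO2-equivalent."""
    return sum(GWP_100[gas] * tonnes for gas, tonnes in emissions_tonnes.items())

# Hypothetical annual emissions from some industrial process, in tonnes.
example = {"CO2": 1_000_000, "CH4": 5_000, "N2O": 200}
print(f"{co2_equivalent(example):,.0f} tonnes CO2-eq")  # 1,193,000 tonnes CO2-eq
```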

And Fauna & Flora’s Weller notes that emerging technologies may soon make rare-earth mining unnecessary. Experiments are underway to replace the rare-earth metals we currently use with materials that can be created in a lab. And more and more countries are turning to recycling rare-earth materials. Plus, some of the assumptions that are intended to justify deep-sea mining, such as the argument that the renewable transition requires replacing all fossil fueled cars with electric ones, negates other, possibly better solutions. This includes increasing public transit and reducing the number of cars on the road, since most of them spend 90 percent of their time parked, anyway.


Andrew K. Sweetman, professor and research leader at the Lyell Centre for Earth, Marine Science, and Technology at Heriot-Watt University in Edinburgh, Scotland, holds a deep-water fish called a rattail, brought up as part of research on the effects mining would have on the environment in the Clarion-Clipperton Zone of the Pacific Ocean, June 2021.

Though the ISA was scheduled to start taking applications for deep-sea mining in July, many of its council members believe such decisions are premature. Conservationists hope the ISA will find a different solution to help the renewable transition.

“I think it would be pretty ridiculous,” Weller says, “to be in the process of solving one crisis by creating another huge crisis.”

Can ‘Conversations’ with Whales Teach Us to Talk with Aliens?


A controversial 20-minute interaction with a humpback whale might help scientists communicate with extraterrestrials and nonhuman Earthlings alike

Humpback whale underwater, looking towards camera while rolling towards its side

For some scientists who are eager to talk with aliens, the best avenue isn’t any telescope pointed to the heavens but rather a boat slipping through the glassy waters of Alaska’s Frederick Sound. This area is a seasonal hotspot for humpback whales, whose eerie and enchanting submarine songs may serve as proxies for any alien transmissions that practitioners of SETI (the search for extraterrestrial intelligence) may eventually receive. In a recent paper published in the journal Aquatic Biology, a group of researchers reported a humpback encounter at Frederick Sound that they say is a compelling case of interspecies communication, with lessons for parsing future messages from the stars. But critics aren’t so sure.

The debate highlights a lingering tension between SETI’s lofty aspirations and the down-to-earth limits of our knowledge: How can we hope to find intelligent life out there—let alone chat with it—when we struggle to perceive and communicate with intelligent nonhuman creatures right here on our home world?

Laurance Doyle, an astronomer at the SETI Institute and a co-author of the study, says such questions are exactly why he and his colleagues were listening to whales in the first place. “We’re trying to get on the outside of nonhuman intelligence,” he says. “We’re trying to understand it so that when we get an extraterrestrial signal, we’ll know what to do.”

The encounter occurred on August 19, 2021, around the 18-meter research vessel Glacier Seal. It began when Doyle and others onboard spotted a nearby humpback, which was later identified as a female nicknamed “Twain.” Attempting to catch Twain’s attention, the researchers used an underwater speaker to play a series of humpback vocalizations they had recorded in Frederick Sound the previous afternoon. After three broadcasts reverberated through the depths, the creature slowly approached the boat. With every vocalization that echoed from the speaker, Twain responded in kind—often with a nearly identical call. For 20 minutes, the whale circled the Glacier Seal, engaging in the call-and-response exchange and making a total of 36 calls before swimming away—and leaving Doyle and his shipmates with the impression that they had just witnessed a potential milestone in human-cetacean communications.

One World, Many Minds

Notions of similar milestones trace back to SETI’s very roots. At a historic meeting in 1961, a select group of scientists codified key tenets of the then-nascent field. One of the invited attendees was John Lilly, a neuroscientist who claimed to have talked with captive bottlenose dolphins at a specialized research center that he’d established in the U.S. Virgin Islands. Impressed by Lilly’s research and his contributions to the meeting, the other participants ultimately called themselves the Order of the Dolphin. Although Lilly’s controversial practices eventually pushed him far beyond the bounds of orthodox science, reputable researchers still acknowledge the importance of his early results, which laid the groundwork for more rigorous subsequent attempts at cetacean communication.

But then, as now, some fundamental stumbling blocks exist: How exactly should we quantify concepts like “communication” and “intelligence” across species? And can we do so in a way that minimizes our decidedly anthropocentric biases? For Doyle and the study’s lead author, Brenda McCowan, an animal behaviorist at the University of California, Davis, one possible answer lies within the presumptive universal language of mathematics, filtered through analytic techniques from a subdiscipline known as information theory. “We have millions of communication systems on Earth—plants, animals, and so on,” Doyle says. “But how communicative are they? The next question requires information theory.”

Birthed in the late 1940s from the work of mathematician and computer scientist Claude Shannon, information theory underpins all modern digital communications. For instance, its methods for quantifying complexity and syntax are crucial for finding signals obscured by noise, as well as for making—and breaking—cryptographic codes. Presumably, then, information theory should be useful for deciphering and cataloging the information that is carried within not only whale songs and dolphin whistles but the signaling behaviors of many other organisms as well. This could, the thinking goes, eventually lead to transformative results—such as a hierarchical library of sorts, in which the diverse communication systems of Earth’s biosphere are classified based on their complexity, whether found in humans, whales, plants, microbes or any other living thing. That, in turn, could help form a framework for understanding different forms of intelligence, both on and beyond our isolated planet.
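
As a toy illustration of the kind of quantity information theory supplies, the sketch below computes the Shannon entropy of two hypothetical call sequences: a repertoire that uses its “vocabulary” more evenly carries more bits of information per call. The sequences are invented for the example and are not data from the humpback study.

```python
# Minimal sketch: Shannon entropy as a measure of how much information a
# sequence of discrete calls carries per symbol. The call sequences below are
# invented for illustration; they are not recordings from the study.
import math
from collections import Counter

def shannon_entropy(sequence) -> float:
    """Entropy in bits per symbol: H = sum(p_i * log2(1 / p_i))."""
    counts = Counter(sequence)
    n = len(sequence)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

monotone = ["whup"] * 20                          # one call type, endlessly repeated
varied = ["whup", "growl", "moan", "shriek"] * 5  # four call types, used evenly

print(shannon_entropy(monotone))  # 0.0 bits: no uncertainty, no information per call
print(shannon_entropy(varied))    # 2.0 bits: log2(4), the maximum for four equally likely calls
```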

This objective and similarly audacious goals remain very much a work in progress. For now, researchers such as Doyle and McCowan are focused on simpler applications. In their encounter with Twain, they looked for a communication pattern called latency matching, deliberately shifting the timing between their broadcasts in hopes that the whale would match the temporal delays. That the whale did exactly this, they say, was in large part what made the interaction so compelling and noteworthy. Twain’s latency matching, they speculate, may have been an attempt to engage in a discussion.
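
To picture what latency matching means in numbers: if the researchers vary the gaps between their broadcast calls, a whale that is matching latencies will reply after delays that track those gaps. The sketch below checks for that correspondence on invented timings; it is not the team’s analysis, and none of the numbers come from the Twain encounter.

```python
# Toy check for latency matching: do the whale's response delays track the
# deliberately varied intervals between broadcast calls? These numbers are
# invented for illustration; they are not measurements from the encounter.
from statistics import correlation  # Pearson correlation, available in Python 3.10+

broadcast_gaps = [10.0, 25.0, 15.0, 40.0, 20.0, 30.0]   # seconds between played calls
response_delays = [11.5, 24.0, 16.5, 38.0, 21.0, 29.5]  # seconds until the whale replied

r = correlation(broadcast_gaps, response_delays)
print(f"Pearson r = {r:.2f}")  # values near 1.0 would be consistent with latency matching
```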

A Promising First Step

Yet many researchers—and even the study’s authors themselves—are hesitant to say the exchange can be considered anything close to a “conversation.”

Peter Tyack, a marine biologist at the University of St. Andrews in Scotland, who specializes in cetacean research and was not part of the study, is one such skeptic. Based on other latency-matching work dating back to the 1980s, he says, scientists have long known that whales—and many animals, for that matter—adjust the timing of their calls to avoid overlaps with another caller. So mere expediency, rather than curiosity, can adequately explain Twain’s behavior. “I do not think that similar results from the authors’ playbacks of humpback sound supports the claim that the whale was intentionally trying to create a cross-species interaction,” he says.

Such criticisms—which assume that the whale didn’t realize she was communicating with humans or at least couldn’t discern that the sound wasn’t emanating directly from another nearby whale—strike the study authors as special pleading. It’s far more likely, they argue, that Twain knew the boat—if not its human passengers—was the proximate source of what she heard. “I think initially she could’ve thought it was her own sound or another whale, but she had these periods of time where she could have gone underwater to inspect us,” says Fred Sharpe, a co-author of the study and a marine biologist at the Alaska Whale Foundation. Twain, he notes, stayed within 100 meters of the boat for the duration of the encounter, presumably reducing the chance of mistaken identity. And when the team ceased its broadcast because of protocol-mandated limits on their scientific study, he adds, Twain lingered for a time and kept calling—as if awaiting another response.

Outside of the debate about latency matching, other researchers who were not involved with the study are quick to suggest that its appeal may arise more from the charisma of its cetacean subjects than from the notability of its science. After all, other animals—Alex the parrot, Koko the gorilla and many less famous creatures—have shown evidence of symbolic communication and abstract thought in various investigations across the decades.

“This type of playback has been done many times with birds and frogs and in a much more rigorous context,” notes behavioral ecologist and animal acoustic communication specialist Mickey Pardo of Cornell University, who wasn’t involved in the new study. “A lot of other animals are highly intelligent but don’t get as much credit as whales, and it hasn’t been considered a ‘conversation’ in the past when we look at studies with other animals.”

Pardo suggests that the team’s encounter with Twain is best seen as a promising first step toward more ambitious future studies—ones in which researchers could attempt more interactive playback, test more honed hypotheses and incorporate participation from a larger number of whales.

“This is very preliminary, and we were very limited in the ways in which we could modify the signal,” McCowan acknowledges. “It was such a rare and opportunistic circumstance with a being that is incredibly difficult to study, that we thought it should be shared. The idea is to go back and replicate something like this,” she says.

Regardless of any extraterrestrial implications, there is hope that the study will contribute to conservation initiatives and the ways in which we engage with animals. Doyle notes that while the “head” of the research is its connection with SETI, its “heart” lies in reshaping our relationship with life here on Earth, to identify and—if possible—reduce our anthropocentric prejudices. “Maybe we need to think about animals differently on this planet. They themselves can be quite alien to us in many ways,” McCowan says. “There’s an analogy to be made here of equity, inclusion and diversity of cultures both in our own species and across species—which is something we need to preach better to.”

Earth’s ocean and the whales that send their songs reverberating through its abyss offer analogies, too—to the vast depths of outer space and the possibilities of beings somehow sending messages across light-years to commune with one another. In both cases, perhaps our struggle to find anyone to talk to is mostly a function of our ignorance, our failure to look and listen in the proper ways.

“If we’re not picking up on animal intelligence, [their] overtures, or even deciphering their communication systems, no wonder we gaze out on a silent universe!” Sharpe says. “We could be awash with alien signals, and we just don’t have the perceptual bandwidth yet or the ability to recognize and interpret them. A humpback whale illustrates that really well.”

Thousands of U.S. Cities Could Become Virtual Ghost Towns by 2100


These projected findings about depopulation in U.S. cities are shaped by a multitude of factors, including the decline of industry, lower birth rates and the impacts of climate change

An empty city street in downtown Jackson, Mississippi.
Jackson, Mississippi, United States.

The urban U.S. could look very different in the year 2100, in part because thousands of cities might be rendered virtual ghost towns. According to findings published in Nature Cities, the populations of some 15,000 cities around the country could dwindle to mere fractions of what they are now. The losses are projected to affect cities everywhere in the U.S. except Hawaii and Washington, D.C.

“The way we’re planning now is all based on growth, but close to half the cities in the U.S. are depopulating,” says senior author Sybil Derrible, an urban engineer at the University of Illinois Chicago. “The takeaway is that we need to shift away from growth-based planning, which is going to require an enormous cultural shift in the planning and engineering of cities.”

Derrible and his colleagues were originally commissioned by the Illinois Department of Transportation to conduct an analysis of how Illinois’s cities are projected to change over time and what the transportation challenges will be for places that are depopulating. As they got deeper into the research, though, they realized that such predictions would be useful to know for cities across the entire U.S.—and not just for major ones, such as New York City, Chicago and Los Angeles. “Most studies have focused on big cities, but that doesn’t give us an estimation of the scale of the problem,” says lead study author Uttara Sutradhar, a doctoral candidate in civil engineering at the University of Illinois Chicago.

The authors analyzed data collected from 2000 to 2020 by the U.S. Census and the American Community Survey, an annual demographics survey conducted by the U.S. Census Bureau. This allowed them to identify current population trends in more than 24,000 cities and to model projections of future trends for nearly 32,000. They applied the projected trends to a commonly used set of five possible future climate scenarios called the Shared Socioeconomic Pathways. These scenarios model how demographics, society and economics could change by 2100, depending on how much global warming the world experiences.
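
The paper’s model is considerably more involved, but the basic shape of such a projection can be pictured as fitting a per-city growth rate to the 2000–2020 observations and compounding it out to 2100, with the rate nudged by whichever scenario is assumed. The sketch below is only that simplified picture, not the authors’ method; the city figures and scenario adjustments are invented.

```python
# Simplified illustration of extrapolating a city's population to 2100 from a
# fitted 2000-2020 trend, adjusted by an assumed scenario factor. This is NOT
# the Nature Cities methodology; all numbers are invented for the example.

def annual_growth_rate(pop_2000: float, pop_2020: float) -> float:
    """Compound annual growth rate implied by two census observations."""
    return (pop_2020 / pop_2000) ** (1 / 20) - 1

def project_to_2100(pop_2020: float, rate: float, scenario_adjust: float = 0.0) -> float:
    """Compound the fitted rate (plus a scenario tweak) over the 80 years to 2100."""
    return pop_2020 * (1 + rate + scenario_adjust) ** 80

# Hypothetical shrinking city: 50,000 residents in 2000, 46,000 in 2020.
rate = annual_growth_rate(50_000, 46_000)
print(f"fitted rate: {rate:+.3%} per year")
for label, adj in {"middle-of-the-road scenario": 0.0, "pessimistic scenario": -0.002}.items():
    print(f"{label}: ~{project_to_2100(46_000, rate, adj):,.0f} residents in 2100")
```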

The authors’ resulting projections indicated that around half of cities in the U.S., including Cleveland, Ohio, Buffalo, N.Y., and Pittsburgh, Pa., are likely to experience depopulation of 12 to 23 percent by 2100. Some of those cities, including Louisville, Ky., New Haven, Conn., and Syracuse, N.Y., are not currently showing declines but are likely to in the future, according to the findings. “You might see a lot of growth in Texas right now, but if you had looked at Michigan 100 years ago, you probably would have thought that Detroit would be the largest city in the U.S. now,” Derrible says.

Regionally, the Northeast and Midwest will most likely be the most heavily affected by depopulation, the authors found. And on a state level, Vermont and West Virginia will be the hardest hit, with more than 80 percent of cities in each of these two states shrinking. Illinois, Mississippi, Kansas, New Hampshire and Michigan could also see about three quarters of their cities decline in population.

While the authors’ analysis of current trends found that 43 percent of the more than 24,000 cities are losing residents, around 40 percent are now growing, including major cities such as New York City, Chicago, Phoenix and Houston. In general, though, the places that are projected to most likely gain population by 2100 tend to be located in the South or West.

The new analysis does not explore the factors that are driving the projected trends. But Sutradhar says there is probably a complex mix of variables at play that differ by location, including the rising cost of homes in some places, the decline of industry, lower birth rates, different levels of state taxes and the impacts of climate change.

Justin Hollander, an urban planning scholar at Tufts University, who was not involved in the research, says that the new study’s methods were sound and that the findings are original. “I have never seen a national study that looked so far into the future,” he says. He warns, however, that making specific projections this far in advance is “pretty reckless,” given the amount of uncertainty the future holds.

He appreciates, though, that the paper calls attention to future depopulation in general, which he agrees the data do support. “These are not isolated problems to the Detroits of the world,” he says. “Depopulation is everywhere, and the paper is right to demand that cities face this fact and begin to honestly prepare for this possible future.”

The authors hope that their paper serves as a wake-up call to policy makers to begin moving away from growth-based planning and to start finding place-specific solutions for cities that are likely to depopulate in the years ahead. “We should see this not as a problem but as an opportunity to rethink the way we do things,” Derrible says. “It’s an opportunity to be more creative.”