‘Time Is Running Out’: New Open Letter Calls for Ban on Superintelligent AI Development



An open letter demanding a ban on the development of superintelligent AI was released on Wednesday, endorsed by more than 700 prominent figures—including Nobel laureates, leading AI researchers, faith leaders, policymakers, and celebrities.

Signatories include five Nobel Prize winners, two pioneers often dubbed the “Godfathers of AI,” Apple co-founder Steve Wozniak, former Trump strategist Steve Bannon, Vatican AI ethics adviser Paolo Benanti, and Prince Harry and Meghan, the Duke and Duchess of Sussex.

The letter, brief enough to quote in full, states:


> “We call for a prohibition on the development of superintelligence, not lifted before there is
> (1) broad scientific consensus that it will be done safely and controllably, and
> (2) strong public buy-in.”


The initiative was organized by the Future of Life Institute (FLI), the same nonprofit behind a widely shared 2023 open letter that urged a six-month pause on training advanced AI systems—a call that ultimately went unheeded by major tech firms.


This new campaign zeroes in specifically on superintelligence, which FLI defines as AI capable of outperforming humans across all meaningful tasks. FLI’s executive director, Anthony Aguirre, told *TIME* that such systems could emerge within just one to two years. “Time is running out,” he warned. “The only thing likely to stop AI companies from racing toward superintelligence is widespread societal recognition that this isn’t what we actually want.”


Supporting data from a new poll shows 64% of Americans believe superintelligence should not be developed until it is “provably safe and controllable,” while only 5% say it should be pursued as quickly as possible. “It’s a small number of very wealthy companies building these systems,” Aguirre noted, “and a very, very large number of people who would prefer a different path.”


The letter also drew signatures from actors Joseph Gordon-Levitt and Stephen Fry, musician will.i.am, historian Yuval Noah Harari, and former Obama national security adviser Susan Rice. Notably, Leo Gao—a technical staff member at OpenAI, whose CEO Sam Altman has described the company as a “superintelligence research company”—also signed.


Aguirre anticipates more endorsements as the campaign gains momentum. “The beliefs are already there,” he said. “What we lack is people feeling empowered to voice them openly.”


In a statement accompanying his signature, Prince Harry emphasized: “The future of AI should serve humanity, not replace it. I believe the true test of progress will be not how fast we move, but how wisely we steer. There is no second chance.”


Joseph Gordon-Levitt added: “Yeah, we want specific AI tools that can help cure diseases, strengthen national security, etc. But does AI also need to imitate humans, groom our kids, turn us all into slop junkies, and make zillions of dollars serving ads? Most people don’t want that. But that’s what these big tech companies mean when they talk about building ‘Superintelligence.’”


The letter’s concise wording was intentional, designed to unite a wide and diverse coalition. Yet Aguirre stresses that real change will require regulation. “Many of the harms stem from the perverse incentive structures companies face today,” he explained, pointing to the intense U.S.–China race to achieve superintelligence first.


“Whether it arrives soon or takes longer,” Aguirre cautioned, “once superintelligence exists, the machines will be in charge. We have no idea if that will go well for humanity—but it’s not an experiment we should rush into blindly.”
