Ilya Sutskever Has a New Plan for Safe Superintelligence

For the past several months, the question “Where’s Ilya?” has become a common refrain within the world of artificial intelligence. Ilya Sutskever, the famed researcher who co-founded OpenAI, took part in the 2023 board ouster of Sam Altman as chief executive officer, before changing course and helping engineer Altman’s return. From that point on, Sutskever went quiet and left his future at OpenAI shrouded in uncertainty. Then, in mid-May, Sutskever announced his departure, saying only that he’d disclose his next project “in due time.”

Now Sutskever is introducing that project, a venture called Safe Superintelligence Inc. that aims to create a safe, powerful artificial intelligence system within a pure research organization with no near-term intention of selling AI products or services. In other words, he’s attempting to continue his work without many of the distractions that rivals such as OpenAI, Google, and Anthropic face. “This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever says in an exclusive interview about his plans. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”

Sutskever declines to name Safe Superintelligence’s financial backers or disclose how much he’s raised.

Altman and Sutskever at Tel Aviv University in 2023. Photographer: Amir Cohen/Reuters

As the company’s name emphasizes, Sutskever has made AI safety the top priority. The trick, of course, is identifying what exactly makes one AI system safer than another or, really, safe at all. Sutskever is vague about this at the moment, though he does suggest that the new venture will try to achieve safety with engineering breakthroughs baked into the AI system, as opposed to relying on guardrails applied to the technology on the fly. “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” he says.

Sutskever has two co-founders. One is investor and former Apple Inc. AI lead Daniel Gross, who’s gained attention by backing several high-profile AI startups, including Keen Technologies. (Started by John Carmack, the famed coder, video game pioneer, and recent virtual-reality guru at Meta Platforms Inc., Keen is trying to develop an artificial general intelligence based on unconventional programming techniques.) The other co-founder is Daniel Levy, who built a strong reputation for training large AI models while working alongside Sutskever at OpenAI. “I think the time is right to have such a project,” Levy says. “My vision is exactly the same as Ilya’s: a small, lean, cracked team with everyone focused on the single objective of a safe superintelligence.” Safe Superintelligence will have offices in Palo Alto, California, and Tel Aviv. Both Sutskever and Gross grew up in Israel.

Given Sutskever’s near-mythical place within the AI industry, the uncertainty around his status has been a Silicon Valley fixation for months. As a university researcher and later a scientist at Google, he played a major role in developing several key AI advances. His involvement in the early days of OpenAI helped it attract the top talent that’s been essential to its success. Sutskever became known as its major advocate for building ever-larger models—a strategy that helped the startup surge past Google and proved crucial to the rise of ChatGPT.

The fascination with Sutskever’s plans only grew after the drama at OpenAI late last year. He still declines to say much about it. Asked about his relationship with Altman, Sutskever says only that “it’s good,” and that Altman knows about the new venture “in broad strokes.” Of his experience over the past several months he adds, “It’s very strange. It’s very strange. I don’t know if I can give a much better answer than that.”

Safe Superintelligence is in some ways a throwback to the original OpenAI concept: a research organization trying to build an artificial general intelligence that could equal and perhaps surpass humans on many tasks. But OpenAI’s structure evolved as the need to raise vast sums of money for computing power became evident. This led to the company’s tight partnership with Microsoft Corp. and contributed to its push to make revenue-generating products. All of the major AI players face a version of the same conundrum, needing to pay for ever-expanding computational demands as AI models continue to grow in size at exponential rates.

These economic realities make Safe Superintelligence a gamble for investors, who would be betting that Sutskever and his team will hit breakthroughs that give the company an edge over rivals with larger teams and significant head starts. Those investors will be putting down money without the hope of profitable hit products along the way. And it’s not at all clear that what Safe Superintelligence hopes to build is even possible. With “superintelligence,” the company is using AI industry lingo for a system that would be in another league from the human-level AIs most of the largest tech players are pursuing. There’s no consensus in the industry about whether such an intelligence is achievable or how a company would go about building one.

That said, Safe Superintelligence will likely have little trouble raising money, given the pedigree of its founding team and the intense interest in the field. “Out of all the problems we face, raising capital is not going to be one of them,” says Gross.

Researchers and intellectuals have contemplated how to make AI systems safer for decades, but deep engineering around these problems has been in short supply. The current state of the art is to use both humans and AI to steer the software in a direction aligned with humanity’s best interests. Exactly how one would stop an AI system from running amok remains a largely philosophical exercise.

Sutskever says that he’s spent years contemplating the safety problems and that he already has a few approaches in mind. But Safe Superintelligence isn’t yet discussing specifics. “At the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale,” Sutskever says. “After this, we can say we would like it to be a force for good. We would like to be operating on top of some key values. Some of the values we were thinking about are maybe the values that have been so successful in the past few hundred years that underpin liberal democracies, like liberty, democracy, freedom.”

Sutskever says that the large language models that have dominated AI will play an important role within Safe Superintelligence but that it’s aiming for something far more powerful. With current systems, he says, “you talk to it, you have a conversation, and you’re done.” The system he wants to pursue would be more general-purpose and expansive in its abilities. “You’re talking about a giant super data center that’s autonomously developing technology. That’s crazy, right? It’s the safety of that that we want to contribute to.”
