OpenAI Dissolves High-Profile Safety Team After Chief Scientist Sutskever’s Exit

OpenAI has effectively dissolved a team focused on ensuring the safety of possible future ultra-capable artificial intelligence systems, following the departure of the group’s two leaders, including OpenAI co-founder and chief scientist, Ilya Sutskever.

Rather than maintain the so-called superalignment team as a standalone entity, OpenAI is now integrating the group more deeply across its research efforts to help the company achieve its safety goals, the company told Bloomberg News. The team was formed less than a year ago under the leadership of Sutskever and Jan Leike, another OpenAI veteran.

The decision to rethink the team comes as a string of recent departures from OpenAI revives questions about the company’s approach to balancing speed versus safety in developing its AI products. Sutskever, a widely respected researcher, announced Tuesday that he was leaving OpenAI after having previously clashed with Chief Executive Officer Sam Altman over how rapidly to develop artificial intelligence.

Leike revealed his departure shortly after with a terse post on social media. “I resigned,” he said. For Leike, Sutskever’s exit was the last straw following disagreements with the company, according to a person familiar with the situation who asked not to be identified in order to discuss private conversations.

In a statement on Friday, Leike said the superalignment team had been fighting for resources. “Over the past few months my team has been sailing against the wind,” Leike wrote on X. “Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”

Hours later, Altman responded to Leike’s post. “He’s right we have a lot more to do,” Altman wrote on X. “We are committed to doing it.”

Other members of the superalignment team have also left the company in recent months. Leopold Aschenbrenner and Pavel Izmailov were let go by OpenAI. The Information earlier reported their departures. Izmailov had been moved off the team prior to his exit, according to a person familiar with the matter. Aschenbrenner and Izmailov did not respond to requests for comment.

John Schulman, an OpenAI co-founder whose research centers on large language models, will be the scientific lead for OpenAI’s alignment work going forward, the company said. Separately, OpenAI said in a blog post that it named Research Director Jakub Pachocki to take over Sutskever’s role as chief scientist.

“I am very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone,” Altman said in a statement Tuesday about Pachocki’s appointment. AGI, or artificial general intelligence, refers to AI that can perform as well or better than humans on most tasks. AGI doesn’t yet exist, but creating it is part of the company’s mission.

OpenAI also has employees involved in AI-safety-related work on teams across the company, as well as individual teams focused on safety. One, a preparedness team, launched last October and focuses on analyzing and trying to ward off potential “catastrophic risks” of AI systems.

The superalignment team was meant to head off the longest-term threats. OpenAI announced the formation of the team last July, saying it would focus on how to control and ensure the safety of future artificial intelligence software that is smarter than humans — something the company has long stated as a technological goal. In the announcement, OpenAI said it would put 20% of its computing power at that time toward the team’s work.

In November, Sutskever was one of several OpenAI board members who moved to fire Altman, a decision that touched off a whirlwind five days at the company. OpenAI President Greg Brockman quit in protest, investors revolted and within days, nearly all of the startup’s roughly 770 employees signed a letter threatening to quit unless Altman was brought back. In a remarkable reversal, Sutskever also signed the letter and said he regretted his participation in Altman’s ouster. Soon after, Altman was reinstated.

In the months following Altman’s exit and return, Sutskever largely disappeared from public view, sparking speculation about his continued role at the company. Sutskever also stopped working from OpenAI’s San Francisco office, according to a person familiar with the matter.

In his statement, Leike said that his departure came after a series of disagreements with OpenAI about the company’s “core priorities,” which he doesn’t feel are focused enough on safety measures related to the creation of AI that may be more capable than people.

In a post earlier this week announcing his departure, Sutskever said he’s “confident” OpenAI will develop AGI “that is both safe and beneficial” under its current leadership, including Altman.

© 2024 Bloomberg L.P.
 



US and UK Announce Partnership on AI Safety and Testing

The United States and Britain on Monday announced a new partnership on the science of artificial intelligence safety, amid growing concerns about upcoming next-generation versions of the technology.

Commerce Secretary Gina Raimondo and British Technology Secretary Michelle Donelan signed a memorandum of understanding in Washington to jointly develop advanced AI model testing, following commitments announced at an AI Safety Summit in Bletchley Park in November.

“We all know AI is the defining technology of our generation,” Raimondo said. “This partnership will accelerate both of our institutes’ work across the full spectrum to address the risks of our national security concerns and the concerns of our broader society.”

Britain and the United States are among countries establishing government-led AI safety institutes.

Britain said in October its institute would examine and test new types of AI, while the United States said in November it was launching its own safety institute to evaluate risks from so-called frontier AI models and is now working with 200 companies and entities.

Under the formal partnership, Britain and the United States plan to perform at least one joint testing exercise on a publicly accessible model and are considering exploring personnel exchanges between the institutes. Both are working to develop similar partnerships with other countries to promote AI safety.

“This is the first agreement of its kind anywhere in the world,” Donelan said. “AI is already an extraordinary force for good in our society, and has vast potential to tackle some of the world’s biggest challenges, but only if we are able to grip those risks.”

Generative AI – which can create text, photos and videos in response to open-ended prompts – has spurred excitement as well as fears that it could make some jobs obsolete, upend elections and potentially overpower humans with catastrophic effects.

In a joint interview with Reuters Monday, Raimondo and Donelan said urgent joint action was needed to address AI risks.

“Time is of the essence because the next set of models are about to be released, which will be much, much more capable,” Donelan said. “We have a focus on the areas that we are dividing and conquering and really specializing.”

Raimondo said she would raise AI issues at a meeting of the US-EU Trade and Technology Council in Belgium Thursday.

The Biden administration plans to soon announce additions to its AI team, Raimondo said. “We are pulling in the full resources of the US government.”

Both countries plan to share key information on capabilities and risks associated with AI models and systems and technical research on AI safety and security.

In October, Biden signed an executive order that aims to reduce the risks of AI. In January, the Commerce Department said it was proposing to require US cloud companies to determine whether foreign entities are accessing US data centers to train AI models.

Britain said in February it would spend more than GBP 100 million ($125.5 million or roughly Rs. 1,047 crore) to launch nine new AI research hubs and train regulators about the technology.

Raimondo said she was especially concerned about the threat of AI applied to bioterrorism or a nuclear war simulation.

“Those are the things where the consequences could be catastrophic and so we really have to have zero tolerance for some of these models being used for that capability,” she said.

© Thomson Reuters 2024

