According to recent reports by The New York Times, Time, CNBC, and others, Ilya Sutskever, a co-founder and former chief scientist of OpenAI, has launched a new artificial intelligence company called Safe Superintelligence Inc. (SSI). The goal of SSI is to develop a superintelligent AI system that prioritizes safety and security.
SSI Launch and Mission
Safe Superintelligence Inc. (SSI) was officially announced on June 19, 2024, in a post by Ilya Sutskever on X (formerly Twitter). The company’s primary objective is to develop a superintelligence system that surpasses human intelligence while ensuring safety and security. SSI plans to focus on a single product in order to:
- Avoid distractions from management overhead or product cycles
- Shield the company from short-term commercial pressures
- Address the technical challenges of creating a safe superintelligence, where “safe” is meant in the sense of nuclear safety rather than “trust and safety”
Founding Team Members
Ilya Sutskever, the former chief scientist and co-founder of OpenAI, is joined by two other founding members at Safe Superintelligence Inc. (SSI):
- Daniel Gross, who led Apple’s AI and search efforts until 2017.
- Daniel Levy, a former member of the technical staff at OpenAI who previously collaborated with Sutskever on research.
Sutskever will serve as the chief scientist at SSI, with his responsibilities described as “overseeing groundbreaking advancements.” The company has established offices in both Palo Alto, California, and Tel Aviv, Israel.
Sutskever’s OpenAI Departure
Sutskever’s departure from OpenAI in May 2024 followed his involvement in the November 2023 attempt to remove CEO Sam Altman. After Altman was reinstated, Sutskever expressed regret for his role in the ouster and supported the reinstatement. Before leaving, Sutskever co-led OpenAI’s superalignment team, which focused on AI safety. The team’s other co-lead, Jan Leike, resigned shortly afterward, citing concerns that the company had lost its focus on safety in favor of marketable products.
AI Safety Concerns
The launch of SSI comes amid growing concern within the AI community about the safety and ethical implications of advanced AI systems. Sutskever’s new venture places safety at the center of its mission, aiming to address the technical challenges of creating a safe superintelligence. That emphasis is particularly pointed given recent departures from OpenAI, such as Leike’s, over the company’s perceived shift away from safety toward marketable products.
Source: Perplexity