Last month, Ilya Sutskever, OpenAI co-founder and chief scientist, announced his departure from the company after nearly a decade, saying he would be focusing on a project that was "personally meaningful" to him. Now, a month later, Sutskever has unveiled his new venture: a company named Safe Superintelligence Inc. (SSI).
Sutskever founded SSI together with Daniel Gross, a seasoned investor and entrepreneur, and Daniel Levy, a former OpenAI researcher. According to Sutskever, SSI's sole objective is to build a safe superintelligence, which the company calls the "most important technical problem of our time."
SSI's entire focus is on achieving safe superintelligence, with its mission, product roadmap, and business model all aligned toward that single goal. According to SSI's website, "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."
Headquartered in Palo Alto and Tel Aviv, Safe Superintelligence Inc. is actively seeking top technical talent globally who are dedicated to advancing safe superintelligence. However, SSI has chosen not to disclose details about its financial backers and investors at this time.
At OpenAI, Sutskever co-led the Superalignment team. Reports indicate that he also played a pivotal role in the attempt to remove Sam Altman as CEO in November 2023; the effort failed, and Altman returned to his position. In the aftermath, several researchers and board members left OpenAI, with some of those departing citing concerns that the organization was prioritizing profit over safety.