We are currently in the early phases of Artificial Intelligence (AI); chatbots such as ChatGPT, driven by Large Language Models (LLMs), are prime examples. But AI extends beyond chatbots. Its future includes emerging concepts such as AI agents, AGI (Artificial General Intelligence), and Superintelligence. In this article, I will explore the concept of Superintelligence and discuss how Safe Superintelligence could safeguard humanity from the risks posed by powerful AI systems.
What is Superintelligence?
As its name implies, Superintelligence refers to a type of intelligence that greatly exceeds the capabilities of the most brilliant human minds across all domains. It encompasses knowledge, skills, and creativity that are orders of magnitude greater than those of biological humans.
It’s important to note that Superintelligence is a theoretical concept: it envisions AI systems with cognitive abilities far beyond human capacities. Such systems could usher in new paradigms of scientific discovery, address problems that have perplexed humans for centuries, process information and reason at speeds vastly superior to human ability, and execute many tasks in parallel.
Superintelligence is often considered a step beyond AGI (Artificial General Intelligence). According to cognitive scientist David Chalmers, AGI could progressively evolve into Superintelligence. While AGI systems would match human abilities in reasoning, learning, and understanding, Superintelligence would surpass human intelligence in every dimension.
In May 2023, OpenAI outlined its vision for superintelligence and its future governance. In a blog post, Sam Altman, Greg Brockman, and Ilya Sutskever noted that “it’s conceivable that within the next decade, AI systems could surpass expert skill levels in most fields and perform as much productive work as one of today’s largest corporations.”
Implications and Risks of Superintelligence
Nick Bostrom, a prominent thinker on the subject, highlights the significant risks associated with Superintelligence, particularly if it is not aligned with human values and interests. He argues that the development of Superintelligence poses an existential threat to humanity, with the potential for outcomes that could be catastrophic, possibly even leading to human extinction.
In addition to these existential concerns, Bostrom also explores a range of ethical issues related to the creation and use of superintelligent systems. He questions what will happen to individual rights, who will wield control over such systems, and the broader impact on society and welfare. There is also a considerable risk that once developed, a Superintelligent system could outmaneuver human efforts to regulate or constrain its actions.
Additionally, Superintelligence could trigger an “Intelligence Explosion,” a term introduced by British mathematician I.J. Good in 1965. Good theorized that a self-improving intelligent system could design even more capable systems, which would in turn design still more capable ones, resulting in a rapid and uncontrollable increase in intelligence. This scenario could lead to unintended and potentially harmful consequences for humanity.
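To make Good’s idea more concrete, here is a minimal, purely illustrative sketch in Python. The `recursive_self_improvement` function and all of its numbers are made up for illustration; the point is the compounding dynamic he described, where each cycle’s gain is proportional to the system’s current capability, so growth accelerates rather than staying linear.

```python
# Toy model of I.J. Good's "intelligence explosion" (illustrative only).
# Each cycle, the system uses its current capability to improve itself,
# so the gain per cycle grows with capability. All numbers are made up.

def recursive_self_improvement(capability=1.0, improvement_rate=0.1, cycles=50):
    """Return the capability trajectory over `cycles` rounds of compounding self-improvement."""
    history = [capability]
    for _ in range(cycles):
        # The more capable the system, the larger the improvement it can design.
        capability += improvement_rate * capability
        history.append(capability)
    return history

if __name__ == "__main__":
    trajectory = recursive_self_improvement()
    # At 10% per cycle, capability roughly doubles every seven cycles or so.
    print(f"Start: {trajectory[0]:.1f}, after 50 cycles: {trajectory[-1]:.1f}")
```

The specific figures do not matter; what matters is the shape of the curve. Once improvement feeds back into the ability to improve, the trajectory is exponential, which is why Good and later thinkers worry about how quickly such a process could outpace human oversight.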
How Can Safe Superintelligence Help?
Many AI theorists argue that managing and controlling a superintelligent system will necessitate strict alignment with human values. The system must be designed to interpret and execute actions in a way that is both correct and responsible.
Ilya Sutskever, co-founder of OpenAI and a former co-lead of the Superalignment project at the company, was actively involved in efforts to align powerful AI systems. However, in May 2024, Sutskever, along with Jan Leike, the head of Superalignment at OpenAI, departed from the company.
Leike criticized the shift in focus at OpenAI, claiming that “safety culture and processes have taken a backseat to shiny products.” He has since joined Anthropic, a competing AI research lab. Meanwhile, Sutskever has launched a new company called Safe Superintelligence Inc. (SSI), which is dedicated to developing a safe superintelligent system. SSI asserts that ensuring the safety of superintelligence is “the most important technical problem of our time.”
Under Sutskever’s leadership, SSI aims to concentrate exclusively on achieving safe superintelligence, avoiding involvement in management or product cycles. During his tenure at OpenAI, Sutskever discussed the potential risks and benefits of powerful AI systems in an interview with The Guardian.
Sutskever remarked, “AI is a double-edged sword: it has the potential to address many of our problems, but it also introduces new ones.” He argued that “the future will be favorable for AI no matter what, but it would be ideal if it were also beneficial for humans.”