What is Artificial General Intelligence (AGI)? Explained


As AI research progresses, the idea of Artificial General Intelligence (AGI) is shifting from theoretical discussion to potential reality. We’re currently in the early stages of the AI era, with numerous generative AI applications already in use. But what comes next? Could AGI be the future? Read on to learn what AGI is and how it might affect humanity.

What is Artificial General Intelligence or AGI?

There is no universal agreement among researchers and AI labs on a precise definition of AGI (Artificial General Intelligence). Generally, AGI is understood as an AI system capable of matching or surpassing human abilities, particularly in cognitive tasks.

Different AI labs have their own interpretations of AGI. For example, in February 2023, OpenAI described AGI as “AI systems that are generally smarter than humans.” The company aims to develop AGI that benefits all of humanity, while also recognizing the “serious” risks associated with such technology, including potential “misuse, severe accidents, and societal disruption”.

Shane Legg, a co-founder of DeepMind (now Google DeepMind) and its Chief AGI Scientist, coined the term AGI together with researcher Ben Goertzel. Legg describes AGI as encompassing a wide array of capabilities. According to DeepMind, an AGI system “should not only perform a variety of tasks but also be capable of learning how to execute those tasks, evaluating its performance, and seeking help when necessary.”

The Google DeepMind team has outlined five levels of AGI: emerging, competent, expert, virtuoso, and superhuman. According to its researchers, today’s frontier AI models qualify only as emerging AGI, the first stage of this progression.

Characteristics of AGI

Just as there is no broad consensus on the definition of AGI, its characteristics are also not well-defined. However, AI researchers agree that a human-level AGI should be capable of reasoning like humans and making decisions even in the face of uncertainty. It should possess extensive knowledge, including common sense understanding.

Additionally, an AGI system should be able to plan, acquire new skills, solve open-ended problems, and communicate naturally. Cognitive scientists also suggest that AGI should have traits like imagination to generate novel ideas and concepts. Furthermore, AGI characteristics might include physical abilities such as seeing, hearing, moving, and acting.

To assess whether AI models have achieved AGI, several tests are used, including the well-known Turing Test. Named after computer scientist Alan Turing, this test evaluates whether an AI system can mimic human conversation convincingly enough that a person cannot tell whether they are conversing with a machine or a human.
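To make the setup concrete, here is a minimal sketch of Turing’s “imitation game” loop in Python. Everything in it is illustrative: the canned human_reply and machine_reply responders and the coin-flip judge are hypothetical stand-ins for a real interrogation, where a human judge would chat with both parties over a text-only channel.

```python
import random

# Hypothetical stand-ins for the two hidden parties. In a real imitation
# game these would be a person and a chatbot behind the same text interface.
def human_reply(prompt: str) -> str:
    return "Hard to say; it really depends on the context."

def machine_reply(prompt: str) -> str:
    return "Hard to say; it really depends on the context."

def imitation_game(questions: list[str], rounds: int = 1000) -> float:
    """Return how often the judge correctly identifies the machine."""
    correct = 0
    for _ in range(rounds):
        is_machine = random.random() < 0.5           # hidden coin flip
        responder = machine_reply if is_machine else human_reply
        _answer = responder(random.choice(questions))
        # Stand-in judge: with identical replies it can only guess, which is
        # exactly the "cannot distinguish" outcome Turing described.
        guess_machine = random.random() < 0.5
        correct += guess_machine == is_machine
    return correct / rounds

if __name__ == "__main__":
    questions = ["What did you have for breakfast?", "Tell me a joke."]
    accuracy = imitation_game(questions)
    print(f"Judge accuracy: {accuracy:.0%} (near 50% means indistinguishable)")
```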

While many believe that current AI chatbots have passed the Turing Test, the test also requires that a machine demonstrate intelligent behavior genuinely comparable to a human’s, not merely convincing conversation. Another test, proposed by computer scientist Nils J. Nilsson, is the Employment Test, which posits that a machine should be capable of performing an economically important job as well as a human.

Steve Wozniak, co-founder of Apple, has proposed the Coffee Test as a way to evaluate an intelligent AI system. According to Wozniak, a sufficiently advanced AI should be able to enter an unfamiliar home, locate the coffee machine, add water and coffee, and complete the brewing process independently, without any human assistance.

Levels of AGI

OpenAI believes that achieving AGI will be a gradual process with multiple stages of progress, rather than a single leap. In July 2024, Bloomberg reported that OpenAI had defined five levels of advancement toward realizing AGI.

The first level is Conversational AI, which includes current chatbots like ChatGPT, Claude, and Gemini. The second level is Reasoning AI, where models can reason similarly to humans, a milestone we have not yet reached. The third level is Autonomous AI, where AI agents can perform actions independently on the user’s behalf.

The fourth level is Innovating AI, where AI systems have the capability to innovate and enhance themselves. The fifth and final level is Organizational AI, where an AI system can handle the functions and tasks of an entire organization autonomously. Such a system could experience failures, learn from them, improve, and coordinate multiple agents performing tasks simultaneously.
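Since the taxonomy is just an ordered scale, it fits naturally into a few lines of code. Below is an illustrative Python enum of the five reported levels; the names and ordering come from the description above, and the code is merely a convenient summary, not anything OpenAI has published.

```python
from enum import IntEnum

class AGILevel(IntEnum):
    """OpenAI's five reported stages toward AGI, as summarized above."""
    CONVERSATIONAL = 1   # today's chatbots: ChatGPT, Claude, Gemini
    REASONING = 2        # human-level reasoning (not yet reached)
    AUTONOMOUS = 3       # agents acting independently on a user's behalf
    INNOVATING = 4       # systems that innovate and improve themselves
    ORGANIZATIONAL = 5   # AI running an entire organization autonomously

current = AGILevel.CONVERSATIONAL
print(f"Current frontier on this scale: level {current.value} ({current.name})")
```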

AGI Progress and Timeline: How Close Are We to Achieving It?

Sam Altman, CEO of OpenAI, believes that we could reach the fifth level, Organizational AI, within the next decade. Predictions for achieving AGI vary widely among experts. Ben Goertzel suggests that AGI could be realized in the coming decades, possibly as early as the 2030s.

Geoffrey Hinton, often referred to as the “godfather of AI,” was initially uncertain about the timeline for AGI. He now believes, however, that a general-purpose AI might be just 20 years away.

François Chollet, an AI researcher at Google and the creator of the Keras deep-learning library, believes that AGI cannot be achieved simply by scaling existing technologies like large language models (LLMs). He has introduced a benchmark called ARC-AGI and launched a public competition, the ARC Prize, to test current AI models against it. Chollet argues that progress toward AGI has stalled and that new approaches are necessary to move forward.
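For context, ARC-AGI tasks are small grid puzzles: a solver is shown a few input/output grid pairs, must infer the transformation they share, and is scored by exact match on held-out test grids. The sketch below mirrors that train/test structure with a toy task invented for illustration (its rule simply recolors 1s to 2s); real ARC-AGI tasks use the same JSON shape but deliberately unfamiliar rules.

```python
# Toy ARC-style task, invented for illustration. Grids are lists of rows of
# color codes; real ARC-AGI tasks ship as JSON with this same train/test shape.
Grid = list[list[int]]

task = {
    "train": [
        {"input": [[1, 0], [0, 1]], "output": [[2, 0], [0, 2]]},
        {"input": [[0, 1], [1, 0]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [{"input": [[1, 1], [0, 0]], "output": [[2, 2], [0, 0]]}],
}

def solve(train: list[dict], test_input: Grid) -> Grid:
    # A hypothetical "solver" that happens to hard-code this task's rule.
    # ARC-AGI's point is that each task demands inferring a novel rule from
    # a few examples -- the kind of skill Chollet argues scaling LLMs misses.
    return [[2 if cell == 1 else cell for cell in row] for row in test_input]

# Exact-match scoring over the held-out test pairs.
score = sum(
    solve(task["train"], pair["input"]) == pair["output"]
    for pair in task["test"]
) / len(task["test"])
print(f"Exact-match accuracy: {score:.0%}")
```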

Yann LeCun, Chief AI Scientist at Meta, also contends that LLMs have inherent limitations and are inadequate for achieving AGI due to their lack of true intelligence and reasoning capabilities.

Existential Risk From AGI

As AI development accelerates globally, many experts caution that achieving AGI could pose significant risks to humanity. OpenAI itself acknowledges the serious dangers associated with the technology. Geoffrey Hinton, who left Google in 2023 so he could speak openly about AI risks, has told CBS News that it is “not inconceivable” for AI to threaten humanity, emphasizing the need for robust controls over increasingly intelligent AI systems.

An AGI system capable of matching human abilities might lead to widespread unemployment across various industries, exacerbating economic challenges worldwide. OpenAI has already published a paper examining which jobs are most exposed to automation by its GPT models. Additionally, such a powerful system carries risks of misuse or unintended consequences if it is not carefully aligned with human values.

Elon Musk has also voiced concerns about the dangers of AGI, stressing that its development should prioritize human interests. In March 2023, Musk, along with other leading figures in the industry, signed an open letter calling for a pause on giant AI experiments.

Ilya Sutskever, OpenAI co-founder and former chief scientist, left the company to launch a new startup called Safe Superintelligence. He remarks, “AI is a double-edged sword: it has the potential to solve many of our problems, but it also creates new ones. The future will likely be promising for AI, but it would be preferable if it were also beneficial for humanity.”

Ilya Sutskever is now focused on aligning powerful AI systems with human values to avoid catastrophic outcomes for humanity. Timnit Gebru, a former AI researcher at Google, was ousted from the company after co-authoring a paper that highlighted the risks associated with large language models (LLMs). She argues that instead of asking what AGI is, we should question “why we should build it.”

AGI could profoundly reshape societal structures, potentially causing widespread job loss, deepening inequality, and even leading to conflict and scarcity. This raises a crucial question: should we pursue AGI at all? There are numerous ethical concerns that need to be addressed before advancing AGI development. What are your thoughts on AGI? Share your views in the comments below.

