What Is Artificial General Intelligence? Evolution of AGI in AI

Source: ScienceAI

Original Title: Science | What Is Artificial General Intelligence? Evolution of AGI in AI

Translated by Kai Xia

On March 21, 2024, Professor Melanie Mitchell of the Santa Fe Institute published an article in Science titled “Debates on the Nature of Artificial General Intelligence.”

The article explores how the concept of AGI has evolved over the history of AI, and highlights the stark differences between how many AI practitioners think about intelligence and how researchers who study biological intelligence do. What exactly is this AGI that everyone keeps talking about?

Paper link: https://www.science.org/doi/10.1126/science.ado7069

ScienceAI has edited and organized the original paper without altering its meaning:

The term “Artificial General Intelligence” (AGI) has become ubiquitous in current discussions about artificial intelligence.

OpenAI states that its mission is to “ensure that artificial general intelligence benefits all of humanity.”

DeepMind’s vision statement notes that “AGI… has the potential to be one of the greatest transformative forces in history.”

Both the UK government’s National AI Strategy and the US government’s AI documents emphasize AGI.

Microsoft researchers recently claimed that there are “sparks of AGI” in the large language model GPT-4.

Current and former executives at Google have also claimed that “AGI has arrived.”

The question of whether GPT-4 is an “AGI algorithm” is central to Elon Musk’s lawsuit against OpenAI.

Given how ubiquitous discussions of AGI are in business, government, and the media, one could be forgiven for assuming that the meaning of the term is established and agreed upon. The reality is quite the opposite: what AGI means, or whether it means anything coherent at all, is the subject of intense debate within the AI community.

The meaning and possible consequences of AGI are no longer just a matter of academic debate over an obscure term. The world’s largest tech companies and governments are making consequential decisions based on their views of artificial general intelligence.

A closer look at the speculation surrounding AGI, however, reveals that many AI practitioners hold views on the nature of intelligence that differ starkly from the views of those who study human and animal cognition, and this difference matters for understanding the current state of machine intelligence and for predicting its possible future.

The initial goal of the AI field was to create machines with general intelligence comparable to humans.

Early AI pioneers were optimistic: in 1965, Herbert Simon predicted in his book The Shape of Automation for Men and Management that “machines will be capable, within twenty years, of doing any work a man can do,” and in 1970, Life magazine quoted Marvin Minsky: “In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight.”

These optimistic predictions did not come to pass. In the decades that followed, the only successful AI systems were “narrow” rather than general: they could perform only a single task or a limited range of tasks (for example, the speech-recognition software on a mobile phone can transcribe what you dictate, but cannot respond intelligently to what you say).

The term “AGI” was coined in the early 21st century, aiming to revive the lofty ambitions of AI pioneers and refocus on “attempting to study and reproduce intelligence as a whole in a domain-independent manner.”

Until recently, this pursuit remained a rather obscure corner of the AI field; now, leading AI companies have made achieving AGI their explicit goal, and AI “doomers” cite AGI as the existential threat to humanity they fear most.

Many AI practitioners have speculated about the timeline to AGI, with some predicting a “50% chance of AGI by 2028.” Others question the very premise of AGI, calling it vague and ill-defined. One prominent researcher tweeted, “The whole concept is unscientific, and people should be embarrassed to use the term.”

While early AGI proponents believed machines would soon take over all human activities, researchers have learned the painful lesson that creating an AI system that can beat you at chess or answer your search queries is much easier than building a robot that can fold laundry or fix plumbing.

The definition of AGI has accordingly been narrowed to cover only so-called “cognitive tasks.” Demis Hassabis, co-founder of DeepMind, defines AGI as a system “that should be able to perform almost any cognitive task a human can do,” while OpenAI describes it as “highly autonomous systems that outperform humans at most economically valuable work,” where “most” presumably excludes tasks requiring physical intelligence, which robots may not master for some time.

The notion of “intelligence” in AI, cognitive or otherwise, is often framed in terms of a single agent optimizing a reward or goal. One influential paper (Universal Intelligence: A Definition of Machine Intelligence) defines general intelligence as “an agent’s ability to achieve goals in a wide range of environments.”

Related paper link: https://link.springer.com/article/10.1007/s11023-007-9079-x
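
For reference, Legg and Hutter condense this definition into a single measure: an agent’s “universal intelligence” is its expected performance summed over all computable environments, weighted toward simpler ones. The sketch below follows the paper’s notation and should be read as a paraphrase of their formula rather than a quotation:

```latex
% Universal intelligence of an agent \pi (after Legg & Hutter):
% a sum over computable environments \mu, weighted by simplicity.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
% E           : the set of computable reward-bearing environments
% K(\mu)      : Kolmogorov complexity of \mu (simpler environments count more)
% V_{\mu}^{\pi} : expected cumulative reward of agent \pi in environment \mu
```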

Another paper (Reward is Enough) argues that intelligence, and the abilities associated with it, can be understood as serving the maximization of reward. This is in fact how today’s AI systems work: the program AlphaGo is trained to optimize one specific reward function (“win the game”), while GPT-4 is trained to optimize another (“predict the next word in a sequence”).

Related paper link: https://doi.org/10.1016/j.artint.2021.103535
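
To make the “reward function” framing concrete, here is a minimal sketch, in plain NumPy with an invented toy vocabulary (not the authors’ code, and not how any real model is implemented), of the next-word-prediction objective that GPT-style training minimizes:

```python
import numpy as np

# Toy next-token objective: a language model assigns a probability
# distribution over the vocabulary for the next position; training
# minimizes the cross-entropy of the actual next token, which is
# equivalent to maximizing its log-probability ("reward").
vocab = ["the", "cat", "sat", "on", "mat"]   # hypothetical toy vocabulary
token_ids = {w: i for i, w in enumerate(vocab)}

def next_token_loss(logits, target_id):
    """Cross-entropy between the predicted distribution and the true next token."""
    probs = np.exp(logits - logits.max())    # numerically stable softmax
    probs /= probs.sum()
    return -np.log(probs[target_id])

# A hypothetical model's raw scores for the word following "the cat sat on":
logits = np.array([0.1, 0.2, 0.1, 0.1, 2.5])   # the model favors "mat"
loss = next_token_loss(logits, token_ids["mat"])
print(f"loss = {loss:.3f}")   # lower loss = better prediction of the next word
```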

This view of intelligence has led some AI researchers to speculate that once AI systems reach AGI, they will recursively improve their own intelligence by applying their optimization abilities to their own software, quickly becoming “thousands or millions of times smarter than us” and swiftly achieving superintelligence.

“Before we share the planet with superintelligent machines, we must develop a science for understanding them. Otherwise, they’ll take control,” author James Barrat said in discussing his book Our Final Invention: Artificial Intelligence and the End of the Human Era.

This focus on optimization has led some in the AI community to worry that a “misaligned” AGI could deviate wildly from its creators’ goals and pose an existential risk to humanity.

Philosopher Nick Bostrom proposed a now-famous thought experiment in his 2014 book Superintelligence: humans give a superintelligent AI system the goal of optimizing paperclip production. Taking the goal literally, the AI uses its genius to commandeer all of Earth’s resources and turn everything into paperclips. The humans, of course, never intended to destroy the Earth and humanity in order to make more paperclips, but they neglected to say so in the instructions.

AI researcher Yoshua Bengio offered a thought experiment of his own: “We might ask an AI to fix climate change, and it might design a virus that causes mass human deaths, because our instructions were not clear enough about what harm means, and humans are in fact the main obstacle to fixing the climate crisis.”
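
Both thought experiments reduce to the same technical point: an optimizer maximizes exactly the objective it is given, not the intent behind it. Here is a toy sketch of such misspecification; the scenario, numbers, and function names are invented purely for illustration:

```python
# Toy illustration of objective misspecification: an optimizer that
# maximizes paperclips alone will happily consume every resource,
# because nothing in its stated objective says not to.

def paperclips(resources_used):
    return 10 * resources_used               # stated objective: more clips

def what_we_meant(resources_used, total=100):
    # The unstated constraint: leave the rest of the world intact.
    penalty = 1000 if resources_used >= total else 0
    return paperclips(resources_used) - penalty

total_resources = 100
best_literal = max(range(total_resources + 1), key=paperclips)
best_intended = max(range(total_resources + 1), key=what_we_meant)

print(best_literal)    # 100 -- consumes everything
print(best_intended)   # 99  -- stops short of catastrophe
```

The gap between the two optima is the whole problem: the constraint that mattered most was never written into the objective.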

This speculative view of AGI (and “superintelligence”) differs sharply from the views of those who study biological intelligence, especially human cognition. Although cognitive science lacks a strict definition of “general intelligence,” and there is no consensus on the degree to which humans, or any kind of system, can possess it, most cognitive scientists agree that intelligence is not a quantity that can be measured on a single scale or arbitrarily dialed up or down. Rather, it is a complex integration of general and specialized abilities that are, for the most part, adaptations to a particular evolutionary niche.

Many who study biological intelligence also doubt that the so-called “cognitive” aspects of intelligence can be separated from the rest and captured in a disembodied machine. Psychologists have shown that important aspects of human intelligence are grounded in a person’s embodied and emotional experience. There is also evidence that individual intelligence depends deeply on one’s participation in a social and cultural environment. The abilities to understand, coordinate with, and learn from other people may matter far more to a person’s success in achieving goals than any individual “optimization capacity.”

Moreover, unlike the hypothetical paperclip-maximizing AI, human intelligence is not organized around optimizing fixed goals; a person’s goals arise from a complex integration of innate needs and the social and cultural environment that supports their intelligence. And unlike the superintelligent paperclip maximizer, greater intelligence is precisely what enables us to better perceive other people’s intentions and the likely effects of our own actions, and to modify those actions accordingly.

As Katja Grace wrote: “For almost any human goals, the idea of maximizing them across the universe as a subgoal is utterly absurd. So why do we think an AI’s goals would be different?”

The specter of machines improving their own software until their intelligence grows by orders of magnitude also clashes with the biological view of intelligence as a highly complex system that extends beyond an isolated brain. If human-level intelligence requires a complex integration of different cognitive abilities, along with scaffolding from a social and cultural environment, then a system’s “intelligence” is unlikely to have direct, seamless access to its own “software” level, just as we humans cannot simply re-engineer our brains (or our genes) to make ourselves smarter. We can, however, collectively raise our effective intelligence through external technological tools, such as computers, and through cultural institutions, such as schools, libraries, and the internet.

The meaning of AGI, and whether it is a coherent concept at all, remains a matter of debate. Moreover, speculation about what generally intelligent machines could do rests largely on intuition rather than scientific evidence. But how trustworthy is that intuition? The history of AI has repeatedly overturned our intuitions about intelligence.

Many early AI pioneers believed that machines programmed with logic would capture the full range of human intelligence. Other scholars predicted that a machine would need human-level general intelligence before it could beat humans at chess, translate between languages, or carry on a conversation. Every one of these predictions has proved wrong.

At every step of AI’s evolution, human-level intelligence has proven to be far more complex than researchers expected. Will current speculations about machine intelligence also prove to be mistaken? Can we develop a more rigorous and universal science of intelligence to answer these questions?

It remains unclear whether a science of artificial intelligence would look more like the science of human intelligence or more like astrobiology, which speculates about what life on other planets might be like. Making predictions about things we have never seen, and that may not even exist, whether extraterrestrial life or superintelligent machines, requires theories grounded in general principles.

Ultimately, the meaning and consequences of “AGI” will not be resolved through media debates, lawsuits, or our intuitions and speculations, but through long-term scientific examination of these principles.
