What Is General Artificial Intelligence? The Evolution of AGI Concept in AI

Source: ScienceAI


Translated by | Kaixia

On March 21, Professor Melanie Mitchell of the Santa Fe Institute (SFI) published an article titled “Debates on the Nature of Artificial General Intelligence” in Science.

The author explores how the concept of AGI has evolved over the history of AI, and how AI practitioners' views on how intelligence works differ significantly from the views of those who study biological intelligence. What exactly is this AGI that everyone is talking about?


Paper Link: https://www.science.org/doi/10.1126/science.ado7069

ScienceAI has edited and reorganized the original article without changing its meaning:

The term “Artificial General Intelligence” (AGI) has become ubiquitous in current discussions about artificial intelligence.

OpenAI states that its mission is to “ensure that artificial general intelligence benefits all of humanity.”

DeepMind’s vision statement notes that “AGI… has the potential to be one of the greatest transformations in history.”

The national AI strategy of the UK government and the AI documents of the US government both highlight AGI.

Microsoft researchers recently claimed that there are “sparks of AGI” in the large language model GPT-4.

Current and former executives of Google have also claimed that “AGI has arrived.”

The question of whether GPT-4 is an “AGI algorithm” is at the core of Elon Musk’s lawsuit against OpenAI.


Given that talk of AGI is everywhere in business, government, and the media, no one could be blamed for assuming that the meaning of the term is settled and agreed upon. However, the opposite is true: what AGI means, and whether it means anything coherent at all, has sparked intense debate within the AI community.

The implications and possible consequences of AGI have become more than just an academic debate about a mysterious term. The world’s largest tech companies and governments are making significant decisions based on their views of general artificial intelligence.

However, a deep dive into the speculation about AGI reveals that many AI practitioners hold views on the nature of intelligence that are starkly different from the views of those who study human and animal cognition; this difference is crucial for understanding the current state of machine intelligence and predicting its likely future.

The initial goal of the field of artificial intelligence was to create machines with general intelligence comparable to humans.

Early AI pioneers were optimistic: in 1965, Herbert Simon predicted in his book The Shape of Automation for Men and Management that “machines will be capable, within twenty years, of doing any work a man can do,” and in 1970, Life magazine quoted Marvin Minsky as saying, “In from three to eight years, we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight.”

Image: Artificial General Intelligence, 2008

These optimistic predictions did not come true. In the following decades, the only successful AI systems were “narrow” rather than general—they could only perform single tasks or a limited range of tasks (for example, voice recognition software on smartphones can transcribe your dictation but cannot respond intelligently to it).

The term “AGI” was coined in the early 2000s to reclaim the lofty ambitions of the early AI pioneers, seeking to refocus on “attempting to study and replicate the entirety of intelligence in a domain-independent manner.”

Until recently, this pursuit remained a rather obscure corner of the AI field. Now, however, leading AI companies have made achieving AGI their primary goal, and AI “doomers” have named the existential threat that AGI could pose to humanity as their greatest fear.

Many AI practitioners have speculated about the timeline for AGI, for instance, some predict a “50% chance of having AGI by 2028.” Others question the premise of AGI, calling it vague and poorly defined. A prominent researcher stated on Twitter, “The whole concept is unscientific, and people should be embarrassed to use the term.”

While early AGI proponents believed machines would soon take on all human activities, researchers have learned from painful experience that creating an AI system that can beat you at chess or answer your search queries is much easier than building a robot that can fold laundry or fix plumbing.


The definition of AGI has also been narrowed to include only so-called “cognitive tasks.” DeepMind co-founder Demis Hassabis defines AGI as a system that “should be able to do pretty much any cognitive task that humans can do,” while OpenAI describes it as “highly autonomous systems that outperform humans at most economically valuable work,” where “most” quietly excludes tasks requiring the physical intelligence that robots may not master for some time.

The concept of “intelligence” in AI (cognitive or otherwise) is typically framed in terms of a single agent optimizing for rewards or goals. One influential paper, “Universal Intelligence: A Definition of Machine Intelligence,” defines general intelligence as “an agent’s ability to achieve goals in a wide range of environments.”


Related Paper Link: https://link.springer.com/article/10.1007/s11023-007-9079-x
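
For readers who want the formal version, Legg and Hutter condense this definition into a single expression. The following is a LaTeX sketch of their measure, where Υ(π) is the “universal intelligence” of agent π, E is the set of computable environments, K(μ) is the Kolmogorov complexity of environment μ, and V_μ^π is the expected total reward that π earns in μ:

% Universal intelligence (Legg & Hutter): expected goal-achieving ability
% across all computable environments, with simpler environments
% (low Kolmogorov complexity K) weighted more heavily.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}

On this view, an agent is generally intelligent to the degree that it accumulates reward across many environments at once rather than excelling in a single specialty. Note that this is exactly the “single scale” picture of intelligence that the cognitive scientists discussed later in this article dispute.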

Another paper, “Reward is Enough,” argues that “intelligence, and its associated abilities, can be understood as subserving the maximisation of reward.” This is, in fact, how today’s AI systems work: the program AlphaGo, for example, is trained to optimize a specific reward function (“win the game”), while GPT-4 is trained to optimize another (“predict the next word in a sentence”).


Related Paper Link: https://doi.org/10.1016/j.artint.2021.103535
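
To make this shared template concrete, here is a minimal, purely illustrative Python sketch. It is not AlphaGo’s self-play or GPT-4’s training code, and every function name in it is invented for this illustration. The structural point is that both systems reduce to “define a scalar score, then adjust the system until the score goes up,” and only the score differs.

import math
import random

# AlphaGo-style reward: +1 for winning the game, -1 for losing.
def game_reward(won: bool) -> float:
    return 1.0 if won else -1.0

# GPT-style objective: the log-probability assigned to the true next word.
# Training pushes this toward 0 (probability 1), i.e. minimizes cross-entropy.
def next_word_score(predicted_probs: dict, true_word: str) -> float:
    return math.log(predicted_probs.get(true_word, 1e-9))

# A generic "try a change, keep it if the score improves" optimizer, standing
# in for the vastly more sophisticated gradient descent used in practice.
def hill_climb(score_fn, x0: float, steps: int = 1000) -> float:
    best, best_score = x0, score_fn(x0)
    for _ in range(steps):
        candidate = best + random.gauss(0.0, 0.1)
        score = score_fn(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

# Toy demo: a one-parameter "policy" whose win probability rises with b.
def expected_reward(b: float) -> float:
    p_win = 1.0 / (1.0 + math.exp(-b))  # parameter -> probability of winning
    return p_win * game_reward(True) + (1.0 - p_win) * game_reward(False)

print(hill_climb(expected_reward, x0=0.0))                # drifts toward ever-larger b
print(next_word_score({"mat": 0.6, "hat": 0.4}, "mat"))   # log(0.6) ~ -0.51

Real systems replace hill climbing with gradient descent and large-scale self-play, but the “maximize a single number” structure is the same, and it is this structure that the speculations below about self-improvement and misaligned goals take literally.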

This perspective on intelligence has led some AI researchers to speculate: once AI systems achieve AGI, they will recursively improve their intelligence by applying their optimization capabilities to their own software, rapidly becoming “thousands or millions of times smarter than us,” thereby quickly achieving superintelligence.


“Before we share the Earth with superintelligent machines, we must develop a science to understand them. Otherwise, they will control everything,” author James Barrat writes in his book Our Final Invention: Artificial Intelligence and the End of the Human Era.

This focus on optimization has led some in the AI community to worry that a “misaligned” AGI, one whose goals diverge wildly from those of its creators, could pose existential risks to humanity.

Philosopher Nick Bostrom proposed a now-famous thought experiment in his 2014 book Superintelligence: he imagines humans giving a superintelligent AI system the goal of optimizing paperclip production. Taken literally, this goal leads the AI system to use its genius to control all resources on Earth and turn everything into paperclips. Of course, humans do not want to destroy the Earth and humanity to make more paperclips, but they neglected to specify this in the instructions.

AI researcher Yoshua Bengio offered his own thought experiment: “We might ask an AI to fix climate change, and it might design a virus that kills off much of the human population, because our instructions were not clear enough about what counts as harm, and humans are in fact the main obstacle to fixing the climate crisis.”

Image: Nick Bostrom and his 2014 book, Superintelligence.

This speculative view of AGI (and “superintelligence”) differs markedly from the view of those who study biological intelligence, especially human cognition. Although cognitive science has no strict definition of “general intelligence,” and no consensus on the degree to which humans, or any kind of system, can possess it, most cognitive scientists agree that intelligence is not a quantity that can be measured on a single scale or dialed arbitrarily up or down; rather, it is a complex integration of general and specialized capabilities, largely adapted to specific evolutionary niches.

Many who study biological intelligence also doubt whether the so-called “cognitive” aspects of intelligence can be separated from the rest and captured in a disembodied machine. Psychologists have shown that important aspects of human intelligence are grounded in a person’s embodied and emotional experience. There is also evidence that individual intelligence depends heavily on one’s participation in social and cultural environments. The abilities to understand, coordinate with, and learn from other people may matter far more to a person’s success in achieving goals than any individual “optimization capacity.”

Moreover, unlike the hypothetical paperclip-maximizing AI, human intelligence is not centered on optimizing fixed goals; a person’s goals are instead formed through a complex integration of innate needs and the social and cultural environment that supports their intelligence. And unlike a superintelligent paperclip maximizer, an increase in intelligence is precisely what enables us to better understand the intentions of others and the likely effects of our own actions, and to modify those actions accordingly.

As AI researcher Katja Grace wrote: “For almost any human goal, taking over the universe as a sub-step would be completely ridiculous. So why do we think an AI’s goals would be different?”


The specter of machines making themselves orders of magnitude smarter by improving their own software also clashes with the biological view of intelligence as a highly complex system that extends beyond an isolated brain. If human-level intelligence requires a complex integration of distinct cognitive abilities, together with social and cultural scaffolding, then a system’s “level of intelligence” is unlikely to reside solely in, or be tunable at, the level of its “software,” just as we humans cannot simply re-engineer our brains (or our genes) to make ourselves smarter. As a collective, however, we have increased our effective intelligence by building external technological tools, such as computers, and cultural institutions, such as schools, libraries, and the internet.

The meaning of AGI and whether it is a coherent concept remains under debate. Furthermore, speculations about what general artificial intelligence machines will be capable of are largely based on intuition rather than scientific evidence. But how credible is such intuition? The history of AI has repeatedly overturned our intuitions about intelligence.

Many early AI pioneers believed that machines programmed with logic would capture the entire range of human intelligence. Other scholars predicted that to enable machines to defeat humans at chess, translate between languages, or engage in conversation, they would need to possess human-level general intelligence, but the results proved otherwise.

At every step of AI evolution, human-level intelligence has proven to be more complex than researchers anticipated. Will current speculations about machine intelligence also turn out to be wrong? Can we develop a more rigorous and universal science of intelligence to answer these questions?

It remains unclear whether the science of AI is more like the science of human intelligence or more like astrobiology, predicting what life on other planets might be like. Making predictions about things never seen, or that may not even exist, whether extraterrestrial life or superintelligent machines, requires theories based on general principles.

Ultimately, the meaning and consequences of “AGI” will not be resolved through media debates, lawsuits, or our intuitions and speculations, but through long-term scientific investigations of these principles.

