Understanding AGI: Insights from Fei-Fei Li

The MLNLP community is a well-known machine learning and natural language processing community in China and abroad, whose members include NLP graduate students, university faculty, and industry researchers.
The community's vision is to promote exchange and progress among academia, industry, and enthusiasts in natural language processing and machine learning, especially for beginners.
Reprinted from | Z Potentials
Edited by | TechCrunch

Image Source: Stanford University

Are you confused about Artificial General Intelligence (AGI)? It’s what OpenAI is persistently trying to build in a way that ultimately benefits all of humanity. You might want to take them seriously, because they just raised $6.6 billion to get closer to that goal.
But if you are still wondering what AGI is, you are not alone.
At the Credo AI Responsible AI Leadership Summit on Thursday, the world-renowned researcher often referred to as the “Godmother of AI,” Fei-Fei Li, stated that she also does not know what AGI is. At other moments, Fei-Fei Li discussed her role in the birth of modern AI, how society should protect itself from advanced AI models, and why she believes her new unicorn startup, World Labs, will change everything.
But when asked about her views on the “AI singularity,” Li felt as confused as the rest of us.
“I come from the academic side of AI, educated in a more rigorous and evidence-based approach, so I am not very clear on what these terms mean,” Li said in a crowded room in San Francisco, next to a large window overlooking the Golden Gate Bridge. “To be honest, I don’t even know what AGI means. People say you’ll know it when you see it, but I guess I haven’t seen it yet. In fact, I don’t spend much time thinking about these terms because I think there are many more important things to do…”
If anyone knows what AGI is, it might be Fei-Fei Li. In 2006, she created ImageNet, the world’s first large AI training and benchmark dataset, which was crucial in catalyzing our current AI boom. From 2017 to 2018, she served as Chief Scientist of AI/ML at Google Cloud. Today, Fei-Fei Li leads the Stanford Human-Centered AI Institute (HAI), and her startup World Labs is building a “large world model.” (If you ask me, this term is almost as confusing as AGI.)
OpenAI CEO Sam Altman attempted to define AGI in an interview with The New Yorker last year. Altman described AGI as “the equivalent of a median human that you could hire as a coworker.”
Meanwhile, OpenAI’s charter defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.”
Clearly, these definitions are not good enough for a company valued at $157 billion. Therefore, OpenAI created five levels to internally assess its progress toward AGI. The first level is chatbots (like ChatGPT), followed by reasoners (apparently, OpenAI’s o1 is at this level), agents (this is supposedly next), innovators (AI that can help invent things), and finally organization-level AI (AI that can perform the work of an entire organization).
Still confused? So am I, and so is Li. Moreover, this sounds far beyond what an average human coworker can do.
Earlier in the talk, Li mentioned that she has been curious about the concept of intelligence since childhood. That curiosity led her to start researching AI long before it became profitable. Li said that in the early 2000s, she and a few others were quietly laying the groundwork for the field.
“In 2012, my ImageNet combined with AlexNet and GPUs—many call this the birth of modern AI. It was driven by three key factors: big data, neural networks, and modern GPU computing. Once that moment arrived, I believe the entire field of AI and our world became different forever.”
When asked about California’s controversial AI bill SB 1047, Li spoke cautiously to avoid reigniting the controversy that Governor Newsom had just quelled by vetoing the bill last week. (We recently spoke with the author of SB 1047, who would prefer to re-engage in debate with Li.)
“Some of you may know that I have expressed my concerns about this vetoed bill [SB 1047], but now I am reflecting thoughtfully and looking forward to the future,” Li said. “I feel very surprised, or honored, that Governor Newsom invited me to be involved in the next steps after SB 1047.”
The Governor of California recently invited Li and other AI experts to form a task force to help the state develop safeguards for AI deployment. Li stated that she would adopt an evidence-based approach in this role and would strive to advocate for academic research and funding support. However, she also hopes to ensure that California does not penalize technologists.
“We need to truly focus on the potential impacts on humanity and our communities, rather than blaming the technology itself… If a car is misused intentionally or unintentionally and harms someone, it makes no sense to punish the automotive engineer—like Ford or General Motors. Simply punishing automotive engineers will not make cars safer. What we need to do is continue innovating to achieve safer measures while improving the regulatory framework—just like seat belts or speed limits—and AI is the same.”
This is one of the better arguments I’ve heard against SB 1047, which would have penalized tech companies for dangerous AI models.
Although Li is providing AI regulatory advice to California, she is also running her startup World Labs in San Francisco. This is Li’s first startup, and she is one of the few women leading cutting-edge AI labs.
“We are a long way from a truly diverse AI ecosystem,” Li said. “I do believe that diverse human intelligence will lead to diverse AI and will bring us better technology.”
In the coming years, she is excited to bring “spatial intelligence” closer to reality. Li noted that human language, which today’s large language models are built on, may have taken millions of years to develop, while vision and perception may have taken 540 million years. That makes creating large world models a more complex task.
“It’s not just about getting computers to see, but truly getting computers to understand the entire three-dimensional world, which I call spatial intelligence,” Li said. “We don’t look just to name things… we look to do things, to navigate the world, to interact with each other, and closing the gap between seeing and doing requires spatial knowledge. As a technologist, I find this very exciting.”
This article is translated from: TechCrunch, https://techcrunch.com/2024/10/03/even-the-godmother-of-ai-has-no-idea-what-agi-is/

About Us

The MLNLP community is a grassroots academic community jointly built by machine learning and natural language processing scholars from China and abroad. It has grown into a well-known ML and NLP community whose aim is to promote progress among academia, industry, and enthusiasts in machine learning and natural language processing.
The community provides an open platform for practitioners to exchange ideas about further education, employment, and research. Everyone is welcome to follow and join us.
