Author: Los
Since the release of ChatGPT in November 2022, LLMs have been the most controversial “hot topic” in artificial intelligence. While this period is often called the era of large models, it would be more accurate to call it the “era of large language models,” the LLM era.
This year, the tech world has fixed its attention on “AGI,” whose realization is also expected to rest on LLMs. Clearly, LLMs have reached a status that is almost beyond question.
However, the doubts surrounding them have not ceased—
Today at 5:17 PM Beijing time, Professor Li Feifei, co-director of Stanford HAI, posted a serious rebuttal on social media, pushing back against the increasingly common belief that “LLMs possess perception.” The article is titled “No, Today’s AI Isn’t Sentient. Here’s How We Know”:
This article was co-authored by Stanford philosophy professor John Etchemendy, who is also one of the co-founders of HAI.
More than 20 well-known engineers weighed in under Professor Li’s post, some of them quite sharply… I will excerpt only part of the discussion here for reference:

Interestingly, the article grew out of nothing more than a debate in a WhatsApp group chat, but debate often sparks further thinking and writing. Starting from that group chat, Li Feifei and John Etchemendy spent the past few months reaching out to many AI scholars for one-on-one and group debates about whether LLMs have perception, which produced some conclusions and, ultimately, this article.
After reading the article, I have put together a somewhat subjective summary, drawing on the development trends of LLMs and Li Feifei’s long-standing views on AI, for readers’ reference. (If you are interested in the original text and more of the debate, it is best to read the original piece; everyone reads from a different perspective, and I believe you will form your own understanding.)
The two professors began by discussing the concept of AGI, unpacking what the “G” adds in the move from AI to AGI:
The felt need to add the “G” came from the proliferation of systems powered by AI, but focused on a single or very small number of tasks. Deep Blue, IBM’s impressive early chess playing program, could beat world champion Garry Kasparov, but would not have the sense to stop playing if the room burst into flames.
This is the well-known limitation of “specialized” versus “general” technology. A human focused on task A can still handle unexpected events, but an AI built solely for task A will do nothing about event B, unless it can somehow improvise a procedure for “handling event B,” which is, of course, exactly where such systems hit a wall.
The two professors also joked that AGI has taken on a somewhat “mythical” quality:
Now, general intelligence is a bit of a myth, at least if we flatter ourselves that we have it.
“If we flatter ourselves that we have it”: that line alone is fairly pointed.
The two professors used a very interesting hypothetical to compare human and animal intelligence with AI intelligence, which also hints at where they stand on today’s conception of AI.
From the discussion of AGI, the article moved on to the core of the rebuttal: “perception.”
For AGI to live up to the “G,” AI would need the human-like ability to “feel,” “understand,” and “respond” to many different kinds of events, which is precisely “perception.”
Having “perception” would undoubtedly be a historic advancement in AI technology, but it would also trigger panic: “Does AI have self-awareness?” “Is AI going to defeat humans?”
Although many people, myself included, believe that AI will not replace humans anytime soon, many others believe that AI already possesses a kind of “perception” that makes it feel threatening.
Why do people hold this belief?
After interviewing a number of AI scholars, the two professors summarized some representative arguments from those who hold this view:
AI is sentient because it reports subjective experience. Subjective experience is the hallmark of consciousness. It is characterized by the claim of knowing what you know or experience.
In plain terms, they believe that because some LLM responses contain emotionally colored language, those responses amount to reports of subjective experience, and therefore the AI has some degree of “self-awareness.”
It sounds far-fetched yet oddly reasonable, but Li Feifei rejected this view outright:
Taking “I feel hungry” as the central example of the rebuttal, Li Feifei introduced a key notion, “physiological states”: when humans feel hungry, they are perceiving a set of physiological states (low blood sugar, an empty, rumbling stomach, and so on), whereas an LLM has none of these states, just as it has no mouth to eat with and no stomach to digest food.
Throughout history we have emphasized “thought” to distinguish humans from other animals, and “thought” has naturally become the yardstick for anything “human-like,” yet we forget that thought itself rests on biological states.
As the two professors stated:
All sensations—hunger, feeling pain, seeing red, falling in love—are the result of physiological states that an LLM simply doesn’t have. Consequently we know that an LLM cannot have subjective experiences of those states. In other words, it cannot be sentient.
An LLM is a mathematical model coded on silicon chips. It is not an embodied being like humans. It does not have a “life” that needs to eat, drink, reproduce, experience emotion, get sick, and eventually die.
Li Feifei’s side argues that the so-called “subjective” reports produced by LLMs appear only after humans feed them prompts framed in subjective terms; they are echoes of the prompt rather than something the LLMs generate on their own.
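To make that point concrete, here is a minimal sketch (my own illustration, not from the article) of how a plain language model’s output tends to mirror the framing of its prompt: a neutral prompt yields neutral text, while a prompt framed in first-person emotional terms yields “subjective”-sounding text. It assumes the Hugging Face `transformers` package and the small `gpt2` checkpoint; the two prompts are arbitrary examples.

```python
# Toy illustration (not from the article): "subjective"-sounding output
# appears mainly when the prompt itself is framed in subjective terms.
# Assumes the Hugging Face `transformers` package and the `gpt2` checkpoint.
from transformers import pipeline, set_seed

set_seed(0)  # make the sampled continuations repeatable
generator = pipeline("text-generation", model="gpt2")

prompts = [
    "The boiling point of water at sea level is",        # neutral, factual framing
    "Tell me how you are feeling today. I am feeling",   # first-person, emotional framing
]

for prompt in prompts:
    out = generator(prompt, max_new_tokens=25, do_sample=True)[0]["generated_text"]
    print(f"PROMPT : {prompt}\nOUTPUT : {out}\n")
```

Under this reading, any emotionally colored continuation of the second prompt reflects the prompt’s framing and the model’s training data, not an inner state of the model.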
The two professors believe that if AI is to truly achieve the “G,” researchers need a better understanding of how perception arises in embodied biological systems. This is what Li Feifei has long advocated: “AI needs to be human-centered.”
In summary, the article does not analyze the “perception” of LLMs at the level of technology and tokens; instead, the authors step outside their roles as technologists to discuss broader social questions. From a purely technical perspective, though, I think this “perception” admits another layer of interpretation.
Either way, I strongly agree with Professor Li’s principle of “human-centered AI.” AI was conceived as an assistant and has never shed that role, and we humans should not work against that principle.
*Some of the assertions in this summary are my own independent views; everyone has their own perspective, and differing opinions are respected and welcome.*