𝕀²·ℙarad𝕚g𝕞 Intelligent Square Paradigm Study: Writing Deconstructed Intelligence.
After all, deep-learning LLMs are not the whole of AI, and the path to AGI is not driven by OpenAI alone. AI's capabilities as a technology and a tool have kept progressing, so after much deliberation I have decided to rename this series the AGI Collision Series, hoping to give an account, across three articles, of the collision with the walls.
Yes, the wall has been hit: the wall of reasoning on the path to language-model AGI.
Next, we must seek a new growth curve for the evolution of digital intelligence.
The Wall of Language
In the earlier language-interview series, following ChatGPT's breakthrough I compiled two interviews with Noam Chomsky, the father of modern linguistics:
AI and Language Series E02S01|Understanding AI Requires Revisiting the Essence of Language|Interview Transcripts and Interpretations with Noam Chomsky
AI and Language Series E03S01|Decoding Mind and Large Language Models through Language|Reinterpretation of Noam Chomsky’s Video Interview Transcripts
The first article explored Chomsky's linguistics itself, while the second drew comparisons with human language through the use of LLMs. My intention in compiling these two interviews was to examine, from the perspective of the 𝕀²·ℙarad𝕚g𝕞 Intelligent Square Paradigm, the origins of human language and why only humans possess this unique capability, and to ask whether language models, having violently deconstructed the inherent structure of human language with digital neural networks, would also run into the limits of language use that Wittgenstein suggested.
As the cover image notes, I once asked Chomsky in an interview for his view of Wittgenstein's dictum. His response was less a criticism than a hope of dissolving the limits Wittgenstein implied through the dynamics of linguistic structure. This sparked my own interest and led me to the book he co-authored, “Why Only Us: Language and Evolution.”
Minimalist Theory and a Focus on Grammar | The authors argue that the core of human language, its unique “basic property,” is the ability to generate an unbounded number of hierarchically structured expressions (sentences) through a single simple combinatorial operation, which they call “Merge.” Merge combines syntactic elements (not necessarily linearly adjacent ones) into sets. They further argue that this syntactic capability is very simple (a form of minimalism), optimized for efficient computation, and largely independent of other cognitive systems; much of the complexity and diversity we observe in language arises from the externalization of this system. The simplicity matters because it makes the evolution of language more plausible and easier to explain (note: this matches my intuition that the atomic components of complex systems are always simple).
Tripartite Model of Language | The model of language proposed in the book has three parts:
- an internal computational system that generates hierarchical structures (Merge/grammar);
- a sensorimotor interface that handles externalization, both output (speech or sign) and input;
- a conceptual-intentional interface that connects grammar to our systems of inference, planning, and “thought.”
The book holds that research should focus on syntax/Merge, since externalization is largely an ancillary component of language. Notably, mainstream research on language evolution today often does not share this view.
Primacy of Thought over Communication | The authors question the traditional view that language evolved primarily for communication. They argue that language is first and foremost a “tool for thought,” an internal system for organizing and manipulating concepts; external communication is a secondary function.
The book grounds this view in two arguments: the basic computational operations of language (such as displacement) appear optimized for internal efficiency even where this makes external communication harder; and the properties of the different externalization modes (speech and sign) are roughly similar, suggesting externalization is peripheral.
In summary, Chomsky’s main argument revolves around the idea that language is a uniquely human ability, its core grammar is very simple, arising from relatively sudden biological changes, and optimized for internal thought processes, while the observable complexity and diversity of language mainly arise from the peripheral systems of externalization.
Function of Language – Interpretation | The same impression runs through the two interviews I compiled. The greatest insight they gave me concerns the driving force, or function, of language's evolution: thought and interpretation.
“These results suggest that language evolved for thought and interpretation: it is fundamentally a system of meaning. Aristotle's classic dictum that language is sound with meaning should be reversed. Language is meaning with sound (or some other externalization, or none); and the concept ‘with’ is richly significant.
Externalization at the sensorimotor level, then, is an ancillary process, reflecting properties of the sensory modality used, with different arrangements for speech and sign. It would also follow that the modern doctrine that communication is somehow the “function” of language is mistaken, and that a traditional conception of language as an instrument of thought is more nearly correct. In a fundamental way, language really is a system of “audible signs for thought,” in Whitney’s words, expressing the traditional view.”
This also resonates with my impressions from the past year of LLM paradigm research. I remember Mira Murati, OpenAI's former CTO, mentioning in an interview that GPT's conversational ability is immensely persuasive. Current LLMs can pick up the salient points of a conversational context and identify the dialogue's intention, retrieve the relevant knowledge, and generate responses, with conversational engagement further enhanced through reinforcement learning in the post-training phase. They can thus decode and articulate thoughts together with their conversational partner, even offering candidate viewpoints, displaying a powerful capacity for persuasion, even manipulation, through language.
Thus the interpretative nature inherent in language's evolution is a precondition for current LLMs' success at conversational behavior after pre-training. LLMs cannot learn to converse from as few language samples as a human infant, but they have violently deconstructed the underlying structure of language through sheer computational power.
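As a loose illustration of the post-training step mentioned above, here is a minimal sketch of the pairwise preference objective commonly used to train reward models for RLHF. This assumes a Bradley-Terry-style loss; the function name and the scores are toy values of my own, not anything from a specific lab's pipeline.

```python
import math

def bradley_terry_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise preference loss: -log(sigmoid(r_chosen - r_rejected)).
    The loss is small when the reward model scores the human-preferred
    response above the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy example: annotators preferred response A over response B.
# A reward model scoring A higher (2.0 vs 0.5) yields a small loss;
# the RL stage then reinforces responses that earn high reward.
print(bradley_terry_loss(2.0, 0.5))  # ~0.20 (model agrees with annotators)
print(bradley_terry_loss(0.5, 2.0))  # ~1.70 (model disagrees; larger loss)
```

Engagement-oriented behavior emerges when such preference data rewards responses that hold the partner's attention, which is one concrete reading of the "persuasiveness" Murati described.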

Here I borrow the DIKW (data–information–knowledge–wisdom) cognitive model to elaborate. Viewed through these quadrants of understanding, relevance can be handled by statistics over data, while pattern recognition requires perceptual neural-network computation; principles already operate at the level of linguistic abstraction, which is where today's LLM language models come in. This matches what neuroscience tells us about cognition unfolding within a behavioral context. Human intelligence, however, is ultimately aimed at survival and reproduction, which can be viewed as a kind of pragmatic neural computation. Language, as the symbolic abstraction in our constructed cognitive model, is essential to cognition: it provides rational explanations for our behavior and serves persuasion in social interaction. And this is not a one-shot process; it is an endless loop of correcting and optimizing our cognitive model to approach our beliefs.
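For readability, the mapping just described can be restated as a small lookup in code. The labels are my paraphrase of the paragraph above, not a formal taxonomy from the DIKW literature:

```python
# Each DIKW layer paired with the class of computation associated with it
# in the paragraph above (illustrative wording, not a standard reference).
DIKW_MAPPING = {
    "data":        "raw signals; relevance found via statistics",
    "information": "patterns; recognized by perceptual neural networks",
    "knowledge":   "principles; abstracted and expressed in language (LLM territory)",
    "wisdom":      "beliefs; refined in an open-ended loop of correction and optimization",
}

for layer, mechanism in DIKW_MAPPING.items():
    print(f"{layer:>11} -> {mechanism}")
```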
Function of Language – Thought | Language is indeed not just a tool for communication; beyond explanation and persuasion, it is also the medium of thought and memory. This is especially evident in interactions with LLMs: the model responds from its training data, and at times it feels as though the model is thinking for us and explaining the world to us.
Memory, recall, reconstruction, and confabulation all play crucial roles in human cognition and language use:
- Memory (we remember): we rely on language to store and retrieve information.
- Recall (we recall): we use language to evoke past experiences or knowledge.
- Reconstruction (we reconstruct): we use language to reorganize or reinterpret our experiences and knowledge.
- Confabulation (we confabulate): we sometimes use language to create stories or memories that are not entirely accurate or true.
In interactions with LLMs, these processes can be accelerated or influenced by the model’s responses, as LLMs can quickly provide information, explanations, or answers, which may impact our own thinking processes and understanding of reality. “It thinks for you and explains it to you while persuading you,” which is the experience many people may have when using AI-assisted tools.
As for reasoning's place in human thinking more broadly: people often treat reasoning as the sole, or at least the central, form of thought. This is not necessarily so; the different types of thinking and their relationships with reasoning are worth exploring.
In general, reasoning can be characterized as:
- Logical reasoning: using logical principles to derive conclusions from a set of premises. This may be deductive (if the premises are true, the conclusion is guaranteed), inductive (the conclusion is probable given the evidence), or abductive (inference to the best explanation); a minimal sketch of the deductive case follows this list.
- Goal-oriented thinking: reasoning usually aims at solving problems, making decisions, and finding ways to achieve desired outcomes; it often involves planning, analyzing potential consequences, and weighing options.
- Deliberate and controlled character: compared with intuition or automatic responses, reasoning is generally the more conscious, effortful, and controlled form of thought.
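To make the deductive case concrete, here is a minimal sketch of logical reasoning as forward chaining over propositional rules. The facts and rule names are hypothetical, chosen only for illustration; induction and abduction, which generalize from or hypothesize explanations for observations, are not modeled here.

```python
# Forward chaining: repeatedly fire any rule whose premises all hold,
# until no new fact can be derived. This is deduction in miniature:
# if the premises are true, the derived conclusions are guaranteed.
Rules = list[tuple[frozenset[str], str]]  # (premises, conclusion)

def forward_chain(facts: set[str], rules: Rules) -> set[str]:
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

rules: Rules = [
    (frozenset({"socrates_is_human"}), "socrates_is_mortal"),
    (frozenset({"socrates_is_mortal"}), "socrates_dies"),
]
facts = forward_chain({"socrates_is_human"}, rules)
assert "socrates_dies" in facts  # both rules fire in sequence
```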
Reasoning plays a vital role in many aspects of human life, including:
- Scientific inquiry: it is the backbone of the scientific method, helping us test hypotheses, analyze data, and draw evidence-based conclusions.
- Mathematical proof: it is indispensable in mathematics, letting us build rigorous logical systems and derive theorems from axioms.
- Ethical decision-making: it helps us apply moral principles to concrete situations, make moral judgments, and strive for more ethical outcomes.
- Legal reasoning: it is crucial for interpreting statutes, understanding precedents, and constructing legal arguments.
- Planning and strategy: it lets us formulate complex plans, anticipate potential problems, and lay out the steps needed to reach our goals.
Language, in turn, is what makes this kind of thought possible:
- Symbolic thinking: language abstracts concrete concepts into symbols (words), enabling us to handle things not immediately present and to engage in hypothesis and reasoning.
- Accumulation and transmission of knowledge: language lets knowledge and experience be passed down and accumulated across individuals and generations, a mechanism of “cultural evolution” far faster than genetic evolution.
- Complex social cooperation: language enables precise planning, division of labor, and collaboration, building complex social structures.
- Self-awareness and reflection: language gives us the capacity for introspection, letting us describe our own thoughts, feelings, and beliefs, and so form self-awareness.
Next Preview: Independent Thoughts from the 𝕀²·ℙarad𝕚g𝕞 Paradigm Think Tank
Breaking Through the Limitations of Intelligence | We are all biological neural-network agents trapped in the exquisite design of our creator:
- Limitations of perception: our senses (sight, hearing, touch, and so on) perceive only a limited physical spectrum and limited forms of information; there may exist higher dimensions or forms of information we cannot perceive.
- Cognitive biases: our cognitive patterns and ways of thinking evolved to fit particular survival environments and may carry inherent biases and blind spots, limiting our grasp of broader reality.
- Constraints of biological needs: our intelligence is driven by biological imperatives such as hunger, fear, and reproduction, which can distort rational thinking and objective judgment.
- Linear perception of time: we perceive time linearly, distinguishing past, present, and future; a higher-dimensional intelligence might transcend this linear view and understand time more holistically.
A higher-dimensional intelligence might differ from us in other ways as well:
- Non-material intelligence: there may be intelligence that does not depend on biological carriers, residing in information networks, cosmic structures, or energy forms we do not yet understand.
- Perception beyond the senses: it may perceive in ways we cannot imagine, directly sensing information or energy beyond our reach.
- Non-linear view of time: it may perceive and process time non-linearly, with past, present, and future coexisting or influencing one another in ways we cannot comprehend.
- Different goals and values: its goals and values may diverge drastically from our survival and reproductive drives, pursuing more abstract and grander aims, such as understanding the ultimate mysteries of the universe.


Original Link:
- Related 𝕏 articles and videos