Artificial Intelligence and Human Society
Artificial Intelligence: Tool or Subject?
——A Discussion on the Singularity of Artificial Intelligence
Cheng Chengping
Originally published in “Shanghai Normal University Journal (Philosophy and Social Sciences Edition)”, 2021, Issue 6
Abstract: The essence of human intelligence lies in thought, which is a manifestation of consciousness. Consciousness is the result of the emergence of a complex system of neurons, and it does not obey the conservation laws of matter; consciousness is therefore non-material. Artificial intelligence is non-living matter and cannot produce consciousness. In content, artificial intelligence, being matter, cannot simulate non-material consciousness; it can simulate consciousness only in form. This means that artificial intelligence can simulate thought but does not possess thought and cannot think. Artificial intelligence therefore cannot become a subject; it can only be a tool. However, as artificial intelligence improves its formal simulation of human intelligence, some artificial intelligence can become a formal subject. Lacking consciousness, artificial intelligence cannot reach the singularity, but the combination of humans and machines may. Although the singularity of human-machine integration will not pose a serious threat to human existence, it will bring severe challenges to the existing social order and to humanity.
Keywords: Artificial Intelligence; Human Tools; Formal Subject; Embodied Cognition; Non-material; Human-Machine Integration; Singularity
About the Author: Cheng Chengping, Professor and Doctoral Supervisor at the School of Economics and Management, Wuhan University (Wuhan, Hubei 430072).
The question of whether artificial intelligence is a tool or a subject is both a scientific and a philosophical issue, and the academic community has yet to reach a consensus on it. Answering it scientifically helps all sectors of society view artificial intelligence and its development rationally. After defining the concepts of tools, subjects, and artificial intelligence, and analyzing the origin of human intelligence and the nature of human consciousness, this study concludes that artificial intelligence can only be a tool, not a subject. On this basis, the paper further analyzes the widely discussed issue of the artificial intelligence singularity, arguing that artificial intelligence itself cannot reach the singularity, but the combination of humans and machines may.

The main innovations of this paper lie in two aspects. First, by clarifying the origin of human intelligence and demonstrating the non-materiality of consciousness, it concludes that artificial intelligence has no thought, cannot think, and can simulate human intelligence only in form, not in content; it is merely a human tool. Second, it demonstrates that artificial intelligence cannot reach the singularity. Both innovations carry important theoretical value and practical significance.

1. Tools, Subjects, and Artificial Intelligence

To answer correctly whether artificial intelligence is a tool or a subject, we must first define the concepts of tools, subjects, and artificial intelligence.

1. Tools and Subjects

Tools typically refer to instruments used by humans; more abstractly, a tool is a means to achieve a goal.

The concept of a subject has different meanings in different disciplines.
For example, in philosophy, a subject is a person capable of cognizing objects and engaging in practice; in systems theory, a subject is a principal component of a system; in criminal law, a subject is a person who bears criminal responsibility for a crime; in civil law, a subject is a citizen or legal person who enjoys rights and bears obligations; in international law, a subject is an exerciser of sovereign state power and the bearer of the corresponding obligations.

To give the discussion a degree of generality, this paper treats the concept of a subject from a philosophical perspective. The current philosophical concept of a subject, however, is limited to humans and excludes everything else. To keep the concept open, this paper broadens its extension, since with the development of science and technology and the expansion of human experience it may become possible to discover or create objects with the characteristics of a subject, for example, extraterrestrial beings or created artifacts that exhibit those characteristics. The defining characteristics of a subject are the capacities for cognition and practice, in short, subjective initiative. Viewed from the relationship between subject and object, subjective initiative means that the party with subjective capacity reflects the object, forms rational or perceptual understanding through a series of abstractions, and then acts back upon the object through practice. This paper therefore defines a subject as a person or object with subjective initiative.
This broadened concept of a subject lays the conceptual foundation for admitting artificial intelligence and other objects into the category of subjects.

From the perspective of the relationship between tools and subjects, the subject is the end, and the tool is the means to the subject’s ends. From the perspective of causality, the subject can be self-caused: in econometric language, the subject exhibits autocorrelation, while tools do not.

2. Artificial Intelligence

Because academia understands and defines artificial intelligence in differing ways, this paper holds that, for a unified understanding, it is most appropriate to define artificial intelligence historically. In 1956, scientists including John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon first proposed the concept of “artificial intelligence” (AI) at a conference at Dartmouth College in the United States, defining AI as machines that simulate, extend, and enhance human intelligence.

This concept of AI contains two keywords: human intelligence and machine. At the time, McCarthy and his colleagues defined neither term. The understanding of human intelligence remains controversial, but that of machines does not: a machine typically refers to a device assembled from metal and non-metal components that operates and performs work by consuming energy. It should be emphasized that the material basis of machines is inorganic rather than organic.

Different disciplines understand or define human intelligence differently.
For example, computer scientists, starting from the idea that the world is computable, view human intelligence as “computation”; mathematicians regard it as “abstract ability” or “the ability to give logical proofs”; linguists believe its key lies in “understanding”; logicians see it manifested in “reasoning”; cognitive scientists view it as “thinking” or “thought”. Considering the extension of these concepts, “thought” has the broadest extension among the various understandings of human intelligence, since thought can encompass computation, abstraction, logical proof, understanding, reasoning, and so on. Although thought itself is understood and defined in different ways, these understandings share a common point: thought is the process of revealing the inherent and essential characteristics of things. Thought can be divided into basic forms or types such as imaginal thinking, abstract thinking, and inspirational thinking; abstract thinking can be further divided into formal-logical thinking and dialectical-logical thinking.

To reach a consensus, then, we can define human intelligence as thought. Indeed, if human intelligence is understood merely as computation, reasoning, or abstraction, it does not adequately encompass or reflect the whole of human intelligence. For example, the “logical proof ability” referred to by mathematicians covers formal logic, which includes inductive and deductive logic but excludes dialectical logic. Current artificial intelligence has inductive and deductive capabilities but not yet the capability for dialectical logic; whenever it encounters problems of dialectical logic, AI built on logicism runs into difficulty. The “reasoning” referred to by logicians includes causal reasoning and counterfactual reasoning.
The difference between the two is that the former is grounded in facts, while the latter may run counter to them. Judea Pearl, the computer scientist who won the 2011 Turing Award, has stated that current AI lacks the ability to reason causally, and that endowing AI with this ability is a major challenge facing AI researchers.

2. Artificial Intelligence Is a Tool, Not a Subject

Having defined the concepts of tools, subjects, and AI, we next need to clarify the origin of human intelligence and establish the non-materiality of consciousness. On that basis, we argue from the material character of AI that AI is a tool and not a subject, and clarify that AI can nonetheless become a formal subject.

1. The Origin of Human Intelligence and the Non-materiality of Consciousness

First, the origin of human intelligence. Following the analysis above, we can define human intelligence as thought. Thought manifests concretely as the abilities to compute, to prove logically, to reason, and to understand, all of which have subjective characteristics. Cognitive science indicates that human thought is a manifestation of human consciousness; a person without consciousness cannot have thought. There is as yet no consensus in academia on the meaning of consciousness, but psychology typically defines it as “the psychological phenomena a person is currently perceiving.” According to whether a psychological state can be perceived, consciousness can be distinguished into the “conscious” and the “unconscious” or “subconscious.”

Regarding the origin of consciousness, there are two viewpoints, materialist and idealist. Materialism holds that consciousness arises from the brain and is a function of the brain; some materialists even hold that consciousness itself is a form of matter.
Idealism holds that the mind (consciousness) and the brain are two separate entities; some idealists even hold that the mind can exist independently of the brain. Modern neurophysiology holds that consciousness must be predicated on life: it is the result of the organized, complex system of neurons in the brain. Research in cognitive neuroscience shows that if certain neurons in the brain are damaged, the corresponding consciousness is lost, and that neurons in different brain regions govern different aspects of consciousness. This indicates a close relationship between consciousness and neurons and tells against the view that consciousness can exist independently of the brain. The psychologist Susan Blackmore notes that, to date, no non-living entity has been found to produce or possess consciousness.

Embodied cognition holds that human consciousness arises not only from neurons in the brain but also from neurons outside it. Maria Brincker argues that human consciousness originates in the perception of all the body’s neurons and their interaction with the external world. Embodied cognition is gradually gaining recognition in academia because many skills cannot be expressed by the brain alone; such skill knowledge is hidden in the sensory and motor neurons throughout the body. For example, swimming skill acquired through training cannot be fully expressed as explicit knowledge in the brain; it is a form of tacit knowledge embedded in the body’s motor neurons, and one cannot learn to swim merely by studying swimming in a classroom.

The analysis above indicates that human intelligence originates in human consciousness, that human consciousness arises from the emergence of a complex system of neurons in a living organism, and that consciousness cannot exist independently of the living body.

Second, consciousness is non-material. The studies above indicate that consciousness arises from the emergence of a complex system of neurons in living organisms.
This poses a philosophical question: can matter generate non-matter? Or, more generally, can “nothing” and “something” transform into each other? In fact, transformation between forms is commonplace in nature: inorganic matter can generate organic matter; one form of matter can transform into another; one form of energy can transform into another; matter and energy can transform into each other. We therefore have reason to believe that matter can generate non-matter; consciousness, for instance, is non-matter that emerges from organic matter, the complex system of neurons. The view that consciousness is non-material can be supported by at least four arguments.

Argument one: all matter obeys the laws of conservation of mass and energy, while consciousness does not. Matter can change from one form to another, but its mass neither increases nor decreases; that is, matter obeys the law of conservation of mass. Consciousness, by contrast, does not obey conservation laws, because it can be replicated without limit: a person’s viewpoint can be accepted by countless others without the original holder losing it.

Argument two: matter can be detected, while consciousness cannot. Humans perceive consciousness only through introspection. Thus far, advanced detection technologies such as brain imaging scanners have failed to detect the existence or composition of consciousness.

Argument three: matter is limited in space and time, while consciousness is not. Professor Han Shuifa of Peking University holds that consciousness has the character of transcending space and time.
For example, written civilization spans only some 5,700 years, and modern physics has a history of just over 400 years since Galileo; yet humanity’s understanding of nature can reach back to the beginning of the universe 15 billion years ago, and can observe and analyze celestial bodies more than 30 billion light-years away.

Argument four: matter follows causality, while consciousness does not completely. Humans can be self-caused: they can set their own behavioral goals and norms. Art, theory, and value beliefs are all created by humans for their own needs, not imposed on humanity by nature; tools of every form are likewise created by humans for their own purposes and do not exist ready-made in nature. Human creativity is boundless, rooted in consciousness’s transcendence of space and time and in this self-caused character. Humans can be the cause of the occurrence and development of other things, as well as of their own.

From these four arguments we conclude that consciousness is non-material.

2. Artificial Intelligence Can Become a Formal Subject

According to the concept of AI, AI is a machine composed of inorganic rather than organic matter. Blackmore argues that since AI is not composed of organic matter and has no neurons, “artificial consciousness is unattainable.” Consciousness therefore cannot emerge in AI as it does in humans, and without consciousness AI cannot possess human intelligence.

In terms of content, AI is material while human intelligence is non-material, and between the two lies an insurmountable chasm: material AI cannot simulate non-material human intelligence.
Lacking consciousness, AI cannot simulate consciousness in content, which means AI cannot possess thought.

In terms of form, however, whether for material AI or non-material human intelligence, simulation is possible wherever formal commonalities exist. At present AI simulates human intelligence in three basic forms: symbolic, connectionist, and behaviorist. That AI can simulate human intelligence formally means it can, to a degree, exhibit thought-like capabilities in form.

But AI that can simulate human intelligence only in form faces two limitations. First, it has form without content, so it cannot “understand”: it cannot grasp the meaning of the roles it plays or the significance of its actions and purposes; lacking genuine thinking ability, it lacks creativity. Second, by Gödel’s incompleteness theorem, humans may never fully characterize human intelligence and therefore cannot formalize all of it, so AI cannot completely simulate human intelligence even in form.

Nonetheless, since AI can simulate human intelligence in form, it has the potential to become a formal subject. A subject is a person or object with subjective initiative; for AI to become a formal subject means that AI has subjective initiative in form. Subjective initiative has three components: self-awareness, free will, and moral responsibility. Research shows that some AI possesses formal self-awareness, weak free will, and a capacity for morality, but lacks substantive self-awareness and cannot independently bear moral responsibility; such AI thus has weak subjective initiative. In other words, some AI can become a formal subject.

AI with the characteristics of a formal subject can respond to problems in two ways: formalized responses and experiential responses. A formalized response addresses any problem that can be formalized by means of formal methods.
For example, a logical reasoning problem can be handled by symbolic AI using formal methods. An experiential response applies when similar problems have arisen in the past and have had successful solutions, which AI can reuse on similar future problems. With continuing advances in big data, smart chips, and algorithms, AI can accumulate vast amounts of “experience” while computing far faster than humans, helping people quickly solve many problems amenable to experiential methods. But when it meets a problem that can neither be formalized nor matched to prior experience, AI is at a loss.

3. Is the Singularity of Artificial Intelligence Possible?

Having answered whether AI is a tool or a subject, the logical next step is to ask whether the singularity of AI is possible and, if so, how it might be achieved.

1. The Singularity of Artificial Intelligence Is Impossible

The AI singularity is a topic of widespread public concern, and answering the question scientifically helps address those concerns and dispel doubts. According to Professor He Huaihong of Peking University, in 1958 John von Neumann, known as the “father of the computer” and the “father of game theory,” first used the term “singularity” while discussing changes in computer technology with the mathematician Stanisław Ulam. The American mathematician and novelist Vernor Vinge was the first to use “singularity” formally in the field of AI. But the most famous discussion of the singularity comes from the futurist Ray Kurzweil.
In “The Singularity Is Near,” Kurzweil argues that as AI technology accelerates, a technological singularity will be reached at which AI greatly surpasses human intelligence and robots become the “successors” of human evolution and thought. This novel view has elicited both excitement and alarm, and the long-running academic debate it sparked has yet to reach a consensus.

David Chalmers, director of the Centre for Consciousness at the Australian National University, believes the evolution of AI comes mainly from algorithms and that finding the right algorithm could bring about the AI singularity. The main theoretical basis for the AI singularity is Moore’s Law: AI technology develops exponentially until it reaches a point of qualitative change, or mutation, called the singularity. Human intelligence develops far more slowly than AI, so when the singularity arrives, AI will surpass human intelligence.

Views opposing the AI singularity fall into three main categories. First, the argument from mind-body dualism. Searle argues: “Only by believing that the mind can be conceptually and experientially separated from the brain—strong forms of dualism—can we hope to reproduce the mind through writing and running programs.” Cognitive neuroscience, however, indicates that mind (consciousness) and brain are inseparable. Searle’s point is that the mind is shaped and constrained by the brain; even if AI can simulate the mind, it cannot reproduce psychological processes, so the AI singularity is impossible. Second, the argument from weak computationalism, which holds that the philosophical foundation of the AI singularity is strong computationalism.
Strong computationalism holds that the world is computable and that the human brain can be viewed as a digital computer; but the world is not completely computable, and the brain is not merely a digital computer, for computation is only part of its function. Third, the argument from the material basis. On this view, an object’s function depends on both its structure and its material basis. Connectionist AI, which tries to simulate the organization of brain neurons while discarding their biological basis, is therefore unlikely to succeed in simulating human intelligence. As the cognitive neuroscientist Antonio Damasio points out in “Self Comes to Mind,” living organisms are the basis of the conscious mind; without living organisms there can be no conscious mind.

In response to the first category, this paper argues that even if mind and brain could be separated, the AI singularity still could not be realized: the mind is non-material, while AI is non-living matter; AI can neither possess nor simulate non-material content, and merely simulating human intelligence in form will not surpass human intelligence. In response to the second, this paper argues that non-material human intelligence is not completely computable, so AI based on strong computationalism cannot simulate the mind. In response to the third, this paper argues that although living organisms are the basis of conscious minds, if conscious minds were also material, then with advancing technology their mechanisms or functions might be understood; and by the theory of functional assignment, any physical function that can be understood can eventually be objectified and reproduced by humans.
For example, once scientists grasped the aerodynamics of bird flight, they could build airplanes; an airplane is not a bird, yet it flies.

From the arguments above, this paper concludes that the AI singularity is impossible.

2. The Singularity May Be Achieved Through Human-Machine Integration

If AI cannot reach the singularity, how might the singularity be realized? This paper argues that the combination of AI and humans into an integrated human-machine entity may achieve it. Human-machine integration lets human intelligence and AI empower each other, complement each other’s strengths, and develop together, raising the overall level of intelligence until the intelligence of the integrated entity far exceeds human intelligence, yielding the singularity Kurzweil describes. This singularity differs from Kurzweil’s chiefly in its carrier: the former’s carrier is the human-machine entity, the latter’s is AI. Human-machine integration is human-led and in essence still belongs to the human, whereas AI belongs to matter.

Kurzweil’s singularity view has drawn attention and even panic from society for two reasons: human curiosity about new things, and widespread misunderstanding of the AI singularity. Research indicates that AI must meet two conditions simultaneously to threaten humanity: it must have a value system different from humanity’s, and it must be able to reproduce. If the two conditions cannot be met together, then even if an AI singularity occurred it would pose no threat. In fact, as argued above, AI cannot meet both conditions simultaneously, so even an AI singularity would be no cause for alarm.
Since human-machine integration remains human-led, it does not have a value system different from humanity’s, and there is likewise no need to panic about it.

A new question that does demand serious consideration is that when the singularity of human-machine integration arrives, humanity will have entered a post-human or transhuman era. Post-humans or transhumans still belong to humanity, but their intelligence far exceeds that of present humans. Moreover, through enhancement by NBIC technologies (nanotechnology, biotechnology, information technology, and cognitive science), humans may realize the dream of immortality; human biological characteristics may change, no longer purely organic life but a combination of organic and inorganic, which will also alter identity based on sets of memories, inevitably disordering the existing social order and forcing its reconfiguration.

Whether and when the singularity of human-machine integration arrives does not depend on anyone’s will. In fact, human-machine integration has already begun and is developing rapidly, for three reasons. First, it harbors enormous commercial and health-care value. For example, research shows that many people in China suffer from sleep disorders; China’s sleep-economy market exceeded 400 billion yuan in 2020 and is expected to surpass 1 trillion yuan by 2030, and human-machine integration is expected to help those with sleep disorders sleep normally again. It can also help people who are blind, deaf, or paralyzed recover normal living abilities. Second, there is the need to explore and expand human potential.
For instance, human-machine integration can let people see what they could not see, hear what they could not hear, remember what they could not remember, and do what they could not do, broadening human horizons, expanding experience, enriching thought, and perfecting character. Third, the internal logic of AI technology development. Technology develops under two internal logics: development itself is the driving force and source of technological advance, and technological development is path-dependent. Human-machine integration is an important direction of AI development, and the trend will only strengthen.

4. Conclusion

Tools typically refer to instruments used by humans; the broadened concept of a tool is a means to achieve a goal. A subject is a person or object with subjective initiative. AI is a machine that simulates, extends, and enhances human intelligence.

Human intelligence originates in human consciousness; without consciousness there is no human intelligence. Consciousness is the result of the complex system of human neurons, and it is non-material. AI is non-living matter; it can neither possess consciousness nor simulate it in content, and therefore cannot truly simulate human intelligence. It lacks the capacities to think and to understand, for it cannot grasp the meaning of the roles it plays or the significance of its actions and purposes; and lacking genuine thought and understanding, it lacks creativity. AI can, however, simulate human intelligence to a degree in form, and so “appears” intelligent. AI that simulates human intelligence only in form is merely a tool and cannot become a subject, though it can become a formal subject.

Since AI lacks consciousness, the AI singularity is impossible.
However, human-machine integration absorbs the strengths of both human intelligence and AI, raising the overall level of intelligence while retaining human subjectivity. In particular, as NBIC technologies enhance or improve humans, both physical and intellectual capacities will rise markedly, making the singularity of human-machine integration possible.

If that singularity is realized, it will bring many changes. First, humanity may achieve the dream of immortality, becoming post-human or transhuman; a society in which humans do not die may undergo unpredictable changes in ideas and behavior, and adapting to them will be a major challenge. Second, how human-machine integration is to preserve humanity is a deep dilemma, for humans do not yet know where the boundary of humanity lies; even if the boundary were known, keeping people from crossing it would be another major challenge. Third, there are both positive and negative effects on equality. On the one hand, human-machine integration may make humans more equal. Nature deals humanity its “rolls of the dice,” producing natural differences among individuals, especially in capacity for labor; this inequality, acknowledged by thinkers such as Marx, can be addressed through technology, and the pursuit of equality is part of human nature. On the other hand, human-machine integration may also deepen inequality, since technology is rarely free, and unequal wealth may mean unequal access to it. Ensuring that new technology benefits everyone equally, achieving equality of physical capability and intelligence among people, is a major challenge.

The difficulties and challenges above require all of humanity to address them hand in hand; no individual, organization, or even single country can resolve them alone.
Solving them may demand even more wisdom than developing AI technology itself.

(The article has been abridged; the full text can be read on the journal’s homepage.)