The Origins of Artificial Intelligence

Artificial Intelligence (AI) is an interdisciplinary subject that has developed through the integration of fields such as computer science, cybernetics, information theory, linguistics, neurophysiology, psychology, mathematics, and philosophy. Since its inception, AI has gone through many ups and downs, but it has finally gained recognition as an emerging discipline and attracts increasing interest and attention. Not only have many other disciplines begun to incorporate or borrow AI techniques, but expert systems, natural language processing, and image recognition within AI have become three major breakthroughs of the emerging knowledge industry.

The idea of artificial intelligence can be traced back to the 17th century, when Pascal and Leibniz were among the first to conceive of intelligent machines. In the 19th century, the British mathematicians Boole and De Morgan proposed the “laws of thought,” which can be considered the beginning of artificial intelligence. In the 1830s, the British scientist Babbage designed the Analytical Engine, a programmable “calculating machine” regarded as the precursor of computer hardware and AI hardware. The advent of the electronic computer made research on artificial intelligence truly possible.

As a discipline, artificial intelligence emerged in 1956, when John McCarthy, later known as the “father of AI,” together with a group of mathematicians, computer scientists, psychologists, and neurophysiologists, proposed it at a conference held at Dartmouth College. Owing to their different research perspectives, several schools of thought have formed in AI research, including the symbolic school, the connectionist school, and the behaviorist school.

Traditional artificial intelligence is symbolic and is based on the physical symbol system hypothesis proposed by Newell and Simon. A physical symbol system consists of a set of symbols, which are physical patterns that can occur as components of symbol structures, together with operations that generate new symbol structures from existing ones. The physical symbol system hypothesis posits that a physical symbol system is both a sufficient and a necessary condition for intelligent behavior. The school’s representative work is the General Problem Solver (GPS): a real system is abstracted into a symbol system, and problems are then solved by search within that symbol system.
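
To make the symbolic view concrete, the following is a minimal sketch of problem solving as search over a symbol system, in the spirit of (but far simpler than) GPS: states are symbol structures, operators rewrite them, and a search procedure looks for a path from the initial state to a goal. The water-jug puzzle and the function names are illustrative choices, not taken from any historical system.

```python
from collections import deque

# A tiny symbolic problem: states are symbol structures (here, tuples of
# numbers standing for jug contents), operators rewrite one state into
# another, and the solver searches the state space for a path from the
# start state to a goal. The 2-jug puzzle is purely illustrative.

def successors(state):
    """Generate (action, new_state) pairs for jugs of capacity 4 and 3 litres."""
    a, b = state
    pour_ab = min(a, 3 - b)   # how much can flow from A into B
    pour_ba = min(b, 4 - a)   # how much can flow from B into A
    moves = [
        ("fill A", (4, b)),
        ("fill B", (a, 3)),
        ("empty A", (0, b)),
        ("empty B", (a, 0)),
        ("pour A->B", (a - pour_ab, b + pour_ab)),
        ("pour B->A", (a + pour_ba, b - pour_ba)),
    ]
    return [(act, s) for act, s in moves if s != state]

def solve(start, goal_test):
    """Breadth-first search over the symbol system; returns a list of actions."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for action, nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None

if __name__ == "__main__":
    # Goal: end up with exactly 2 litres in the 4-litre jug.
    print(solve((0, 0), lambda s: s[0] == 2))
```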

The connectionist school starts from the structure of the human brain’s nervous system and studies the nature and capabilities of non-programmed, adaptive, brain-style information processing, focusing on the information-processing capabilities and dynamic behavior of large populations of simple neurons; this is also referred to as neural computing. Its research emphasizes simulating and realizing the sensory and perceptual processes, imagistic thinking, distributed memory, and self-learning and self-organizing processes of human cognition.
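
By way of contrast with the symbolic sketch above, the following minimal example illustrates the connectionist idea: a single artificial neuron (a perceptron) whose “knowledge” is stored in numeric weights adjusted from examples rather than in explicit symbols. The task (learning logical AND) and all parameter values are arbitrary illustrative choices.

```python
# A single artificial neuron (perceptron) learning the logical AND function.
# Knowledge is stored in numeric weights adjusted from examples, not in
# explicit symbolic rules.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # input weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Adjust the weights in the direction that reduces the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

if __name__ == "__main__":
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train_perceptron(data)
    for (x1, x2), target in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        print((x1, x2), "->", pred, "(expected", target, ")")
```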

The behaviorist school, based on behavioral psychology, believes that intelligence is only manifested in interaction with the environment.
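
A minimal illustration of this view, under the assumption that a simple reactive controller is a fair stand-in, is a sense-act loop in which the agent keeps no internal world model and its behavior emerges purely from its coupling with the environment; the thermostat scenario below is invented for demonstration.

```python
# A minimal sense-act loop in the behaviorist spirit: the agent has no
# internal world model; its "intelligence" is just a reactive rule
# coupled to the environment. The thermostat scenario is illustrative.

def environment_step(temperature, heater_on):
    """Toy environment: the room warms when the heater is on, cools otherwise."""
    return temperature + (1.0 if heater_on else -0.5)

def reactive_agent(temperature, target=20.0):
    """Purely reactive rule: act only on what is currently sensed."""
    return temperature < target  # True = turn the heater on

if __name__ == "__main__":
    temp = 15.0
    for step in range(10):
        action = reactive_agent(temp)          # sense -> act
        temp = environment_step(temp, action)  # environment responds
        print(f"step {step}: heater={'on' if action else 'off'}, temp={temp:.1f}")
```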

The research on artificial intelligence has gone through several stages:

  1. The First Stage: The Rise and Fall of AI in the 1950s. Following the initial proposal of the concept of AI, several significant achievements emerged, such as machine theorem proving, checkers-playing programs, general problem-solving programs, and the LISP list-processing language. However, owing to limited reasoning capabilities and the failure of machine translation, AI research fell into a trough. This stage was characterized by an emphasis on problem-solving methods while neglecting the importance of knowledge.
  2. The Second Stage: The Emergence of Expert Systems from the Late 1960s to the 1970s, marking a new peak in AI research. The research and development of expert systems such as DENDRAL for chemical mass-spectrometry analysis, MYCIN for disease diagnosis and treatment, PROSPECTOR for mineral exploration, and Hearsay-II for speech understanding moved AI toward practical applications. In 1969, the first International Joint Conference on Artificial Intelligence (IJCAI) was held.
  3. The Third Stage: Significant Development of AI in the 1980s with the Development of Fifth Generation Computers. Japan initiated the “Fifth Generation Computer Systems” project in 1982, aiming to make logical reasoning as fast as numerical calculations. Although this project ultimately failed, it sparked a wave of AI research enthusiasm.
  4. The Fourth Stage: Rapid Development of Neural Networks in the Late 1980s. The first international conference on neural networks was held in the USA in 1987, marking the birth of this new discipline. Subsequently, investments in neural networks grew, leading to rapid advancements.
  5. The Fifth Stage: A New Peak in AI Research in the 1990s. With the development of network technology, particularly the Internet, AI research shifted from single intelligent agents to distributed AI based on network environments. Research focused on distributed problem solving based on shared goals and on multi-agent systems, making AI more practical. In addition, the Hopfield network and multi-layer neural network models led to a flourishing of research on and applications of artificial neural networks. AI has penetrated many areas of social life.

IBM’s “Deep Blue” computer defeated the world chess champion, and the USA made multi-agent systems a key research focus of its information superhighway plan. Softbots based on agent technology have found extensive applications in software and in web search engines. Sandia National Laboratories established the world’s largest “virtual reality” laboratory, aiming to achieve more user-friendly human-computer interaction through data helmets and data gloves and to build better intelligent user interfaces. Significant advances have been made in image processing and recognition and in speech processing and recognition; IBM’s ViaVoice speech-recognition software, for example, has made voice an important medium for information input. Major computer companies worldwide have once again placed “artificial intelligence” on their research agendas. It is widely believed that computers will evolve toward networking, intelligence, and parallel processing, and that the information technology field of the 21st century will center on intelligent information processing.

Currently, the main research areas in artificial intelligence include: distributed AI and multi-agent systems, artificial cognitive models, knowledge systems (including expert systems, knowledge base systems, and intelligent decision-making systems), knowledge discovery and data mining (extracting useful knowledge from large, incomplete, fuzzy, and noisy data), genetic and evolutionary computation (simulating biological genetics and evolution theories to reveal the laws of human intelligence evolution), artificial life (constructing simple artificial life systems, such as robotic insects, and observing their behavior to explore the mysteries of primitive intelligence), and AI applications (such as fuzzy control, intelligent buildings, intelligent human-computer interfaces, intelligent robots, etc.).
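
As one concrete illustration from this list, genetic and evolutionary computation can be sketched as follows: a population of candidate solutions is improved over generations by selection, crossover, and mutation. The “OneMax” fitness function and all parameters below are arbitrary demonstration choices, not drawn from any particular system described above.

```python
import random

# A minimal genetic algorithm: evolve bit strings toward all ones
# ("OneMax") using selection, crossover, and mutation. Population size,
# rates, and the fitness function are arbitrary demo choices.

def fitness(bits):
    return sum(bits)  # more 1s = fitter

def evolve(length=20, pop_size=30, generations=50, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Selection: tournament of two randomly chosen individuals.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randint(1, length - 1)   # single-point crossover
            child = p1[:cut] + p2[cut:]
            # Mutation: flip each bit with a small probability.
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best individual:", best, "fitness:", fitness(best))
```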

Although significant achievements have been made in AI research and applications, there is still a long way to go before they can be promoted and applied comprehensively, and many issues remain to be resolved, requiring collaboration among experts from multiple disciplines. The main future research directions of artificial intelligence include: AI theory, machine learning models and theories, imprecise knowledge representation and reasoning, commonsense knowledge and reasoning, artificial cognitive models, intelligent human-computer interfaces, multi-agent systems, knowledge discovery and acquisition, and foundational AI applications.
