In the 1940s, toward the end of World War II, the world’s first general-purpose electronic computer, ENIAC, was born. It was originally developed to help the military compute ballistic trajectories and firing angles far faster than was possible by hand. This work laid a solid technical foundation for the later realization of artificial intelligence.
Research in neuroscience showed that the human brain is an electrical network composed of neurons, and that a neuron exists in only two states, activated or inhibited, with no intermediate state. Wiener’s cybernetics, Shannon’s information theory, and Turing’s theory of computation subsequently suggested that building a brain out of electronic components might be possible.
In 1943, physiologist W.S. McCulloch and mathematician W. Pitts proposed a mathematical model of the neuron, known as the M-P model. The model is an abstraction and simplification built from the structure and behavior of biological neurons; in essence, it models a single biological neuron. The name M-P comes from the initials of its two proposers.

▲W.S. McCulloch, one of the proposers of the M-P model
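As a rough illustration (not from the original text), the M-P neuron can be written in a few lines: inputs are weighted, summed, and compared against a threshold, and the neuron either fires or stays inhibited, with nothing in between. The weights and threshold below are chosen arbitrarily for the example.

```python
import numpy as np

def mp_neuron(x, w, theta):
    """M-P neuron: output 1 (activated) if the weighted input sum reaches
    the threshold theta, otherwise 0 (inhibited); no intermediate state."""
    return 1 if np.dot(w, x) >= theta else 0

# With unit weights and threshold 2, a two-input M-P neuron acts like logical AND.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, mp_neuron(np.array(x), w=np.array([1, 1]), theta=2))
```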
In 1949, psychologist Donald Hebb proposed the Hebb learning rule in his book “The Organization of Behavior” and was the first to introduce the concept of connectionism. Connectionism describes the way the brain works: its operations are carried out through the connections between brain cells. Hebb pointed out that if a source neuron and its target neuron are excited at the same time, the synaptic connection between them is strengthened; this is the biological basis of the Hebb rule. Hebb’s greatest contribution was his important hypothesis about how neural networks work: the information in a neural network is stored in the weights of its connections.

▲Donald Hebb
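Hebb’s hypothesis maps directly onto a simple weight-update rule: a connection is strengthened in proportion to the joint activity of its source and target neurons. The sketch below is purely illustrative; the learning rate eta is an assumed parameter, not something specified by Hebb.

```python
import numpy as np

def hebb_update(w, x, y, eta=0.1):
    """Hebb rule: each weight grows in proportion to the product of the
    source (pre-synaptic) activity x and the target (post-synaptic) activity y."""
    return w + eta * y * x

w = np.zeros(3)                  # the stored "knowledge" lives in these weights
x = np.array([1.0, 0.0, 1.0])    # source neuron activities
y = 1.0                          # target neuron activity
print(hebb_update(w, x, y))      # only co-active connections are strengthened
```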
In 1950, Turing published a groundbreaking paper predicting the possibility of creating machines with true intelligence and proposed a method for testing whether a machine possesses true intelligence, famously known as the Turing Test. Turing stated that if a machine can engage in conversation with a human without its machine identity being discernible, it can be considered intelligent. The Turing Test was the first serious proposal in the philosophy of artificial intelligence.
In 1951, Marvin Lee Minsky, together with Dean Edmonds, built the first neural network machine (named SNARC). Over the next 50 years, Minsky was a leading figure in artificial intelligence. The image below shows Marvin Lee Minsky.

▲Marvin Lee Minsky
In 1955, Newell and Simon (a later Nobel Prize winner), with the assistance of J.C. Shaw, developed the “Logic Theorist.” The program could prove 38 of the first 52 theorems in “Principia Mathematica,” and some of its proofs were more novel and elegant than those in the original text. Simon believed they had “solved the mysterious mind/body problem, explaining how a system composed of matter can have the properties of mind” (John Searle later called this philosophical position “strong AI”: the claim that machines can think in the same sense that humans do).
Eighteen Years of Golden Development: 1956-1974
In 1956, Minsky and John McCarthy organized the Dartmouth Conference. One assertion made at the conference was that “every aspect of learning or any other characteristic of intelligence should be precisely described so that machines can simulate it.” Attendees included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Newell, and Simon, each of whom made significant contributions to AI research in the first decade.
At this conference, Newell and Simon discussed the “Logic Theorist,” while McCarthy persuaded attendees to adopt the term “artificial intelligence” as the name for the field. The Dartmouth Conference established the name and tasks of artificial intelligence, marking the emergence of the initial achievements and the earliest researchers, and is widely regarded as the birth of AI.
The years following the Dartmouth Conference were the pioneering era of artificial intelligence, with exciting achievements appearing almost everywhere. People discovered that computers could solve algebra word problems, prove geometric theorems, and even learn and use English. Before this, no one had imagined that machines could be so intelligent. Amid this flood of achievements, people began to hold overly optimistic views of artificial intelligence, even believing that fully intelligent machines would appear within 20 years.
In 1958, computer scientist Frank Rosenblatt proposed the perceptron, the first artificial neural network of real practical value. The perceptron could already solve some linearly separable problems. Its theoretical model was based on the M-P neuron, and it came with a simple, workable learning algorithm.

▲Frank Rosenblatt
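Rosenblatt’s learning algorithm adjusts the weights only when a sample is misclassified, and it converges on linearly separable data. The following sketch (illustrative only, with an assumed learning rate and epoch count) trains a perceptron on the linearly separable OR problem.

```python
import numpy as np

def train_perceptron(X, y, eta=0.1, epochs=20):
    """Rosenblatt perceptron: nudge the weights toward misclassified samples."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b >= 0 else 0
            error = target - pred          # 0 when the sample is already correct
            w += eta * error * xi
            b += eta * error
    return w, b

# OR is linearly separable, so the perceptron converges.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
print([1 if np.dot(w, xi) + b >= 0 else 0 for xi in X])  # [0, 1, 1, 1]
```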
In 1960, electrical engineer Bernard Widrow and his student Marcian Hoff published “Adaptive Switching Circuits,” in which they implemented a neural network in hardware, proposed the ADALINE network, and introduced the Widrow-Hoff algorithm (also known as the LMS algorithm). The ADALINE network can be used for adaptive filtering, and later in this book we will also see how to use it for printed character recognition.

▲Bernard Widrow and Marcian Hoff
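Unlike the perceptron rule, the Widrow-Hoff (LMS) rule updates the weights at every step so as to reduce the squared error of ADALINE’s linear output before any thresholding. The sketch below is a minimal illustration with made-up data; the learning rate and epoch count are assumptions.

```python
import numpy as np

def lms_train(X, d, eta=0.05, epochs=50):
    """Widrow-Hoff / LMS rule: stochastic gradient descent on the squared
    error between the desired response d and ADALINE's linear output."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, di in zip(X, d):
            y = np.dot(w, xi)          # linear output, before any threshold
            w += eta * (di - y) * xi   # LMS weight update
    return w

# Fit the simple linear mapping d = 2*x1 - x2 from noise-free samples.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
d = np.array([2.0, -1.0, 1.0, 3.0])
print(lms_train(X, d))   # approaches [2, -1]
```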
During this period there was great enthusiasm for artificial intelligence, with Minsky even declaring in 1967 that the problem of creating artificial intelligence would be substantially solved within a generation.
In June 1963, MIT received $2.2 million in funding from the newly established ARPA (later DARPA, the Defense Advanced Research Projects Agency) to support the MAC project, which included the artificial intelligence research group established by Minsky and McCarthy five years earlier.
Subsequently, ARPA provided $3 million annually until the 1970s. ARPA also provided similar funding for Newell and Simon’s group at Carnegie Mellon University and the Stanford AI project (founded by John McCarthy in 1963). Another important AI laboratory was established in 1965 by Donald Michie at the University of Edinburgh. For many years, these four research institutions remained the centers of AI academia.
In 1969, Minsky and Seymour Papert published the book “Perceptrons.” In it they carefully analyzed the capabilities and limitations of single-layer neural networks, of which the perceptron is the representative, and pointed out that a perceptron cannot handle nonlinear problems such as XOR. The book’s criticism of the perceptron dealt a severe blow to neural network research and set the stage for a roughly decade-long interruption in the field.
In 1972, Kohonen proposed the self-organizing map (SOM), a network that can cluster data through unsupervised self-learning, as sketched below.
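As a rough sketch of the idea (not from the original text), a one-dimensional SOM can be written as follows: each sample pulls its best-matching unit, and to a lesser degree that unit’s neighbours, toward itself, so the units gradually organize themselves around the clusters in the data. The unit count, learning rate, and neighbourhood schedule are all assumptions made for the example.

```python
import numpy as np

def som_1d(data, n_units=4, eta=0.5, epochs=30, seed=0):
    """Minimal 1-D self-organizing map trained by unsupervised self-learning."""
    rng = np.random.default_rng(seed)
    units = rng.uniform(data.min(), data.max(), n_units)
    for epoch in range(epochs):
        radius = max(1.0 - epoch / epochs, 0.01)          # shrinking neighbourhood
        for x in rng.permutation(data):
            bmu = np.argmin(np.abs(units - x))            # best-matching unit
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * radius ** 2))
            units += eta * h * (x - units)                # pull units toward the sample
    return np.sort(units)

data = np.array([0.1, 0.15, 0.2, 0.8, 0.85, 0.9])
print(som_1d(data))   # the units settle near the two clusters in the data
```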
The year 1972 also saw the birth of the Prolog language. Many Prolog dialects have appeared since then, and Prolog long remained a primary method and tool in early artificial intelligence research.
The First Low Point: 1974-1980
Because the difficulty of artificial intelligence had initially been misjudged, by the 1970s AI began to face criticism from all sides, along with funding problems. The earlier, overly optimistic promises could not be fulfilled, and funding for AI was sharply cut; by 1974 it had become almost impossible to find money for AI projects. Meanwhile, Minsky’s criticism of the perceptron dealt a near-fatal blow to neural networks, which fell silent for nearly a decade.
During this period, artificial intelligence was also criticized by academics. Some philosophers strongly opposed the claims of AI researchers, arguing that Gödel’s incompleteness theorem showed that a formal system (such as a computer program) cannot determine the truth of certain statements, whereas a human can. In 1980, John Searle proposed the “Chinese Room” thought experiment, attempting to show that a program does not “understand” the symbols it manipulates; and if a machine does not understand symbols, he argued, it cannot be said to be thinking.
Prosperity Period: 1980-1987
In the 1980s, a type of program known as “expert systems” began to be accepted by companies worldwide, with knowledge processing becoming the focal point of the artificial intelligence field. Expert systems could deduce answers to specific domain problems based on a set of specialized knowledge. For example, a system named MYCIN could diagnose blood infections. The emergence of expert systems made artificial intelligence truly practical.
In 1981, Japan’s Ministry of International Trade and Industry allocated $850 million to the Fifth Generation Computer project, aiming to build machines that could converse, translate, understand images, and think like humans. Other countries responded, with the UK and the US also increasing investment in the AI industry. These moves contributed to a new wave of prosperity in artificial intelligence.
In 1982, John Hopfield proposed a new type of neural network (now known as the Hopfield network). It is a fully connected network with associative memory and is easy to implement in hardware. It revitalized neural network research, which had been silent for a decade.

▲John Hopfield
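A discrete Hopfield network stores patterns in a symmetric weight matrix and recalls them by repeatedly updating neurons until the state settles. The following associative-memory sketch is illustrative only and stores a single pattern.

```python
import numpy as np

def hopfield_store(patterns):
    """Store bipolar (+1/-1) patterns with the Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)        # no self-connections
    return W / n

def hopfield_recall(W, state, steps=10):
    """Asynchronously update neurons; the state falls into a stored pattern."""
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if np.dot(W[i], state) >= 0 else -1
    return state

pattern = np.array([1, -1, 1, -1, 1, -1])
W = hopfield_store(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -1                     # corrupt one bit
print(hopfield_recall(W, noisy))  # recovers the stored pattern
```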
In 1986, Rumelhart and McClelland proposed the BP neural network, a multilayer feedforward network trained with the error backpropagation algorithm. It remains one of the most widely used neural network models today.
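The heart of the BP network is the chain rule: the error at the output layer is propagated backward to compute weight updates for every layer. Below is a compact, illustrative sketch of a one-hidden-layer network trained with backpropagation on the XOR problem; the layer sizes, learning rate, and iteration count are assumptions, not values from the original text.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR, not linearly separable

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output
eta = 1.0

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the output error through the chain rule
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= eta * h.T @ d_out; b2 -= eta * d_out.sum(axis=0)
    W1 -= eta * X.T @ d_h;   b1 -= eta * d_h.sum(axis=0)

print(out.round(3).ravel())   # typically approaches [0, 1, 1, 0]
```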
The Second Low Point: 1987-1993
By the late 1980s, the development of expert systems was also hitting a bottleneck. Expert systems could only handle problems in a specific domain and could not form common-sense concepts, so they could not constitute true intelligence. In addition, large expert systems were costly to maintain, hard to upgrade, and cumbersome to use. Disappointment in artificial intelligence resurfaced, triggering a new round of funding crises.
In 1987, demand in the AI hardware market suddenly collapsed, while the performance of desktop computers from Apple and IBM kept improving; by that year they had overtaken the expensive Lisp machines. The old, specialized systems lost their value almost overnight.
Resurgence: 1993 to Present
Thanks to advancements in computing and Moore’s Law, we now possess computational capabilities unimaginable to our predecessors. Even in the absence of significant breakthroughs in AI principles, remarkable engineering achievements have been made solely based on computational power.
In 1997, IBM’s “Deep Blue” became the first computer system to defeat reigning world chess champion Garry Kasparov.
In 2005, a robotic vehicle developed by Stanford University drove itself 131 miles (about 210 kilometers) along a desert trail.
In 2006, Hinton proposed the deep belief network, a deep neural network and a precursor of deep learning. It is trained layer by layer with a greedy unsupervised method and achieved good results.
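A deep belief network is built by stacking restricted Boltzmann machines (RBMs) and training them one layer at a time with a greedy unsupervised procedure such as contrastive divergence. The sketch below shows a single CD-1 update step for one RBM layer; it is illustrative only, with made-up data and an assumed learning rate.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rbm_cd1_step(V, W, a, b, rng, eta=0.1):
    """One contrastive-divergence (CD-1) update for a single RBM layer.
    V: batch of visible vectors, W: visible-hidden weights,
    a: visible biases, b: hidden biases."""
    ph = sigmoid(V @ W + b)                        # hidden probabilities
    h = (rng.random(ph.shape) < ph).astype(float)  # sampled hidden states
    pv = sigmoid(h @ W.T + a)                      # reconstructed visible layer
    ph2 = sigmoid(pv @ W + b)                      # hidden probs of the reconstruction
    W += eta * (V.T @ ph - pv.T @ ph2) / len(V)
    a += eta * (V - pv).mean(axis=0)
    b += eta * (ph - ph2).mean(axis=0)
    return W, a, b

# A DBN would stack several RBMs, training each layer greedily on the hidden
# activations of the layer below before fine-tuning the whole network.
rng = np.random.default_rng(0)
V = rng.integers(0, 2, size=(16, 6)).astype(float)
W = rng.normal(0, 0.1, (6, 3)); a = np.zeros(6); b = np.zeros(3)
for _ in range(100):
    W, a, b = rbm_cd1_step(V, W, a, b, rng)
```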
In 2009, the Blue Brain Project claimed to have successfully simulated some functions of the human brain.
In 2012, Andrew Ng and Jeff Dean built the Google Brain. Google Brain used a cluster of 16,000 CPU cores to train a neural network with roughly one billion connections, achieving breakthroughs in speech and image recognition.
In 2015, Microsoft used a deep network with residual learning to reduce the ImageNet classification error rate to 3.57%, below the human error rate; the network was 152 layers deep.
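The idea behind residual learning is that each block learns only a correction F(x) that is added back to its input through an identity shortcut, so very deep stacks (such as the 152-layer network mentioned above) remain trainable. The NumPy sketch below is a schematic illustration of a single residual block, not Microsoft’s actual implementation.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

def residual_block(x, W1, W2):
    """Residual block: the layers learn only the residual mapping F(x),
    and the identity shortcut adds the input back, giving relu(F(x) + x)."""
    fx = W2 @ relu(W1 @ x)        # the learned correction F(x)
    return relu(fx + x)           # identity shortcut connection

x = np.ones(8)
W1 = np.zeros((8, 8))             # zero weights: the block learns "no correction"
W2 = np.zeros((8, 8))
print(residual_block(x, W1, W2))  # output equals the input, so extra depth does no harm
```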
In 2016, the deep learning system AlphaGo, running on a cluster of 1,920 CPUs and 280 GPUs, defeated Lee Sedol, becoming the first program to defeat a 9-dan professional Go player without handicap stones.
Deep Blue, for example, was at least 10 million times faster than the computers of the 1950s. Engineering breakthroughs have allowed us to run and test models that previously existed only in theory, greatly accelerating the development of artificial intelligence and its commercialization.
Flourishing AI in China
Currently, China has become one of the fastest-growing countries in AI development. As of 2018, China’s total investment in AI exceeded 70 billion yuan, accounting for over 30% of the global total. The rapid integration of AI technology and industry has also significantly improved production and living efficiency, transforming human life in various aspects from industrial production to consumer services.
In the field of government services, AI technology has been widely applied: identity verification through biometric technologies such as facial recognition and voiceprint recognition; intelligent government services delivered through conversational AI; semantic understanding and sentiment analysis algorithms that gauge the positive or negative sentiment of online opinion; “text analysis + knowledge graph + search” technologies that assist criminal and technical investigations; and computer vision technologies that identify and track key suspects in surveillance footage.
In the financial sector, AI application scenarios are gradually expanding from transaction security to the transformation of the entire financial operating process. AI applications in this field can be divided into three levels: service intelligence, cognitive intelligence, and decision intelligence. Service intelligence draws on improved computing power and supervised machine learning, for example using facial recognition, speech recognition, and intelligent customer service to improve interaction and service quality in finance. Cognitive intelligence relies mainly on supervised machine learning, supplemented by unsupervised learning of feature variables, leading to finer-grained risk identification and pricing. Decision intelligence relies mainly on unsupervised learning to anticipate scenarios that people cannot imagine or that have not yet occurred, guiding and influencing current decisions.
In the healthcare sector, the application of AI in medicine is developing rapidly. In China, AI applications in healthcare are relatively concentrated, with many scenarios focused on the diagnosis and treatment stage of the healthcare industry chain. Internet platform companies such as Tencent and Alibaba, startups such as Tuya Technology, Huayi Huijing, and Yitu Medical, and traditional medical device companies such as Siemens have all made medical imaging a key direction for productizing AI technology, developing a range of AI-assisted imaging products for screening esophageal cancer, lung cancer, breast cancer, colorectal cancer, and diabetic retinopathy.
In the field of autonomous driving, the technology is set to trigger a revolution across the automotive industry chain. The production, distribution, and sales models of traditional car makers will be replaced by new business models, and the boundary between emerging autonomous-driving technology companies and traditional car makers will blur. As the concept of shared cars takes hold, shared mobility built on autonomous driving will gradually replace the traditional notion of privately owned cars. And as regulations and standards for the industry are established, new businesses such as safer and faster autonomous freight and logistics will continue to emerge.
In the manufacturing sector, the application potential of AI is enormous. In research and development, AI algorithms can power digital, automated R&D systems. In fields with long development cycles, high costs, and rich accumulated data, such as pharmaceuticals, chemicals, and materials, this can significantly reduce the uncertainty of R&D and drive the shift from high-risk, high-cost physical experimentation toward low-cost, high-efficiency digital R&D and design.
In production, AI technology can enhance flexible manufacturing, enabling large-scale personalized customization and improving manufacturers’ responsiveness to changes in market demand.
In quality control, AI technology can be applied in manufacturing sub-sectors with high output, complex components, and demanding processes to achieve rapid quality inspection and assurance, deepen the integration of AI with IoT and big data technologies, and establish automated quality inspection across the entire production process.
In supply chain management, AI technology can track changes in supply and demand precisely and match them in real time, improving supply chain efficiency in sectors with volatile demand and complex supply chains. In operations and maintenance, models of equipment operating state built with AI algorithms can monitor changes in operating indicators and predict and resolve risks to equipment, products, and production lines in advance.
In the retail sector, AI is blossoming across business decision-making, precision marketing, customer communication, and other retail scenarios. Applications are still fragmented and in a stage of large-scale experimentation, but they are moving from isolated cases toward integration. Traditional retailers are partnering with e-commerce platforms and startups to build application scenarios around people, goods, scenes, and chains.
For example, JD.com applies AI across the entire retail system, process, and scenario. On the supply side, JD.com has launched an AI platform that allows intelligent algorithms to be reused across scenarios, with more than 1.2 billion invocations per day.
Ant Financial uses AI technology to control financial risk, improve financial efficiency, reduce transaction costs, and enhance the user experience. Its micro-loan business runs on a “310” service model: a 3-minute application, a 1-second lending decision, and zero human intervention. Its vehicle damage-assessment service can estimate repair costs in car insurance claims from a single photo.
Offline retailers such as Suning and Gome are beginning to deploy AI applications that integrate online and offline channels.
AI technology is rapidly moving from the research stage to the application stage. Want to learn how new AI technologies are being applied across industries?
“AI Practice Record,” written by the China Electronic Information Industry Development Research Institute (CCID) and the AI Industry Innovation Alliance, presents nearly 40 cases from different fields, analyzing in depth how the technologies were put into practice and offering valuable experience for future innovation and application.
Book Recommendation

AI Practice Record
China Electronic Information Industry Development Research Institute (CCID)
AI Industry Innovation Alliance
Content Summary:
This book is divided into three parts: overview, general technology, and industry applications.
The overview introduces the current development situation of AI products and the policy environment for AI.
The general technology section carefully selects products from 10 companies whose core competitiveness lies in developing underlying technology, detailing their implementation ideas and current applications.
The industry application section contains 24 cases, primarily gathering application cases that combine AI technology with the real economy, focusing on the expansion of AI technology application scenarios.