Natural language processing, a branch of artificial intelligence and linguistics, is, simply put, the conversion of natural language (such as English or Mandarin) into data (numbers) that computers can use to understand the world. That understanding is sometimes used in turn to generate natural language text that reflects it (i.e., natural language generation). The term is commonly abbreviated as NLP.
But did you know that the abbreviation NLP has its own "Li Kui" and "Li Gui", that is, a genuine article and an impostor that happens to share the same name?
NLP is also the abbreviation for Neuro-Linguistic Programming, a pseudoscience. People who are not familiar with these two fields often confuse them. Today, let’s uncover their true nature.
A Pseudoscience Founded by a Psychologist and a Linguist?
Neuro-linguistic programming shares the same initials in English (Neuro-Linguistic Programming), so it is also abbreviated "NLP".
It was founded in the 1970s by Richard Bandler and John Grinder in California, USA.
Richard Bandler majored in computer science, but he was passionate about the study of human behavior, read extensively in psychology, and often challenged traditional psychological schools. John Grinder was a linguist teaching at the University of California. Both were dissatisfied with traditional psychotherapy, which they felt took too long and did not produce lasting results.
By chance, they set out to study and imitate several masters of human communication and psychotherapy, among them Milton Hyland Erickson, founder of the Ericksonian school of hypnosis; Virginia Satir, a master of family therapy; and Fritz Perls, founder of Gestalt therapy. By analyzing the language patterns and psychological strategies these therapists used in treatment, and adding their own ideas, they ultimately created the theoretical framework of NLP.
They believed that there is a connection between neural processes (neuro), language (linguistic), and behavioral patterns learned through experience (programming), and that these can be modified to achieve specific goals in life. Bandler and Grinder also claimed that neuro-linguistic programming could "model" the skills of outstanding individuals, allowing anyone to acquire them.
Although the origins of neuro-linguistic programming echo ideas from behavioral science and psychology and make it sound serious, its reputation with the public has only worsened over time.
This is because no scientific evidence supports the claims of NLP advocates, and many scientific reviews have pointed out that NLP is based on outdated metaphors about how the brain works, is inconsistent with current neurological theory, and contains numerous factual errors. Many studies claiming to support NLP have significant methodological flaws, which is why it is regarded as a pseudoscience.
Also Known as NLP, Does Natural Language Processing Originate from the Turing Test?
Natural language processing, likewise abbreviated NLP, is the study of how to process and make use of natural language. Simply put, it lets a computer transform input language into meaningful symbols and relationships and then process them according to the task at hand.
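As a minimal illustration of that idea (a toy sketch added here, not part of the original article), the Python snippet below splits a sentence into tokens and maps each token to an integer ID, the kind of "meaningful symbols" a program can then compute with; the sentence and vocabulary are invented for the example.

```python
# Toy sketch: turn a sentence into tokens and integer IDs.
sentence = "Natural language processing turns text into numbers"

tokens = sentence.lower().split()                       # crude whitespace tokenization
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
ids = [vocab[tok] for tok in tokens]                    # the symbols a model actually works on

print(tokens)  # ['natural', 'language', 'processing', 'turns', 'text', 'into', 'numbers']
print(ids)     # one integer per token: [2, 1, 4, 6, 5, 0, 3]
```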
The origin of natural language processing dates back to 1950, when Alan Turing wrote a paper describing a test for "thinking" machines. He stated that if a machine could take part in a conversation over a teleprinter and imitate a human so well that there was no noticeable difference, the machine could be considered capable of thinking.
Shortly thereafter, in 1952, the Hodgkin-Huxley model showed how the brain uses neurons to form an electrical network. These events helped spark ideas about artificial intelligence (AI), natural language processing (NLP), and the development of computers.
Two particularly successful NLP systems developed in the 1960s were SHRDLU, a natural language system that worked with a restricted vocabulary in a restricted domain (the "blocks world"), and ELIZA, written by Joseph Weizenbaum between 1964 and 1966 to simulate "client-centered therapy". ELIZA sometimes produced strikingly human-like interactions, but only within its very limited range of knowledge; outside it, the responses became vague.
Until the 1980s, most natural language processing systems were still based on complex sets of manually defined rules. Starting in the late 1980s, however, machine learning algorithms were introduced into language processing, bringing innovation to NLP and a boom in its development.
How Did Natural Language Processing Rise?
In the late 1980s, NLP underwent a revolution. Some of the earliest machine learning algorithms, decision trees being a good example, produced systems much like the old hand-written rules, but research increasingly focused on statistical models, which can make soft, probabilistic decisions.
In 1997, the LSTM recurrent neural network (RNN) model was introduced, and around 2007 it found its niche in speech and text processing. At that time, neural network models were considered the frontier in understanding and developing NLP for text and speech generation.
In 2001, Yoshua Bengio and his team proposed the first neural "language" model, built on a feedforward neural network. A feedforward network is an artificial neural network whose connections form no cycles: data moves in one direction only, from the input nodes, through any hidden nodes, to the output nodes.
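To make that "one direction only" flow concrete, here is a toy NumPy sketch of a feedforward next-word predictor; the layer sizes, random weights, and function names are invented for illustration, and this is not Bengio et al.'s actual model.

```python
import numpy as np

# Hypothetical toy sizes for a feedforward next-word predictor.
vocab_size, embed_dim, context_len, hidden_dim = 10, 4, 2, 8
rng = np.random.default_rng(0)

E = rng.normal(size=(vocab_size, embed_dim))             # word embedding table
W_h = rng.normal(size=(context_len * embed_dim, hidden_dim))
W_out = rng.normal(size=(hidden_dim, vocab_size))

def next_word_probs(context_ids):
    """One forward pass: input -> hidden -> output, with no cycles."""
    x = E[context_ids].reshape(-1)                        # concatenate context embeddings
    h = np.tanh(x @ W_h)                                  # hidden layer
    logits = h @ W_out                                    # one score per vocabulary word
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                                # softmax: a probability per word

probs = next_word_probs([3, 7])     # probability of each word following context [3, 7]
print(probs.round(3), probs.sum())  # the probabilities sum to 1.0
```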
In 2011, Apple's Siri became one of the first successful NLP/AI assistants used by ordinary consumers. In Siri, the automatic speech recognition module converts the user's speech into digitally interpreted concepts, and the voice command system then matches those concepts against predefined commands to trigger specific actions. For example, if Siri asks, "Would you like to hear your balance?", it understands a "yes" or "no" reply and acts accordingly.
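That two-stage flow (recognized text first, then a match against predefined commands) can be sketched roughly as follows; the intent names and the mapping are hypothetical and far simpler than any real assistant.

```python
# Hypothetical sketch: speech is assumed to be already recognized as text;
# the text is then matched against predefined commands to pick an action.
INTENTS = {
    "yes": "read_balance",
    "no": "cancel",
}

def handle_reply(recognized_text: str) -> str:
    """Map a recognized reply to a predefined action, or ask again."""
    normalized = recognized_text.strip().lower()
    return INTENTS.get(normalized, "ask_again")

print(handle_reply("Yes"))    # -> read_balance
print(handle_reply("maybe"))  # -> ask_again
```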
In recent years, thanks to the availability of big data, powerful computing, and improved algorithms, natural language processing has developed and changed rapidly and is now widely applied across industries:
▪ NLP is widely used in the translation industry. Many localization companies use machine translation to help their translators work more efficiently. When most of a text is pre-translated by machine, translators save valuable time and can translate more words per day.
▪ Search engines use natural language processing to surface relevant results based on similar search behavior and on user intent. With NLP, ordinary users can find what they are looking for.
▪ NLP is also used in email filters. Gmail's email classification is one of the newer NLP applications. Based on the content of incoming mail, Gmail identifies which of three categories (primary, social, or promotions) a message belongs to, which helps users decide which emails are important and need a quick reply and which they may simply want to delete. A minimal sketch of this kind of classifier follows this list.
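To make the last item concrete, here is a hedged sketch of a three-way text classifier, assuming scikit-learn is installed; the tiny training set and the category names echo the Gmail example above but are invented purely for illustration, and this is of course not Gmail's actual system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented miniature training set: message text plus one of three categories.
emails = [
    "Your invoice for this month is attached",
    "Meeting moved to 3 pm, please confirm",
    "Alice tagged you in a photo",
    "New comment on your post",
    "50% off everything this weekend only",
    "Flash sale: free shipping on all orders",
]
labels = ["primary", "primary", "social", "social", "promotions", "promotions"]

# TF-IDF turns each email into a numeric vector; naive Bayes assigns a category.
classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(emails, labels)

print(classifier.predict(["Huge discount, buy one get one free"]))  # likely 'promotions'
```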
References:
https://www.dataversity.net/a-brief-history-of-natural-language-processing-nlp/#
https://zh.wikipedia.org/wiki/%E8%87%AA%E7%84%B6%E8%AF%AD%E8%A8%80%E5%A4%84%E7%90%86
https://en.wikipedia.org/wiki/Neuro-linguistic_programming#cite_note-Thyer-11
https://zh.wikipedia.org/wiki/%E7%A5%9E%E7%B6%93%E8%AA%9E%E8%A8%80%E8%A6%8F%E5%8A%83
https://www.textmetrics.com/what-is-natural-language-processing-nlp/

Natural Language Processing in Action: Understanding, Analyzing, and Generating Text with Python
Authors: Hobson Lane, Cole Howard, Hannes Max Hapke
Translators: Shi Liang, Lu Xiao, Tang Kexin, Wang Bin
Content Summary:
This book introduces practical applications of natural language processing (NLP) and deep learning. NLP has become a core application area of deep learning, and deep learning is an indispensable tool in NLP research and applications. The book is divided into three parts. The first part introduces the basics of NLP, including tokenization, TF-IDF vectorization, and the conversion from frequency vectors to semantic vectors. The second part covers deep learning, including basic models and methods such as neural networks, word vectors, convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, sequence-to-sequence modeling, and attention mechanisms. The third part turns to practice: model building, performance challenges, and coping strategies in real-world systems such as information extraction, question-answering systems, and human-machine dialogue.
This book is aimed at intermediate to advanced Python developers. It combines theoretical foundations with hands-on programming, making it a practical reference for professionals working in modern NLP.
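As a small taste of the part-one pipeline summarized above (tokens, frequency vectors, then semantic vectors), here is a minimal sketch assuming scikit-learn; the toy corpus and the two-dimensional "semantic" space are invented for illustration and are not taken from the book.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Invented three-document corpus.
docs = [
    "natural language processing turns text into numbers",
    "deep learning models learn vectors from text",
    "pizza recipes with extra cheese and basil",
]

tfidf = TfidfVectorizer().fit_transform(docs)                  # sparse frequency-based vectors
semantic = TruncatedSVD(n_components=2).fit_transform(tfidf)   # dense low-dimensional vectors

print(tfidf.shape)     # (3, vocabulary size)
print(semantic.shape)  # (3, 2): each document as a 2-dimensional "semantic" vector
```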
-END-