Introduction
What positive impacts will new technologies bring to industries?
How will they be implemented smoothly in various scenarios?

Recently, at the “2024 Global Business Innovation Conference” hosted by UFIDA, Zhang Bo, an academician of the Chinese Academy of Sciences and honorary director of the Tsinghua University Institute of Artificial Intelligence, delivered a speech titled “Industry in the Era of Generative Artificial Intelligence”.
Academician Zhang laid out the academic community’s current thinking on large models along the dimensions of capabilities, applications, architectures, and trends, traced the evolution of these models, and examined the technology’s prospects, challenges, and practical applications across different fields.
01
Three Major Capabilities of Large Language Models
And One Major Flaw Not to Be Ignored
The three core capabilities of generative artificial intelligence are:
First, powerful language generation capability: the ability to produce diverse, semantically coherent, human-like text in open domains. This is the essential advantage that distinguishes large language models from earlier forms of computer-generated language;
Second, strong natural language dialogue capability, which enables natural human-computer dialogue in open domains;
Third, strong transfer capability, which allows a model trained on a proxy task to adapt to downstream tasks with only a small amount of data and fine-tuning, thus highlighting the large model’s ability to generalize.
The flaw that cannot be ignored, by contrast, is that the quality of the generated content cannot be fully controlled: large models can produce factual errors and hallucinations, as later examples in this article show.
“Because of the above advantages and flaws, industries must pay special attention to these factors when implementing large model applications.”
—— Academician Zhang Bo
02
Three Directions for Implementing Foundational Models
Large model applications that touch an enterprise’s critical business, such as autonomous driving, or customized production and quality control in manufacturing, are harder to implement than peripheral scenarios, because these core businesses have little tolerance for error and demand higher reliability and accuracy.
So, how can the application scenarios of large models be implemented in core business areas? Where are the opportunities for technology providers and the industry?
Academician Zhang proposed three directions for implementing general foundational models.
First, vertical large models aimed at various industries;
Second, building industry applications based on large models;
Third, combining large models with other technologies and tools to create industry applications.
03
Six Architectural Patterns of Large Models
With the directions for implementation identified, how to make large models truly land while ensuring their safety, trustworthiness, and controllability is a common concern for industries and enterprises. To this end, Academician Zhang proposed six architectural patterns based on large models.
First, Prompt Engineering
Prompt engineering is a crucial intermediate step added in many large model applications: carefully designed prompts improve the model’s understanding of the task and the quality of its responses, producing more satisfactory results.
For example, when asked which of 9.11 and 9.9 is larger, a model may give the wrong answer; but when the prompt points out that the two values are decimals to be compared digit by digit after the decimal point, the model returns the correct result. Prompt engineering is therefore a key factor affecting the generated results: the quality of the prompt directly determines the accuracy and quality of the output. In practical applications, optimizing prompt content is an important means of improving the effectiveness of generative AI applications.
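To make the idea concrete, here is a minimal sketch in Python. The llm_generate function is a hypothetical stand-in for whatever model API is actually deployed; the point is only the contrast between a bare question and an engineered prompt that spells out the comparison.

```python
# A minimal sketch of prompt engineering; llm_generate() is a hypothetical
# stand-in for a real large-model API.

def llm_generate(prompt: str) -> str:
    """Placeholder for a real model call; echoes the prompt for demonstration."""
    return f"[model response to: {prompt!r}]"

# A bare question, which some models answer incorrectly.
plain_prompt = "Which number is larger, 9.11 or 9.9?"

# An engineered prompt that makes the decimal comparison explicit and asks
# the model to reason step by step before giving its final answer.
engineered_prompt = (
    "9.11 and 9.9 are decimal numbers. Compare them digit by digit after the "
    "decimal point, explain your reasoning step by step, and then state which "
    "number is larger."
)

print(llm_generate(plain_prompt))
print(llm_generate(engineered_prompt))
```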
Second, Retrieval-Augmented Generation (RAG)
For factual questions, generative artificial intelligence needs to be combined with retrieval to make the generated content more dependable: a query triggers a search of an external knowledge base, and the retrieved material helps the large model produce more accurate, detailed, and targeted answers.
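A minimal illustration of this retrieval-augmented pattern follows. The toy bag-of-words embedding stands in for a real sentence-embedding model, and the in-memory list stands in for a vector database; both are assumptions made only so the sketch is self-contained.

```python
# A minimal RAG sketch: retrieve the most relevant passage from an external
# knowledge base and prepend it to the prompt.
import numpy as np

VOCAB = ["revenue", "2023", "policy", "warranty", "refund", "quarter"]

def embed(text: str) -> np.ndarray:
    """Toy bag-of-words embedding; a real system would use a sentence encoder."""
    tokens = text.lower().split()
    return np.array([tokens.count(w) for w in VOCAB], dtype=float)

knowledge_base = [
    "2023 fourth quarter revenue grew by 12 percent.",
    "The warranty policy covers refund requests within 30 days.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by cosine similarity to the query and return the top k."""
    q = embed(query)
    scores = []
    for doc in knowledge_base:
        d = embed(doc)
        denom = (np.linalg.norm(q) * np.linalg.norm(d)) or 1.0
        scores.append(float(q @ d) / denom)
    ranked = sorted(zip(scores, knowledge_base), reverse=True)
    return [doc for _, doc in ranked[:k]]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"

print(build_prompt("What is the refund policy under the warranty?"))
```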
Third, Fine-tuning
After incorporating domain knowledge and private data, fine-tuning in specific domains can significantly improve the output quality of generative artificial intelligence, making it more aligned with the needs of specific fields. For example, after training with medical expertise, a large model can pass the medical practitioner qualification exam with an accuracy rate of over 90%. Moreover, during the diagnostic reasoning process, the large model also provides reasonable explanations for the results.
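As a rough sketch of what such fine-tuning looks like in code, the loop below follows the usual supervised next-token objective. TinyLM and the random token data are stand-ins so the example runs on its own; in a real setting the starting point would be a pretrained large model and curated domain text such as medical question-answer pairs.

```python
# A minimal fine-tuning sketch: continue training a (stand-in) language model
# on domain token sequences with a next-token prediction loss.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab_size: int = 100, dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.head(self.embed(tokens))

# Toy "domain data": each row is a token sequence standing in for tokenized
# domain text; the model learns to predict the next token.
domain_tokens = torch.randint(0, 100, (16, 12))
inputs, targets = domain_tokens[:, :-1], domain_tokens[:, 1:]

model = TinyLM()                       # in practice: load pretrained weights here
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                 # a few passes over the domain corpus
    logits = model(inputs)             # shape: (batch, seq, vocab)
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```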
Fourth, Knowledge Graph and Vector Database
Using knowledge graphs in conjunction with vector databases helps generative artificial intelligence better understand and process the semantic information in text, addressing problems such as missing factual knowledge, hallucinations, and poor interpretability. When deploying large models inside an enterprise, building a vector database and coordinating it with a document database can improve the accuracy of generated results.
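One simplified way to picture the combination: a vector index locates the entity a query is most likely about, and the knowledge graph supplies verified facts about that entity as grounded context for the model. The entities, triples, and random vectors below are purely illustrative.

```python
# A minimal sketch of pairing a knowledge graph with a vector index: similarity
# search finds the relevant entity, and graph triples provide verified facts.
import numpy as np

# Toy knowledge graph: (subject, relation, object) triples.
triples = [
    ("UFIDA", "hosted", "2024 Global Business Innovation Conference"),
    ("Zhang Bo", "affiliated_with", "Tsinghua University"),
]

# Toy vector index: entity name -> embedding (random here; a real system would
# use a learned embedding model and a vector database).
rng = np.random.default_rng(0)
entity_vectors = {"UFIDA": rng.normal(size=8), "Zhang Bo": rng.normal(size=8)}

def nearest_entity(query_vec: np.ndarray) -> str:
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(entity_vectors, key=lambda e: cos(entity_vectors[e], query_vec))

def facts_about(entity: str) -> list[str]:
    return [f"{s} {r} {o}" for s, r, o in triples if s == entity]

# Pretend query embedding: close to the "Zhang Bo" entity vector.
query_vec = entity_vectors["Zhang Bo"] + 0.01 * rng.normal(size=8)
entity = nearest_entity(query_vec)
print("Grounding facts passed to the model:", facts_about(entity))
```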
Fifth, Internal Monitoring and Control
With humans in the loop, large model systems can detect data deviations and drift and handle anomalies. In addition, introducing agent-based reinforcement learning allows the model to reflect on its own behavior, integrating perception, action, and learning and thereby reducing errors.
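As a simplified illustration of the monitoring side, the sketch below flags input drift by comparing statistics of recent request embeddings against a reference window recorded at deployment time. The features and threshold are illustrative assumptions, not a production design.

```python
# A simplified drift check: compare the mean of recent embeddings against a
# reference window, normalized by the reference spread, and flag large shifts.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.normal(loc=0.0, scale=1.0, size=(1000, 16))  # embeddings at deployment
live = rng.normal(loc=0.6, scale=1.0, size=(200, 16))         # recent traffic that drifted

def drift_score(ref: np.ndarray, cur: np.ndarray) -> float:
    """Average per-dimension mean shift, normalized by the reference std."""
    shift = np.abs(cur.mean(axis=0) - ref.mean(axis=0)) / (ref.std(axis=0) + 1e-8)
    return float(shift.mean())

score = drift_score(reference, live)
if score > 0.3:  # threshold chosen for illustration only
    print(f"drift detected (score={score:.2f}): route to human review / fallback")
else:
    print(f"inputs look consistent (score={score:.2f})")
```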
Sixth, Safety and Governance
With the development of large models, safety, misuse, and abuse have become common issues, involving political standards, ethics, and moral concerns. Only by establishing multi-layered safety guarantees and promoting the implementation of governance systems can we ensure the healthy and sustainable development of large models. Currently, this is an urgent issue.
04
Persisting in the Independent Development Path of Large Models
Promoting Application Innovation and Industrialization Process
With the rapid development of generative artificial intelligence, the industry has also raised doubts about its future prospects. In response to this widespread concern, Academician Zhang explained that generative AI is a major technological breakthrough in human history, built on decades of work that solved three key technical problems in artificial intelligence: text semantic vector representation, generative pre-trained transformers, and self-supervised learning.
Among these, the most critical innovation is text semantic vector representation, which enables the leap from processing the form of information to processing its content.
“The real significance of this technology is that it transforms language issues into mathematical problems. Originally, texts only represented symbols that exist in discrete spaces, which are difficult to analyze using mathematical tools. Now, language is translated into vectors, allowing computers to parse semantics based on vectors and process the content of information, thus helping humanity truly enter the era of artificial intelligence!”
—— Academician Zhang Bo
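A small worked example of this point: once words are mapped to vectors, comparing meanings becomes ordinary arithmetic such as cosine similarity. The three-dimensional vectors below are hand-picked for illustration; real models learn vectors with hundreds or thousands of dimensions.

```python
# Illustration of "language turned into mathematics": semantically similar
# words get vectors that point in similar directions.
import numpy as np

vectors = {
    "doctor":    np.array([0.90, 0.80, 0.10]),
    "physician": np.array([0.85, 0.82, 0.12]),
    "banana":    np.array([0.05, 0.10, 0.95]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["doctor"], vectors["physician"]))  # close to 1: similar meaning
print(cosine(vectors["doctor"], vectors["banana"]))     # much smaller: unrelated meaning
```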
Based on a deep understanding of the principles of large models, Academician Zhang is fully confident in the development of third-generation artificial intelligence technology. Currently, the key issue still lies in how to implement it.
Academician Zhang believes the development of third-generation artificial intelligence should focus on three things: first, constructing interpretable and robust AI theories and methods to dispel public anxiety; second, developing safe, controllable, trustworthy, reliable, and scalable technologies to drive the prosperous development of the AI industry; third, promoting innovative applications and the industrialization of AI. This means that AI research and development is not only an academic pursuit: it must be closely integrated with industrial demand, turning technological innovation into real applications that bring economic benefit and social progress.
At the same time, he also proposed the concept of “Knowledge-Driven + Data-Driven”, which integrates knowledge, data, algorithms, and computing power to ensure that AI technology not only possesses strong intelligent capabilities but can also play a stable and long-lasting role in diverse application scenarios.
Academician Zhang emphasized that adhering to the path of independent development in China requires recognizing the core role of knowledge-driven and data-driven approaches in third-generation artificial intelligence, fully combining and utilizing elements such as knowledge, data, algorithms, and computing power to drive the prosperous development of China’s artificial intelligence industry.
With artificial intelligence technology advancing rapidly, large models are demonstrating enormous potential across industries. At the same time, on this challenging journey, only by continuously strengthening the safety, reliability, and controllability of large models can their widespread application truly be realized.
In the future, we should focus not only on breakthroughs in the technology itself but also on how to integrate it deeply with industrial realities. Only in this way can each enterprise, through continued exploration, create the key variables for its future development, allowing large models to create more value and opportunities for human society and embrace the full arrival of the intelligent era.

