
Editor’s Note
Large language models (LLMs) still struggle with domain-specific and knowledge-intensive tasks: they hallucinate, rely on outdated knowledge, and reason in opaque, untraceable ways. Retrieval-Augmented Generation (RAG) has emerged to address these issues. RAG retrieves relevant snippets from external knowledge bases and combines them with the user query to form a rich context, guiding the LLM to generate more accurate and well-grounded answers. This not only strengthens performance on knowledge-intensive tasks but also allows the knowledge base to be continuously updated and extended with domain-specific information, so that LLMs can better meet real-world application needs. Retrieval augmentation of large models has become a hot topic in both academia and industry. This issue gathers relevant reports, videos, and journal articles from the CCF Digital Library and other platforms, covering the paradigms and frameworks of retrieval augmentation for large models, the design of retrieval models in retrieval-augmented systems, open-source tools for retrieval augmentation, and its applications, making it highly instructive.
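The retrieve-then-generate flow described above can be sketched as follows. This is a minimal illustration, not any specific framework's API: the toy corpus, the keyword-overlap `retrieve` function, and `build_prompt` are all hypothetical stand-ins (a real system would use a vector index and an actual LLM call).

```python
# Minimal RAG sketch: retrieve snippets from an in-memory "knowledge
# base", then combine them with the user query into a prompt. All names
# here are illustrative assumptions, not a real library's interface.

KNOWLEDGE_BASE = [
    "RAG combines retrieval with generation.",
    "External knowledge bases can be updated continuously.",
    "Domain-specific documents improve answer accuracy.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, snippets: list[str]) -> str:
    """Assemble retrieved snippets and the user query into one context."""
    context = "\n".join(f"- {s}" for s in snippets)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

query = "How can knowledge bases be updated?"
snippets = retrieve(query, KNOWLEDGE_BASE)
prompt = build_prompt(query, snippets)
```

The resulting `prompt` would then be sent to the LLM, which answers conditioned on the retrieved context rather than on its parametric memory alone.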
Chief Editor of This Issue:
Dong Zhicheng, Secretary-General of the CCF Big Data Expert Committee; Professor and Vice Dean of the Gaoling School of Artificial Intelligence, Renmin University of China
Editorial Board of This Issue:
Wang Haofen, Secretary-General of the CCF Natural Language Processing Special Committee; Distinguished Researcher at the College of Design and Innovation, Tongji University
Table of Contents