In the last issue, we introduced AI tools for academic research, and many of you may have already tried one or more of them. Articles about how large AI models boost efficiency at work and in study are everywhere:
“When AI is used well, you can leave work early”
“With AI's help, here comes the office worker's toolkit”
“How can graduate students stand out using AI”
“How can liberal arts students use AI for research”
“How to use AI to prepare for graduate school interviews”
……
However, many people find that when they actually put these tools to use, the results fall short of expectations. One important reason is that the prompts are poorly written.
1. What Are Prompts?
Prompts refer to the input instructions or keywords provided for interaction with artificial intelligence, specifying what tasks the AI (especially generative AI) should perform and what kind of output it should generate.
Prompts serve as a bridge for communication between users and AI, allowing AI to understand and respond or generate creative content accordingly. By carefully designing and optimizing prompts, users can more effectively utilize AI tools to complete various tasks.
2. How to Write Prompts?
Although a large model can seem like an all-knowing scholar, it will understand you no better than a baby if you do not express your intent clearly. That is why writing prompts well matters so much.
A structured prompt framework can help us organize our thoughts when writing prompts. There are many such frameworks, for example ICIO, CRISPE, BROKE, and APE. They generally cover elements such as the role the AI should play (Role), the task it is expected to complete (Task/Objective), the requirements it must follow when completing the task (Request/Requirement), and the relevant background and context information (Context/Background).
Image Source: The Road to AGI
Link:
https://waytoagi.feishu.cn/wiki/Q5mXww4rriujFFkFQOzc8uIsnah?table=tblnDe9PMfZgoiUo&view=vewo2g2ktO
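To make the structure concrete, here is a minimal sketch, in Python, of how the four elements might be assembled into a single prompt string. The function name build_prompt and the example values are our own invention for illustration and do not come from any particular framework.

```python
# A minimal sketch of assembling the four common prompt elements
# (Role, Task, Requirements, Context) into one prompt string.
# The element values below are invented purely for illustration.

def build_prompt(role: str, task: str, requirements: list[str], context: str) -> str:
    """Combine the four framework elements into a single structured prompt."""
    requirement_lines = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(requirements))
    return (
        f"## Role\n{role}\n\n"
        f"## Task\n{task}\n\n"
        f"## Requirements\n{requirement_lines}\n\n"
        f"## Context\n{context}\n"
    )

if __name__ == "__main__":
    prompt = build_prompt(
        role="Senior academic researcher",
        task="Summarize the key points of the attached paper for a graduate student.",
        requirements=[
            "List the methodology and the verified conclusions.",
            "Keep the summary under 300 words.",
            "Do not cite anything outside the paper itself.",
        ],
        context="The reader is new to this research area and needs plain-language explanations.",
    )
    print(prompt)
```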
We have summarized the prompt elements and related content covered by these frameworks in the figure below:

Case Demonstration
We can selectively apply the four types of prompt elements according to the needs of the specific application scenario. For example, suppose we want a large model to help us read an academic paper and extract its main points.
Prompt Design:
## Role: Senior Academic Researcher
## Skills
– Proficient in reading and understanding the structure and content of academic papers.
– Ability to summarize and clarify the main ideas, key thoughts, and unresolved issues of the paper.
– Ability to analyze the details of the paper meticulously.
## Task
– Deeply understand the main ideas, key thoughts, and unresolved issues of the paper.
– Extract the most important key information for your readers.
– Output a summary of the reading.
## Requirements (Steps)
1. List the explicit methodologies presented in the paper.
2. List the verified conclusions of the paper.
3. List the key information based on the “80/20 Principle”. The “80/20 Principle” means that 20% of the content is key information that can help me understand the remaining 80%. Please organize this key information into ordered text, including but not limited to: the main idea of the paper, what problem the paper effectively addresses, and what unresolved issues the paper presents, etc. //Supplement background knowledge
4. Focus on listing the optimization, solutions, and improvements mentioned in the paper, for example, “improved performance by up to 10%” etc. //Provide examples
## Requirements (Limitations)
1. Make interpretations and summaries based on your academic rigor; I do not wish to see hallucinations.
2. The summary text must be formatted for readability, with each sentence using hierarchical headings, numbering, indentation, dividers, and line breaks to significantly optimize the presentation of information, presenting information with keywords + professional descriptions.
3. Do not cite any content outside of this article for summaries.
Please refer to the link or attachment for the paper you need to read.
Case Source: Best Practices for Prompts – Academic Reading (Reading Papers), author: Xiao Qijie, version 1.6, slightly modified.
Link:
https://waytoagi.feishu.cn/wiki/JTjPweIUWiXjppkKGBwcu6QsnGd
Talk is cheap, so let's put it into practice. Below, we take the paper Why Johnny Can’t Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts as an example and apply the prompt above, to see whether the model can summarize and extract the article's content as required.
Here we demonstrate with the Kimi Chat intelligent assistant, which supports a context of up to 200,000 Chinese characters. After the prompt and the article PDF are uploaded, Kimi Chat produces output following the required structure, and the results are largely in line with expectations.


Other large model platforms, such as Tongyi Qianwen, Wenxin Yiyan, and iFlytek Spark, share similar prompt techniques, although there are slight differences in specific functionalities. For example, Tongyi Qianwen and Wenxin Yiyan support extracting content from uploaded images, documents, and external links, while iFlytek Spark currently only supports image formats.
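Beyond the chat interfaces, many platforms also expose APIs, and the same structured prompt can be sent programmatically. The sketch below is only an illustration under assumptions: it assumes an OpenAI-compatible chat-completions endpoint; the API key, base URL, model name, and paper.txt file are placeholders; and, unlike the chat demo above, it assumes the paper has already been extracted to plain text. The prompt shown is a condensed version of the case prompt, not the exact workflow behind the screenshots.

```python
# A minimal sketch of sending the structured reading prompt through an API,
# assuming an OpenAI-compatible chat-completions endpoint. The API key,
# base URL, model name, and paper.txt are placeholders for illustration.
from openai import OpenAI  # pip install openai

READING_PROMPT = """\
## Role: Senior Academic Researcher
## Task
- Deeply understand the main ideas, key arguments, and open problems of the paper.
- Extract the most important key information and output a reading summary.
## Requirements
1. List the methodologies presented in the paper.
2. List the verified conclusions of the paper.
3. List the key information following the "80/20 principle".
4. Do not cite any content outside of the paper itself.
"""

client = OpenAI(
    api_key="YOUR_API_KEY",                 # placeholder
    base_url="https://api.example.com/v1",  # placeholder: your platform's endpoint
)

# Assumes the paper has already been converted to plain text.
with open("paper.txt", encoding="utf-8") as f:
    paper_text = f.read()

response = client.chat.completions.create(
    model="your-model-name",  # placeholder
    messages=[
        {"role": "system", "content": READING_PROMPT},
        # Triple quotes mark off the paper text (see the delimiter tip below).
        {"role": "user", "content": f'Paper text:\n"""\n{paper_text}\n"""'},
    ],
)
print(response.choices[0].message.content)
```

One design choice worth noting: the fixed instructions go in the system message, while the paper text goes in the user message wrapped in triple quotes, which anticipates the delimiter tip discussed in the next section.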
3. Tips for Writing Prompts
What should we pay attention to when writing prompts? Drawing on the free course ChatGPT Prompt Engineering for Developers, created by Andrew Ng in collaboration with OpenAI, as well as OpenAI's official prompting guide and the prompt tutorials from Microsoft and Anthropic's Claude, we have summarized the key points to keep in mind when writing prompts, such as being clear, specific, and unambiguous.
A prompt should include all the important details and background information, leaving as little as possible for the large model to interpret on its own; otherwise the model has to guess your intent, which easily leads to irrelevant or off-target answers. Beyond describing the requirements thoroughly, a few tips can help us articulate our needs clearly.
- Use Separators: Delimiters such as triple quotes, XML tags, and section headings help mark off different parts of the input, so the large model can tell them apart and process each part accordingly.
- Provide Examples: In some cases, showing concrete examples is more intuitive than describing what you want. For instance, if you want the model to answer in a particular style (say, by analogy), give it one or a few (2-3) example answers in that style and ask it to follow them for subsequent questions; such one-shot or few-shot examples keep the model from having to guess how to respond.
- Specify the Steps to Complete the Task: For more complex tasks, it is best to break them down into a series of explicit steps: "first do this, then do that, then do the next...". Clear steps help the model execute the instructions more reliably. For example: "Perform the following two steps on this China Daily article about the popularity of Hanfu in Hong Kong. Step 1: Summarize the article. Step 2: Then translate it into Chinese."
- Provide Reference Text: Instruct the model to answer based only on the supplied reference text; this reduces its tendency to hallucinate answers.
- Break Complex Tasks into Simple Subtasks: Complex tasks make the large model more error-prone, so turn them into several simple subtasks that it can handle one at a time.

For example, because a large model has a limited context length, it cannot see text beyond a certain length. For conversations that involve a lot of background text, such as a book or several articles, it is advisable to split the long text into smaller parts and summarize them in stages; a minimal sketch of this staged summarization follows below. Likewise, in long multi-turn dialogues it may be necessary to summarize the earlier rounds so that the model does not "forget" earlier content as the conversation grows.
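As a concrete illustration of the last tip (and of the triple-quote delimiters mentioned earlier), here is a minimal sketch of splitting a long text into chunks and summarizing it in stages. It again assumes an OpenAI-compatible endpoint; the API key, base URL, model name, and chunk size are placeholders, and the character-count chunking is deliberately naive.

```python
# A minimal sketch of the "split long text, summarize in stages" idea,
# assuming an OpenAI-compatible endpoint (all names below are placeholders).
# Each chunk is wrapped in triple quotes, applying the delimiter tip as well.
from openai import OpenAI  # pip install openai

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.example.com/v1")
MODEL = "your-model-name"  # placeholder


def ask(prompt: str) -> str:
    """Send a single-turn request and return the model's reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def summarize_long_text(text: str, chunk_size: int = 3000) -> str:
    """Summarize each chunk separately, then summarize the partial summaries."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    partial_summaries = [
        ask(f'Summarize the following text in 3-5 sentences:\n"""\n{chunk}\n"""')
        for chunk in chunks
    ]
    combined = "\n\n".join(partial_summaries)
    return ask(
        "The following are partial summaries of one long document. "
        f'Merge them into a single coherent summary:\n"""\n{combined}\n"""'
    )
```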
Writing prompts is a skill that requires repeated experimentation and continuous practice; it is rare to achieve satisfactory answers with just one prompt. However, once you master the basic methods and techniques, you too can become proficient and find the key to unlocking AI!
Do not expect AI to do all your work; treat AI as a tool for efficiency, using it with the right mindset. In fact, the key to writing good prompts is closely related to your knowledge base. If you do not have a solid grasp of domain knowledge, it will be challenging to write clear and specific prompts.
We must also evaluate AI-generated content carefully. Unclear prompts can trigger AI hallucinations, producing inaccurate or outright wrong information. So, besides writing clear and specific prompts, we should make a point of verifying AI-generated content against multiple sources. When writing prompts, we should also steer clear of ethical risks (such as privacy and bias) and sensitive topics. In future issues we will continue to share what to watch out for when using generative AI in academic settings, such as academic integrity and related policies.
Related Reading:
Are You a Human? An E-Person? You Might as Well Understand AI Literacy!
A Comprehensive Collection of AI Tools for Academic Research, Come Pick Your Favorite!

Written by | Mao Yun
Reviewed by | Han Li, Li Shuning
Planned by | Information Services Department