Author Introduction: Fu Jie, postdoctoral fellow and assistant researcher at the Institute of Education, Tsinghua University, and the UNESCO International Centre for Engineering Education.
Funding Project: Chinese Academy of Engineering consulting research project “Research on the System Construction and Strategic Countermeasures for the Independent Training of Young Engineering and Technology Talents” (2023-HY-09).
Abstract: With the rapid development of large language models (LLM) and generative artificial intelligence technologies, the emerging profession of prompt engineers has come into being. Prompt engineers focus on designing and optimizing prompts to guide large language models, enhancing the performance and applicability of these models. This article first outlines the basic concepts related to prompt engineering, such as natural language processing, large language models, generative artificial intelligence, and prompts. It then elaborates on the workflow of prompt engineers, including problem definition, requirement analysis, prompt design, and optimization. Furthermore, it summarizes the core competencies of prompt engineers, encompassing both hard and soft skills, such as in-depth understanding of natural language processing, mastery of prompting techniques, data analysis capabilities, programming skills, creative thinking, critical thinking, and lifelong learning ability. It also introduces several common work scenarios and application examples for prompt engineers. Additionally, it discusses the growth paths for prompt engineers. Finally, it addresses how higher education should support the training of prompt engineers, including establishing adaptive curriculum systems oriented towards artificial intelligence, innovating the “AI+” education model, strengthening interdisciplinary and practice-oriented teaching, deepening school-enterprise cooperation in production, education, and research, providing lifelong learning resources, and enhancing career guidance.
2. Key Concepts
1. Natural Language Processing, Large Language Models, and Generative Artificial Intelligence
Natural language processing (NLP) is a significant subfield of computer science aimed at enabling computers to understand, interpret, and generate natural language text. The research scope of NLP includes natural language understanding, natural language generation, and machine translation. NLP originated in the mid-20th century, with early studies focused on rule-based and knowledge-based methods for addressing specific issues in text and language. However, these methods often required a large number of manually designed rules and grammars, limiting their applicability to large-scale text datasets. Recently, with the rise of deep learning and large language models, significant progress has been made in the field of NLP. Deep learning technologies enable NLP systems to learn language patterns and features from large-scale text data, significantly improving system performance and generalization capabilities. The emergence of large language models has particularly allowed NLP systems to generate more natural and fluent text. NLP technologies are widely applied in various fields, including machine translation, text summarization, question-answering systems, speech recognition, and semantic analysis. The development of NLP technology has profoundly changed the way humans interact with machines and provided an important foundation for the application of artificial intelligence.
A language model (LM) is an application of NLP: a machine learning model that, after training, represents a probability distribution over sequences of linguistic units (letters, words, sentences, paragraphs, or documents) in a given domain. LMs capture the statistical properties of the sequences in their training corpus and are used to make probabilistic predictions about new sequences. The real breakthrough, however, came with large language models (LLMs): ultra-large deep learning models pre-trained on massive amounts of data. LLMs use transformer architectures, consisting of encoders and decoders with self-attention, which extract meaning from a sequence of text and capture the relationships among the words and phrases within it. [16] These models are products of deep learning, with structures comprising many neural network layers and tens to hundreds of billions of parameters. In 2020, OpenAI released GPT-3, with up to 175 billion parameters, widely recognized as the beginning of the “big model era.” Large language models can learn the complex structures and grammatical rules of natural language as well as vast amounts of semantic information. At its most basic level, a large language model works like a highly sophisticated autocomplete application: given an input text, it outputs text that statistically follows the patterns learned from its training data. Well-known large language models currently include the GPT series, Claude, Gemini, LLaMA, Mistral Large, ChatGLM, Tongyi Qianwen, and Wenxin Yiyan, among others.
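As a purely illustrative aside (not drawn from the cited sources), the toy sketch below estimates a bigram language model from a three-sentence corpus and scores a next-word probability; the corpus and function are hypothetical, but the underlying idea, a probability distribution over word sequences, is exactly what larger neural language models estimate at scale.

from collections import Counter, defaultdict

# Toy corpus (hypothetical), used only to illustrate a language model as a
# probability distribution over word sequences.
corpus = [
    "the model generates text",
    "the model predicts the next word",
    "the engineer writes a prompt",
]

# Count unigram and bigram frequencies.
unigrams, bigrams = Counter(), defaultdict(Counter)
for sentence in corpus:
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    unigrams.update(tokens[:-1])
    for prev, word in zip(tokens, tokens[1:]):
        bigrams[prev][word] += 1

def next_word_prob(prev: str, word: str) -> float:
    """P(word | prev) estimated from bigram counts (0.0 if prev is unseen)."""
    return bigrams[prev][word] / unigrams[prev] if unigrams[prev] else 0.0

# A sequence's probability is the product of such next-word probabilities.
print(next_word_prob("the", "model"))  # 0.5 on this toy corpus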
The powerful functionality of large language models now extends beyond text understanding and generation to multimodal capabilities. For example, Claude’s primary capabilities include content generation, image interpretation, summarization, information classification, language translation, sentiment analysis, code explanation and generation, question answering, creative writing, and text interpretation, among others. [17] Other models, such as Alibaba’s Tongyi Qianwen and Google’s Gemini, offer similar functionality. [18] The GPT series, especially GPT-4 released by OpenAI on March 14, 2023, is the most noteworthy family of large language models. GPT-4 is a large multimodal model capable of accepting image and text inputs and producing text outputs. On February 15, 2024, OpenAI unveiled the Sora model, which generates videos from text and is widely expected to disrupt multiple industries. [20]

Generative artificial intelligence (GenAI) is a type of artificial intelligence capable of generating text, images, or other data using generative models, typically in response to prompts. [26] Generative AI models learn the patterns and structures of their input training data and then generate new data with similar characteristics. [27] The development of large language models has significantly propelled the transformation of generative artificial intelligence.
A prompt is natural language text that describes the task an artificial intelligence should perform [32] and is submitted to the language model to elicit a response. A prompt can include one or more types of content: input (required), context (optional), and examples (optional). The input is the text within the prompt for which a response is desired from the model, and it is the only required content type. It can be a question for the model to answer, a task for the model to perform, an entity for the model to operate on, or a partial input for the model to complete or continue. Upon receiving a prompt, the model may generate text, code, images, video, or audio, depending on the type of model used.
Types of prompts. Based on how much contextual information a prompt includes, prompts can be roughly divided into three types: ① Zero-shot prompts; ② One-shot prompts; ③ Few-shot prompts. [33] For example, Figure 1 shows a few-shot prompt shared by a user on GitHub, which instructs ChatGPT to act as a prompt generator and supplies a few examples so that the model can help generate the prompts the user needs. [34]
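To make the three prompt types concrete, the snippet below sketches minimal zero-shot, one-shot, and few-shot prompts for a sentiment-classification task; the task and wording are hypothetical illustrations, not the content of Figure 1.

# Illustrative (hypothetical) prompt strings for a sentiment-classification task.

# Zero-shot prompt: task instruction only, no examples.
zero_shot = (
    "Classify the sentiment of the review as positive or negative.\n"
    "Review: The battery died after two days.\nSentiment:"
)

# One-shot prompt: the same instruction plus one worked example.
one_shot = (
    "Classify the sentiment of each review as positive or negative.\n"
    "Review: I love how light this laptop is.\nSentiment: positive\n"
    "Review: The battery died after two days.\nSentiment:"
)

# Few-shot prompt: several examples so the model can infer the pattern in context.
few_shot = (
    "Classify the sentiment of each review as positive or negative.\n"
    "Review: I love how light this laptop is.\nSentiment: positive\n"
    "Review: The screen cracked within a week.\nSentiment: negative\n"
    "Review: The battery died after two days.\nSentiment:"
)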

Prompt engineering, also known as prompt design, refers to the process of creating prompts to elicit the desired responses from language models. [35] Amazon defines prompt engineering as the process of guiding generative AI solutions to generate the desired output. [36] McKinsey defines prompt engineering as the practice of designing inputs for generative AI tools to produce optimal outputs. [37] Some scholars point out that prompt engineering is the process of constructing text that can be interpreted and understood by generative AI models. [38] Other scholars believe it is the practice of designing, refining, and implementing prompts or instructions that guide large language model outputs to assist in various tasks. [39] Some scholars argue that prompt engineering is the process of creating a prompt function prompt(x) aimed at improving the performance of downstream tasks. Prompt engineering focuses on how to effectively design or select prompts so that pre-trained language models can perform optimally on specific tasks. [40] Additionally, some scholars succinctly state that a prompt engineer is essentially a translator for AI. [41] Overall, prompt engineering is an empirical science involving the iterative testing of prompts to optimize performance. In the prompt engineering cycle, most of the work is not actually writing prompts. Instead, most of the time spent on prompt engineering is developing a robust evaluation method, then testing and iterating based on those evaluations. [42] Prompt engineering is a challenging yet crucial task for optimizing the performance of large language models.[43]
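One hedged way to read the prompt(x) formulation is as a template function that wraps a raw input x into a complete prompt; the template below is a hypothetical example written for illustration, not the formulation used in the cited work.

def prompt(x: str) -> str:
    """Hypothetical prompt function: wraps a raw input x into a task-specific prompt."""
    return (
        "You are a careful technical summarizer.\n"
        f"Summarize the following passage in one sentence:\n\n{x}\n\nSummary:"
    )

# Prompt engineering then amounts to comparing alternative prompt functions
# on the same downstream task and keeping the one that performs best.
print(prompt("Large language models are pre-trained on massive text corpora."))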
3. Workflow, Core Competencies, and Work Scenarios of Prompt Engineers
1. Workflow and Content of Prompt Engineers
The work content and workflow of prompt engineers are detailed and complex, covering the entire process from requirement analysis to model deployment and monitoring. Anthropic, the company that developed Claude, describes the lifecycle of prompt engineering (Figure 2) as follows: develop test cases, design a preliminary prompt, test the prompt against the test cases, evaluate and refine the prompt, and release the polished prompt. [46]

Figure 2 Lifecycle of Prompt Engineering
The workflow of prompt engineers is based on the lifecycle of prompt engineering, but there is preliminary work to be done before developing test cases. Generally, the complete workflow of a prompt engineer is as follows:
(1) Task definition and establishment of success criteria. First, it is necessary to specify clearly which tasks the large language model is required to perform; these may range from entity extraction, question answering, and text summarization to more advanced tasks such as code generation or creative writing. Once the tasks are clear, measurable success criteria should be established to support later evaluation and optimization.
(2) Problem definition and requirement analysis. The next stage is problem definition and requirement analysis, aiming to accurately grasp user needs and clearly define the problems. This includes in-depth communication with stakeholders to clarify the objectives, boundaries, and expected outcomes of the tasks, ensuring a clear task direction and requirements. Additionally, key success criteria determined from the outset, such as performance, accuracy, response time, and cost, are the basis for making reasonable decisions and optimizing goals.
(3) Development of test cases. In the development phase of test cases, a diverse set of test cases covering expected application scenarios should be created based on the defined tasks and success criteria, aiming to ensure the robustness of the prompts. These test cases should include typical and edge cases.
(4) Design of preliminary prompts. In this phase, the preliminary prompt should outline the task definition, the characteristics of a good response, and any context the LLM needs, and may include a few typical input-output examples as a starting point for refinement.
(5) Evaluation of prompts. Using the designed prompts, test cases are input into the LLM, carefully evaluating the model’s responses against expected outputs and success criteria, employing consistent scoring standards to systematically assess performance.
(6) Iterative optimization. Based on the test results, prompts are iteratively optimized to improve performance on test cases and better meet success criteria. This may involve adding clarifications, examples, or constraints to guide the LLM’s behavior while being cautious of over-optimization leading to overfitting.
(7) Deployment and monitoring. Finally, once the prompts perform well on test cases and meet success criteria, they can be deployed in real applications and monitored for performance during actual operation, preparing for further optimization as needed. [47]
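To make the evaluate-and-iterate steps above concrete, here is a minimal, model-agnostic sketch of a prompt-evaluation loop; call_model is a placeholder for whichever LLM API is actually used, and the test cases and scoring rule are hypothetical simplifications.

# Minimal sketch of steps (5)-(6): evaluate a candidate prompt against test cases.
# call_model is a hypothetical placeholder for a real LLM API call.

def call_model(prompt: str) -> str:
    raise NotImplementedError("Replace with a call to the LLM of your choice.")

test_cases = [
    {"input": "Invoice #4821, total due $312.50", "expected": "312.50"},
    {"input": "Amount payable: 89 USD", "expected": "89"},
]

def build_prompt(text: str) -> str:
    # Candidate prompt under evaluation; refine this between iterations.
    return (
        "Extract only the numeric amount due from the text below.\n"
        f"Text: {text}\nAmount:"
    )

def evaluate(prompt_fn, cases) -> float:
    """Score a prompt as the fraction of cases whose output contains the expected answer."""
    passed = sum(1 for c in cases if c["expected"] in call_model(prompt_fn(c["input"])))
    return passed / len(cases)

# Iterate: adjust build_prompt, rerun evaluate(build_prompt, test_cases), and keep
# the version that best meets the success criteria before deployment.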
2. Core Competencies of Prompt Engineers
By synthesizing a range of sources, this study summarizes the composite skill set of prompt engineers and finds that their core competencies center on technical and professional domain knowledge, communication and collaboration abilities, and innovative thinking. Broadly, these competencies can be divided into hard skills and soft skills.
Hard Skills: ① Understanding of natural language processing and AI models. Prompt engineers need to understand natural language processing technologies and algorithms, mastering professional knowledge of large language models, deep learning, and neural networks. They should be familiar with the capability boundaries of different AI tools to guide these systems in producing relevant, accurate, and contextually appropriate feedback. ② Mastery of prompting techniques. They should be proficient in using various techniques to design innovative and effective prompts to address emerging issues, such as applying zero-shot prompting to challenge models on tasks for which they have not been explicitly trained or enhancing the model’s contextual learning capabilities through few-shot prompting. Additionally, they should be adept at using advanced techniques like chain-of-thought prompting (CoT) to improve the model’s logical processing level. [50] ③ Programming skills. Programming ability is crucial for effective interaction with models, and prompt engineers often write code in languages like Python to generate and process prompts, requiring at least basic coding skills. Experience with frameworks like TensorFlow or PyTorch would also be beneficial. ④ Language proficiency. Prompt engineers “converse” with AI using natural language, so they need a broad vocabulary and skillful word usage to design effective prompts. Although models are trained in multiple languages, English is typically the primary language for training generative AI, necessitating good English proficiency and a deep understanding of vocabulary, nuances, phrasing, and context, as every word in a prompt can influence the outcome. ⑤ Specialized domain knowledge. They need a profound understanding of their professional field, such as marketing, software development, game design, healthcare, finance, etc., which aids in developing accurate prompts. For instance, a prompt engineer responsible for image generation should understand art history, photography, and related knowledge and terminology.
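As one illustration of the chain-of-thought technique mentioned above, the hypothetical prompts below contrast a direct question with a CoT version whose worked example demonstrates intermediate reasoning; the wording is a generic sketch, not a prescribed template.

# Direct prompt: asks only for the final answer.
direct_prompt = (
    "A library had 120 books, lent out 45, and received 30 new ones. "
    "How many books does it have now?"
)

# Chain-of-thought prompt: the worked example shows step-by-step reasoning,
# encouraging the model to reason before answering the new question.
cot_prompt = (
    "Q: A shop had 20 apples, sold 8, and bought 5 more. How many apples now?\n"
    "A: Start with 20. Selling 8 leaves 20 - 8 = 12. Buying 5 gives 12 + 5 = 17. "
    "The answer is 17.\n"
    "Q: A library had 120 books, lent out 45, and received 30 new ones. "
    "How many books does it have now?\n"
    "A: Let's think step by step."
)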
Soft Skills: ① Creativity and problem-solving abilities. Prompt engineers need creative thinking to design unique and innovative prompts that achieve task objectives, and must continuously experiment with different prompt variations to find the most effective approaches, optimizing and innovating workflows and systems. ② Communication and collaboration skills. They need to communicate technical concepts clearly with users, colleagues, and other stakeholders, understanding needs and integrating feedback. ③ Critical thinking. They need to evaluate the context and purpose of prompts to ensure alignment with expected outcomes. ④ Ethical awareness. They should be aware of AI ethics issues and ensure that prompts do not lead to harmful or biased outputs. ⑤ Awareness and ability for lifelong learning. They must keep track of trends in AI development, understand industry needs, and remain at the forefront of innovation. ⑥ Patience. Another valuable quality for prompt engineers is patience: they must deal with hallucinations from large language models (AI hallucinations), which requires patience and a sense of humor to keep guiding the machine while staying level-headed. [51,52,53]
3. Work Scenarios of Prompt Engineers
The work scenarios of prompt engineers depend on the capability boundaries of large language models. The powerful capabilities of large language models enable prompt engineers to perform in numerous work scenarios. Below are some common types of work scenarios and application examples for prompt engineers.
(1) Data-driven. Focused on extracting insights from large datasets to optimize and generate effective prompts. When training machine learning models, these engineers design prompts to improve model accuracy and efficiency. When analyzing datasets to identify trends and patterns, they utilize prompts to enhance the accuracy of data visualization and reporting. For example, BloombergGPT can complete various tasks from sentiment analysis to question-answering systems by processing vast amounts of financial data. [54]
(3) Technical development. Focused on software development and system integration, these engineers use prompts to enhance system performance and user experience. When developing software and applications, they use prompts to optimize user interfaces and improve code quality. In integrating complex systems and services, they ensure system stability and performance through precise prompts. For example, using the AI programming assistant GitHub Copilot, source code can be automatically generated from natural language descriptions, significantly improving the efficiency of solving programming issues through nuanced prompt adjustments. [58]
(4) Research and education. Focused on education and research, these engineers use prompts to enhance learning experiences and research efficiency. In online learning platforms and educational applications, they develop prompts to provide personalized learning experiences and resources. In research projects, they utilize prompts to optimize processes for data collection, analysis, and paper writing. For example, academic writers and researchers, especially novices, can effectively enhance their writing skills and efficiency by mastering prompt engineering skills. [59] In medical education, high-quality prompt design can provide personalized learning experiences and real-time feedback, ensuring accuracy, reducing bias, and maintaining privacy. [60,61] By integrating AI, participation and personalized learning can be enhanced, promoting learning and critical thinking.[62]
4. Growth Paths for Prompt Engineers
Compared to the rapid pace of innovation in the field of artificial intelligence, the current education system shows significant lag in knowledge updates, making it difficult for school education to provide timely, comprehensive, and precise knowledge and skills training. The growth of prompt engineers largely relies on personal autonomous learning. This article attempts to roughly divide the growth path of prompt engineers into three stages: the initial stage of learning technical and theoretical foundations, the intermediate stage of accumulating practical experience and enhancing professional skills, and the advanced stage of cultivating leadership and expanding industry influence.
1. Initial Stage: Learning Technical and Theoretical Foundations
In the initial stage, prompt engineers focus on building a solid theoretical foundation and technical knowledge system. Although tools like ChatGPT have almost no theoretical or technical barriers to use, a deep understanding of their principles is essential for professional prompt engineers. Key areas include the basic principles and methods of natural language processing and generative artificial intelligence, as well as an understanding of the architecture and functions of large language models such as ChatGPT. Mastering basic programming skills, particularly familiarity with languages like Python, lays the groundwork for data processing and analysis and for training and fine-tuning artificial intelligence models. During this stage, official guides and documentation are important learning resources: the documentation for large language model tools such as ChatGPT, Gemini, and Claude provides detailed operational guidance. [64,65] For example, OpenAI’s prompt engineering guide lists six key techniques: ① Write clear instructions; ② Provide reference text; ③ Break complex tasks into simpler subtasks; ④ Give the model time to “think”; ⑤ Use external tools; ⑥ Test changes systematically. [66] For the first technique, OpenAI also provides more detailed guidance. [67] At this stage, by learning the technical and theoretical foundations, prompt engineers concentrate on how to design clear instructions and make effective use of the relevant tools.
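As a minimal sketch of putting the first technique, writing clear instructions, into practice, the example below uses the OpenAI Python SDK; the model name, system message, and user text are illustrative assumptions, and the same pattern carries over to other providers’ APIs.

# Minimal sketch using the OpenAI Python SDK (pip install openai).
# The model name and prompt wording are illustrative; adapt them to the model available.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        # A clear instruction states the role, the task, the output format, and the constraints.
        {"role": "system",
         "content": "You are an assistant that summarizes meeting notes. "
                    "Return exactly three bullet points, each under 15 words."},
        {"role": "user",
         "content": "Summarize: The team agreed to move the launch to May, "
                    "pending a security review and updated documentation."},
    ],
)
print(response.choices[0].message.content)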
2. Intermediate Stage: Accumulating Practical Experience and Enhancing Professional Skills
As they reach the intermediate stage, prompt engineers should place greater emphasis on translating theoretical knowledge into practical experience, participating in concrete projects to deepen their understanding of generative AI. In this process they should learn to design efficient prompts and to evaluate AI model performance. Key activities during this period include strengthening communication with peers, attending industry conferences, seminars, and research projects, and studying excellent examples of prompt design. [68] They should also take professional courses, such as “Generative AI for Everyone” offered by AI pioneer Andrew Ng, Stanford University’s “Natural Language Processing with Deep Learning” (CS224N), Vanderbilt University’s “Prompt Engineering for ChatGPT,” and IBM’s “Generative AI: Prompt Engineering Basics,” which can further deepen their understanding of the technology and its application in practical projects.
3. Advanced Stage: Cultivating Leadership and Expanding Industry Influence
5. How Higher Education Can Support the Training of Prompt Engineers
Undoubtedly, in the rapidly evolving field of artificial intelligence, the growth of prompt engineers requires solid autonomous learning abilities to sustain continuous self-improvement. However, in the face of the sweeping rise of generative artificial intelligence, higher education needs to respond proactively and provide deliberate, organized support for cultivating this emerging talent.
1. Establishing Adaptive Curriculum Systems Oriented Towards Artificial Intelligence
To address the rapidly developing field of artificial intelligence, universities should construct a new dynamic curriculum system that includes core professional courses and related supporting courses for all students, to meet the evolving career demands of prompt engineers. Core professional courses should cover the fundamentals of artificial intelligence, natural language processing, machine learning, data mining, and knowledge graphs. Supporting courses should include software engineering, project management, user experience design, human-computer interaction, and psychology, expanding students’ interdisciplinary knowledge. Simultaneously, universities should emphasize offering general education courses related to soft skills training and ethics education, cultivating students’ critical thinking abilities, cross-cultural communication skills, teamwork spirit, and awareness of professional ethics and social responsibility. In particular, there should be a strengthened focus on educating students about the potential social impacts of artificial intelligence technologies, guiding them to establish a correct view of technology ethics, enabling them to prudently assess the risks that may arise when developing and using artificial intelligence, and promoting the sustainable development of artificial intelligence.
Specifically, for students in computer and related majors, there should be an emphasis on strengthening foundational courses in mathematics and statistics, while teaching the latest AI technologies in conjunction with practical cases to ensure students possess a solid professional foundation and cutting-edge knowledge. At the same time, interdisciplinary capabilities should be cultivated through humanities and social science courses and cross-disciplinary team projects. For non-computer major students, introductory courses on computer science and AI applications should be offered to enhance their AI literacy and cultivate their ability to apply AI technologies in their respective fields through interdisciplinary project practice.
Currently, many universities at home and abroad have started taking action. For example, Arizona State University in the United States announced the launch of an “AI Acceleration Program,” becoming the world’s first higher education institution to collaborate with OpenAI. [72] Tsinghua University is actively embracing artificial intelligence, initiating the construction of “pilot courses empowered by artificial intelligence” and encouraging teaching teams to develop or apply artificial intelligence technologies represented by large language models; it gives these teams priority access to Tsinghua’s foundation model research center and provides the necessary technical guidance and support. [73] It has also begun building “CS+ to AI+” course clusters, with liberal arts “AI+ courses” emphasizing empowerment through information technology and science and engineering “AI+ courses” emphasizing interdisciplinary integration. Similarly, Nanjing University announced that it would introduce a “General Education Core Curriculum System in Artificial Intelligence” for all incoming undergraduate students in September 2024. [74]
2. Innovating the “AI+” Education Model and Strengthening Interdisciplinary, Practice-Oriented Teaching
Innovating the “AI+” teaching model means encouraging educators to make full use of AI tools throughout the teaching process and implementing project-based training so that knowledge and skills are applied in practice. [75] Universities should actively adopt and promote AI tools and, with norms for AI use in place, “integrate artificial intelligence technology into the entire process and all aspects of education, teaching, and management,” rather than taking an ostrich-like approach and avoiding it. [76] At the same time, widely adopting project-based teaching methods centered on practical problems enhances the practicality and effectiveness of teaching. In addition, online educational resources can increase learning flexibility, with AI assistants providing students one-on-one learning and project guidance.
Universities should also reconstruct an interdisciplinary, modular knowledge system and build cross-disciplinary learning and practice platforms. Combining courses from fields such as computer science, linguistics, psychology, and ethics with industry collaboration projects fosters interdisciplinary integration. Breaking down barriers between disciplines and departments, opening course selection across the university, and encouraging students to take interdisciplinary courses broaden their knowledge horizons. Cross-disciplinary teams of teachers and students can be formed to explore new application scenarios for prompt engineering.
Given that the nature of prompt engineers’ work emphasizes practical experience, higher education must increase the proportion of authentic project-based teaching. Various forms, such as laboratories, workshops, and internships, can be used to immerse students in real work environments, honing their practical skills. Additionally, innovative activities like AI competitions, hackathons, and entrepreneurial contests can cultivate students’ innovative awareness and team spirit.
3. Deepening School-Enterprise Cooperation in Production, Education, and Research
Enterprises are the principal actors in technological innovation; leading enterprises in particular stand at the forefront of AI development and application, master the latest AI technologies, possess the richest and most authentic AI practice projects, and hold vast amounts of data. Universities should establish close cooperative relationships with leading AI and technology enterprises and make full use of enterprise resources such as technology, data, and projects. Such collaboration can involve enterprises in developing AI courses and AI-based teaching management and assessment tools, and in inviting industry experts as guest lecturers, ensuring that course content aligns with industry needs and that students understand the latest industry technology trends. By providing internship positions and joint training opportunities, students can immerse themselves in real project development and build engineering practice experience. Enterprises, in turn, can use the collaboration to advance research projects and identify future talent, achieving mutual benefit for universities and enterprises.
4. Providing Lifelong Learning Resources and Enhancing Career Guidance
The rapid advancement of artificial intelligence technology requires prompt engineers to possess a mindset and ability for lifelong learning. Universities also need to provide continuous learning resource platforms for current students and alumni, such as offering advanced courses, organizing learning communities, encouraging the establishment of artificial intelligence and prompt engineering clubs, and sharing the latest research findings, helping them continuously update their knowledge and skills.
Simultaneously, since prompt engineering is an emerging profession, universities should enhance guidance on the career development of prompt engineers. Through career planning courses, company visits, and job experience sharing, students can gain insights into employment prospects and career requirements in this field, supporting them in making informed decisions and smoothly entering the workforce.
With the inevitable large-scale application and popularization of artificial intelligence systems in the foreseeable future, interacting with AI systems will become part of everyone’s daily life, and prompt engineering will be a survival skill that must be mastered; in that sense, everyone will become a “prompt engineer.” Technological change drives societal evolution, and everyone needs to learn how to coexist and collaborate with intelligent tools. “Artificial intelligence will not replace your job, but those who master AI tools will.” This not only reveals the trend of future work but also underscores the importance of lifelong learning and adapting to change. In this unprecedented era of transformation, the role of education becomes particularly crucial, for it will determine whether individuals and society as a whole can transition smoothly into the new era.
References
[1] OPENAI. ChatGPT [EB/OL]. [2024-01-21]. https://openai.com/chatgpt.
[2] MILMO D. ChatGPT reaches 100 million users two months after launch [N/OL]. The Guardian, 2023-02-02 [2024-03-12]. https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-million-users-open-ai-fastest-growing-app.
[3] HARWELL D. Tech’s hottest new job: AI whisperer. No coding required [EB/OL]. (2023-02-25) [2024-03-03]. https://www.washingtonpost.com/technology/2023/02/25/prompt-engineers-techs-next-big-job/.
[4] MOK A. “Prompt engineering” is one of the hottest jobs in generative AI. Here’s how it works [EB/OL]. [2024-03-03]. https://www.businessinsider.com/prompt-engineering-ai-chatgpt-jobs-explained-2023-3.
[5] KELLY J. The Hot, New High-Paying Career Is An AI Prompt Engineer [EB/OL]. [2024-03-10]. https://www.forbes.com/sites/jackkelly/2024/03/06/the-hot-new-high-paying-career-is-an-ai-prompt-engineer/.
[6] POPLI N. How to Get a Six-Figure Job as an AI Prompt Engineer [EB/OL]. [2023-11-28]. https://time.com/6272103/ai-prompt-engineer-job/.
[7] ALTMAN S. “Writing a really great prompt for a chatbot persona is an amazingly high-leverage skill and an early example of programming in a little bit of natural language” [EB/OL]. (2023-02-20) [2024-03-20]. https://twitter.com/sama/status/1627796054040285184.
[8] Tech’s hottest new job: AI whisperer. No coding required. [EB/OL]. (2023-02-25) [2024-03-03]. https://www.washingtonpost.com/technology/2023/02/25/prompt-engineers-techs-next-big-job/.
[9] 新智元. 不写代码,拿百万年薪,ChatGPT提示工程或造就15亿码农大军 – 36氪 [EB/OL]. (2023-02-27) [2024-03-03]. https://36kr.com/p/2149611543120131.
[10] PETERS M A. Beyond technological unemployment: the future of work [J]. Educational philosophy and theory, 2020, 52(5): 485-491.
[11] LEE L. AI may kill the one job everyone thought it would create [EB/OL]. [2024-03-09]. https://www.businessinsider.com/chaptgpt-large-language-model-ai-prompt-engineering-automated-optimizer-2024-3.
[12] BRADSHAW T. Is becoming a ‘prompt engineer’ the way to save your job from AI [EB/OL]. (2022-12-13) [2024-01-15]. https://www.ft.com/content/0deda1e7-4fbf-46bc-8eee-c2049d783259.
[13] WHITE J, FU Q, HAYS S, et al. A prompt pattern catalog to enhance prompt engineering with chatgpt [J]. Arxiv preprint arXiv:2302.11382, 2023: 1-19.
[14] 傅瑞明. 提示工程师的工作方法研究 [J]. 江苏通信, 2023, 39(5): 93-98.
[15] LIU V, CHILTON L B. Design guidelines for prompt engineering text-to-image generative models: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, April 29-May 5, 2022 [C]. New York: Association for Computing Machinery, 2022.
[16] 亚马逊公司. 什么是大型语言模型? – LLM人工智能简介 – AWS [EB/OL]. [2024-03-06]. https://aws.amazon.com/cn/what-is/large-language-model/.
[17] ANTHROPIC. Key capabilities [EB/OL]. [2024-03-09]. https://docs.anthropic.com/claude/docs/intro-to-claude.
[18] 阿里云. 如何快速开始通义千问模型服务灵积 (DashScope) – 阿里云帮助中心 [EB/OL]. [2024-03-09]. https://help.aliyun.com/zh/dashscope/developer-reference/quick-start?spm=a2c4g.11186623.0.i7.
[19] GOOGLE. LLM概念指南 | Google AI for Developers [EB/OL]. [2024-03-06]. https://ai.google.dev/docs/concepts?hl=zh-cn.
[20] OPENAI. Sora [EB/OL]. [2024-02-27]. https://openai.com/sora.
[21] VYNCK G D, ELKER J, FEB 28 T R, et al. The future of AI video is here, super weird flaws and all [EB/OL]. [2024-02-28]. https://www.washingtonpost.com/technology/interactive/2024/ai-video-sora-openai-flaws/.
[22] LEVY S. OpenAI’s sora turns AI prompts into photorealistic videos [EB/OL]. [2024-02-28]. https://www.wired.com/story/openai-sora-generative-ai-video/.
[23] OPENAI. GPT-4 [EB/OL]. [2024-03-06]. https://openai.com/research/gpt-4.
[24] Introducing the next generation of Claude [EB/OL]. [2024-03-06]. https://www.anthropic.com/news/claude-3-family.
[25] Lmsys chatbot arena leaderboard-a hugging face space by lmsys [EB/OL]. [2024-03-10]. https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard.
[26] PINAYA W H L, GRAHAM M S, KERFOOT E, et al. Generative AI for medical imaging: extending the monai framework [EB/OL]. [2024-03-18]. http://arxiv.org/abs/2307.15208. DOI:10.48550/arXiv.2307.15208.
[27] OPENAI. Generative models [EB/OL]. [2024-03-18]. https://openai.com/research/generative-models.
[28] CHOMSKY N. Three models for the description of language [J]. IRE transactions on information theory, 1956, 2(3): 113-124.
[29] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets [J]. Advances in neural information processing systems, 2014, 2: 2672–2680.
[30] KINGMA D P, WELLING M. Auto-encoding variational bayes [J]. ArXiv preprint arXiv:1312.6114, 2013: 1-14.
[31] BENGIO Y, LECUN Y, HINTON G. Deep learning for AI [J]. Communications of the ACM, 2021, 64(7): 58-65.
[32] RADFORD A, WU J, CHILD R, et al. Language models are unsupervised multitask learners [J]. OpenAI blog, 2019, 1(8): 1-24.
[33] GOOGLE. LLM概念指南 | Google AI for developers [EB/OL]. [2024-03-06]. https://ai.google.dev/docs/concepts?hl=zh-cn.
[34] AKIN F K. f/awesome-chatgpt-prompts [EB/OL]. (2024-03-16) [2024-03-16]. https://github.com/f/awesome-chatgpt-prompts.
[35] GOOGLE. LLM概念指南 | Google AI for Developers [EB/OL]. [2024-03-06]. https://ai.google.dev/docs/concepts?hl=zh-cn.
[36] 亚马逊公司. 什么是提示工程? – 人工智能提示工程简介 – AWS [EB/OL]. [2023-11-28]. https://aws.amazon.com/cn/what-is/prompt-engineering/.
[37] MCKINSEY. What is prompt engineering? | McKinsey [EB/OL]. [2023-11-04]. https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-prompt-engineering.
[38] BERRYMAN A Z JOHN. A developer’s guide to prompt engineering and LLMs [EB/OL]. (2023-07-17) [2024-01-15] https://github.blog/2023-07-17-prompt-engineering-guide-generative-ai-llms/.
[39] MESKO B. Prompt engineering as an important emerging skill for medical professionals: tutorial [J]. Journal of medical internet research, 2023, 25: e50638.
[40] LIU P, YUAN W, FU J, et al. Pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing [EB/OL]. [2024-03-13]. http://arxiv.org/abs/2107.13586.
[41] 提示词工程师: 为AI当翻译新华社客户端 [EB/OL]. [2023-11-28]. https://h.xinhuaxmt.com/vh512/share/11783040.
[42] ANTHROPIC. Prompt engineering [EB/OL]. [2024-03-06]. https://docs.anthropic.com/claude/docs/prompt-engineering.
[43] YE Q, AXMED M, PRYZANT R, et al. Prompt engineering a prompt engineer [EB/OL]. [2023-11-26]. http://arxiv.org/abs/2311.05661.
[44] 亚马逊公司. 什么是提示工程? – 人工智能提示工程简介 – AWS [EB/OL]. [2023-11-28]. https://aws.amazon.com/cn/what-is/prompt-engineering/.
[45] WHITE J, FU Q, HAYS S, et al. A prompt pattern catalog to enhance prompt engineering with chatgpt [J]. ArXiv preprint arXiv:2302.11382, 2023: 1-19.
[46] ANTHROPIC. Prompt engineering [EB/OL]. [2024-03-06]. https://docs.anthropic.com/claude/docs/prompt-engineering.
[47] ANTHROPIC. Prompt engineering [EB/OL]. [2024-03-06]. https://docs.anthropic.com/claude/docs/prompt-engineering.
[48] JASON NG. 那么,我是如何使用ChatGPT的 [EB/OL]. [2024-01-25]. https://mp.weixin.qq.com/s/K3mjmkLye79Khem18QHORw.
[49] ANTHROPIC. Research prompt engineer [EB/OL]. [2024-03-10]. https://jobs.lever.co/Anthropic/a2c8b571-462b-434d-b7ba-68c827e91f4d.
[50] IBM. What is prompt engineering? | IBM [EB/OL]. [2024-01-08]. https://www.ibm.com/topics/prompt-engineering.
[51] GOOGLE CLOUD. What are AI hallucinations [EB/OL]. [2024-03-10]. https://cloud.google.com/discover/what-are-ai-hallucinations.
[52] ALTEXSOFT. The role of prompt engineer: skills and background [EB/OL]. [2024-03-09]. https://www.altexsoft.com/blog/prompt-engineer/.
[53] GEWIRTZ D. Six skills you need to become an AI prompt engineer [EB/OL]. [2024-03-03]. https://www.zdnet.com/article/six-skills-you-need-to-become-an-ai-prompt-engineer/.
[54] WU S, IRSOY O, LU S, et al. Bloomberggpt: a large language model for finance [J]. ArXiv preprint arXiv:2303.17564, 2023: 1-76.
[55] SHORT C E, SHORT J C. The artificially intelligent entrepreneur: ChatGPT, prompt engineering, and entrepreneurial rhetoric creation [J]. Journal of business venturing insights, 2023, 19: e00388.
[56] 喻国明,李钒. 提示工程师: 未来新闻工作者的身份转变与逻辑重构 [J]. 未来传播, 2023, 30(4): 2-12+140.
[57] 喻国明,曾嘉怡,黄沁雅. 提示工程师: 生成式 AI浪潮下传播生态变局的关键加速器 [J]. 出版广角, 2023(11): 26-31.
[58] DENNY P, KUMAR V, GIACAMAN N. Conversing with copilot: exploring prompt engineering for solving cs1 problems using natural language: Proceedings of the 54th ACM Technical Symposium on Computer Science Education V.1, March 15-18, 2023 [C]. New York: Association for Computing Machinery, 2023.
[59] GIRAY L. Prompt engineering with ChatGPT: a guide for academic writers [J]. Annals of biomedical engineering, 2023, 51(12): 2629-2633.
[60] HESTON T F, KHUN C. Prompt engineering in medical education [J]. International medical education, 2023, 2(3): 198-205.
[61] MESKO B. Prompt engineering as an important emerging skill for medical professionals: tutorial [J]. Journal of medical internet research, 2023, 25: e50638.
[62] KASNECI E, SEBLER K, KUCHEMANN S, et al. ChatGPT for good? On opportunities and challenges of large language models for education [J]. Learning and individual differences, 2023, 103: 1-14.
[63] UDEMY. ChatGPT prompt engineering for UX design [EB/OL]. [2024-03-18]. https://www.udemy.com/course/chatgpt-prompt-engineering-for-ux-design/.
[64] GOOGLE. 提示设计策略 | Google AI for Developers [EB/OL]. [2024-03-06]. https://ai.google.dev/docs/prompt_best_practices?hl=zh-cn.
[65] ANTHROPIC. Prompt library [EB/OL]. [2024-03-06]. https://docs.anthropic.com/claude/page/prompts.
[66] OPENAI. Prompt engineering [EB/OL]. [2024-01-08]. https://platform.openai.com/docs/guides/prompt-engineering.
[67] OPENAI. Prompt engineering [EB/OL]. [2024-01-08]. https://platform.openai.com/docs/guides/prompt-engineering.
[68] AKIN F K. f/awesome-chatgpt-prompts [EB/OL]. (2024-03-16) [2024-03-16]. https://github.com/f/awesome-chatgpt-prompts.
[69] 吴恩达. Generative AI for Everyone [EB/OL]. [2023-11-05]. https://www.coursera.org/learn/generative-ai-for-everyone?trk_ref=articleProductCard.
[70] WHITE J. Prompt engineering for ChatGPT [EB/OL]. [2023-11-05]. https://www.coursera.org/learn/prompt-engineering?trk_ref=articleProductCard.
[71] CLASS CENTRAL. Free course: generative AI: prompt engineering basics from IBM [EB/OL]. (2023-10-01) [2023-11-28]. https://www.classcentral.com/course/generative-ai-prompt-engineering-for-everyone-264207.
[72] ASU. A new collaboration with OpenAI charts the future of AI in higher education | ASU News [EB/OL]. [2024-03-10]. https://news.asu.edu/20240118-university-news-new-collaboration-openai-charts-future-ai-higher-education.
[73] 清华大学本科教学. 2024年人工智能赋能教学试点课程申报启动 [EB/OL]. [2024-03-02]. http://mp.weixin.qq.com/s?__biz=MzI1Nzg2NzcyNw==&mid=2247494875&idx=1&sn=732de9bb5e77d575df62f95ddf2b5966&chksm=ea12467bdd65cf6d4b24a65a012944b49bb513ff5835de5f4e24a4986b784e6f731330d400e8#rd.
[74] 米洋,王晓艳. 首开先河!南京大学开设“人工智能通识核心课程体系” – 南京大学 [EB/OL]. [2024-03-14]. https://www.nju.edu.cn/info/1056/356301.htm.
[75] 符杰,钟周,乔伟峰. 美国卓越工程师培养模式研究:2017—2023年工程教育“戈登奖”获奖项目分析 [J]. 世界教育信息, 2023, 36(12): 3-12.
[76] 教育部. 将把人工智能技术深入到教育教学和管理全过程、全环节 [EB/OL]. [2024-03-13]. https://www.chinanews.com.cn/gn/2024/03-09/10177411.shtml.
[77] 港大禁用ChatGPT等AI工具为全港大学首例 [EB/OL]. [2024-03-11]. https://www.chinanews.com.cn/dwq/2023/02-18/9956047.shtml.
[78] 李晓红. 强化企业科技创新主体地位 [EB/OL]. [2024-03-08]. https://www.gov.cn/xinwen/2022-12/26/content_5733549.html.