Editor’s Overview
This reading list, published by Latent Space, selects 50 papers in the field of AI engineering, organized into ten core modules, and offers a valuable resource for AI engineers and other practitioners looking to sharpen their skills. The frontier LLM section covers the latest developments in the major model families; the benchmarking and evaluation module introduces the main benchmarks used to assess model capabilities; the prompt engineering section collects practical tutorials and core techniques; the RAG chapter explains its foundational theory; the agent section covers key benchmarks and core techniques; the code generation section spans datasets, models, and evaluation benchmarks; the visual model section surveys the major vision models and trends in multimodality; the speech model section includes recognition and synthesis models; the image and video model section introduces diffusion models and related tools; and the model fine-tuning chapter covers fine-tuning techniques and data construction methods.
This is a reading list published by the AI engineering community Latent Space. It selects 50 high-value papers, models, and blog posts covering ten core modules of AI engineering, with the aim of helping AI engineers and enthusiasts build a systematic knowledge base and strengthen their practical skills.
About Latent Space
Latent Space is a technical community focused on AI engineering, known for its high-quality newsletter, its top-ranked podcast (among the top ten technology podcasts in the US), and an active online and offline community; it has been called the “number one AI engineering podcast.” Its 13.1K followers on X include Elon Musk and podcast host Lex Fridman.

Without further ado, let’s get started.
1. Cutting-edge LLM
This section focuses on large language models (LLMs), covering the GPT series (including the GPT-4o system card), the Claude 3 series, the Gemini series, and open-source models such as the Llama series.
List:
OpenAI Series: Leading Multiple Technological Innovations
- GPT-1: https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf (Opened the era of pre-trained language models)
- GPT-2: https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf (Showed the capabilities of large language models)
- GPT-3: https://arxiv.org/abs/2005.14165 (Milestone model with few-shot learning capabilities)
- Codex: https://arxiv.org/abs/2107.03374 (Focused on code generation)
- InstructGPT: https://arxiv.org/abs/2203.02155 (Improved instruction following via reinforcement learning from human feedback)
- GPT-4 Technical Report: https://arxiv.org/abs/2303.08774 (Classic multimodal model)
- GPT-3.5: https://openai.com/index/chatgpt/ (The model behind ChatGPT)
- GPT-4o: https://openai.com/index/hello-gpt-4o/ (Latest release with stronger multimodal, real-time interaction)
- o1: https://openai.com/index/introducing-openai-o1-preview/ (First-generation reasoning model)
- o3: https://openai.com/index/deliberative-alignment/ (Second-generation reasoning model)
Anthropic Series: One of OpenAI’s Strong Competitors
- Claude 3: https://www-cdn.anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_Card_Claude_3.pdf (Excels on multiple evaluation benchmarks)
- Claude 3.5 Sonnet: https://www.anthropic.com/news/claude-3-5-sonnet (Latest model, with further improvements in performance and speed)
Google Series: Outstanding Performance and Multimodal Capabilities
- Gemini 1: https://arxiv.org/abs/2312.11805 (Multimodal large model supporting text, image, and audio inputs)
- Gemini 2.0 Flash: https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/#gemini-2-0-flash (A lighter, faster version)
- Gemini 2.0 Flash Thinking: https://ai.google.dev/gemini-api/docs/thinking-mode (Unlocks the model’s reasoning capabilities)
- Gemma 2: https://arxiv.org/abs/2408.00118 (Google’s latest open-source model)
Meta Series: Open Source and High Performance
- LLaMA: https://arxiv.org/abs/2302.13971 (Pioneer of open-source large language models)
- Llama 2: https://arxiv.org/abs/2307.09288 (Significantly improved performance; licensed for commercial use)
- Llama 3: https://arxiv.org/abs/2407.21783 (Latest release in the family, achieving state-of-the-art results among open-source models)
Mistral AI Series: Europe’s OpenAI
- Mistral 7B: https://arxiv.org/abs/2310.06825 (A small yet elegant model)
- Mixtral of Experts: https://arxiv.org/abs/2401.04088 (Uses a mixture-of-experts (MoE) architecture)
- Pixtral 12B: https://arxiv.org/abs/2410.07073 (A multimodal model with 12 billion parameters)
DeepSeek Series: A Rising Star in China’s AI Field
- DeepSeek V1: https://arxiv.org/abs/2401.02954 (First-generation DeepSeek model)
- DeepSeek Coder: https://arxiv.org/abs/2401.14196 (Focused on code generation)
- DeepSeek MoE: https://arxiv.org/abs/2401.06066 (DeepSeek’s mixture-of-experts model)
- DeepSeek V2: https://arxiv.org/abs/2405.04434 (Second-generation DeepSeek model)
- DeepSeek V3: https://github.com/deepseek-ai/DeepSeek-V3 (DeepSeek’s latest and strongest model)
Apple Series: Edge Intelligence
- Apple Intelligence: https://arxiv.org/abs/2407.21075 (Apple’s entry into edge intelligence)
2. Benchmarking and Evaluation
How to objectively measure the “IQ” of AI models? This section introduces mainstream model evaluation benchmark tests and evaluation frameworks. Just like real-world student exams, benchmark tests can objectively assess AI models’ capabilities on specific tasks, helping us better understand the strengths and weaknesses of the models.
List:
General Knowledge and Reasoning Ability Assessment
Evaluating models’ understanding and reasoning abilities in various knowledge domains.
- MMLU (Massive Multitask Language Understanding): https://arxiv.org/abs/2009.03300 (One of the most widely used knowledge tests, covering 57 subjects across the humanities, STEM, and social sciences; a minimal scoring sketch follows this list)
- MMLU Pro (Professional-Level MMLU): https://arxiv.org/abs/2406.01574 (An upgraded, harder version of MMLU, closer to professional-level exams)
- GPQA & GPQA Diamond: https://arxiv.org/abs/2311.12022 (Graduate-level questions of very high quality and difficulty; GPQA Diamond is its highest-quality, hardest subset)
- BIG-Bench: https://arxiv.org/abs/2206.04615 (Over 200 tasks of different types, comprehensively assessing a model’s capabilities)
- BIG-Bench Hard: https://arxiv.org/abs/2210.09261 (A harder subset of BIG-Bench, filtered down to the most challenging tasks)
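To make the “exam” analogy concrete, here is a minimal sketch of how a multiple-choice benchmark such as MMLU is typically scored. The `ask_model` function and the sample question are placeholders for whatever model API and dataset loader you actually use; real harnesses add few-shot examples, answer normalization, and per-subject reporting.

```python
# Minimal sketch of multiple-choice benchmark scoring (MMLU-style).
# `ask_model` is a placeholder for your own model call; swap in any LLM API.

def ask_model(prompt: str) -> str:
    """Placeholder: return the model's raw text answer for a prompt."""
    raise NotImplementedError("plug in your LLM API call here")

def format_question(q: dict) -> str:
    options = "\n".join(f"{letter}. {text}" for letter, text in zip("ABCD", q["choices"]))
    return (
        f"Question: {q['question']}\n{options}\n"
        "Answer with a single letter (A, B, C, or D)."
    )

def score(dataset: list[dict]) -> float:
    correct = 0
    for q in dataset:
        reply = ask_model(format_question(q)).strip().upper()
        # Take the first A-D letter that appears in the reply.
        predicted = next((ch for ch in reply if ch in "ABCD"), None)
        correct += int(predicted == q["answer"])
    return correct / len(dataset)

if __name__ == "__main__":
    sample = [{
        "question": "2 + 2 = ?",
        "choices": ["3", "4", "5", "6"],
        "answer": "B",
    }]
    # print(score(sample))  # uncomment once ask_model is implemented
```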
Long Text Reasoning Ability Assessment
Evaluating models’ capabilities in processing long texts and conducting complex reasoning.
- MuSR (Multi-Step Reasoning): https://arxiv.org/abs/2310.16049 (Evaluates a model’s ability to perform multi-step reasoning over long documents)
- LongBench: https://arxiv.org/abs/2412.15204 (Multi-task, bilingual long-text understanding benchmark)
- BABILong: https://arxiv.org/abs/2406.10149 (Synthetic long-text reasoning dataset)
- Lost in the Middle: https://arxiv.org/abs/2307.03172 (Research on how models use information positioned throughout long contexts)
- Needle in a Haystack: https://github.com/gkamradt/LLMTest_NeedleInAHaystack (“Needle in a haystack” test, assessing the model’s ability to extract a key fact from a long context; see the sketch after this list)
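As referenced in the “Needle in a Haystack” entry above, the test itself is simple to reproduce: hide a short fact at a chosen depth inside a long filler document, ask the model to retrieve it, and repeat across context lengths and depths. A minimal sketch follows; `ask_model` is again a placeholder for your model call.

```python
# Minimal needle-in-a-haystack sketch: plant a fact at a given depth in filler
# text, then check whether the model can retrieve it.

FILLER = "The grass is green. The sky is blue. The sun is bright. "
NEEDLE = "The secret passcode is 7481."
QUESTION = "What is the secret passcode? Answer with the number only."

def build_haystack(total_chars: int, depth: float) -> str:
    """Place NEEDLE at `depth` (0.0 = start, 1.0 = end) of a filler document."""
    filler = (FILLER * (total_chars // len(FILLER) + 1))[:total_chars]
    cut = int(len(filler) * depth)
    return filler[:cut] + " " + NEEDLE + " " + filler[cut:]

def run_trial(ask_model, total_chars: int, depth: float) -> bool:
    prompt = build_haystack(total_chars, depth) + "\n\n" + QUESTION
    return "7481" in ask_model(prompt)

# Example grid over context sizes and depths:
# for chars in (10_000, 50_000, 200_000):
#     for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
#         print(chars, depth, run_trial(my_model, chars, depth))
```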
Mathematical Ability Assessment
Evaluating models’ mathematical reasoning and problem-solving abilities.
- MATH: https://arxiv.org/abs/2103.03874 (12,500 competition-level math problems covering algebra, geometry, probability, and more)
- AIME (American Invitational Mathematics Examination): https://www.kaggle.com/datasets/hemishveeraboina/aime-problem-set-1983-2024 (Invitational competition with difficulty between AMC and IMO)
- FrontierMath: https://arxiv.org/abs/2411.04872 (Focuses on advanced mathematical reasoning, with exceptionally difficult, research-level problems)
- AMC10 & AMC12: https://github.com/ryanrudes/amc (American Mathematics Competitions; AMC10 for students up to 10th grade, AMC12 up to 12th grade)
Instruction Following Ability Assessment
Evaluating models’ understanding and execution of instructions.
- IFEval (Instruction Following Evaluation): https://arxiv.org/abs/2311.07911 (Evaluates a model’s ability to follow various types of instructions)
- MT-Bench (Multi-Turn Benchmark): https://arxiv.org/abs/2306.05685 (Evaluates instruction following in multi-turn dialogue)
Abstract Reasoning Ability Assessment
Evaluating models’ abstract reasoning and pattern recognition abilities.
- ARC AGI (Abstraction and Reasoning Corpus): https://arcprize.org/arc (Evaluates general intelligence by challenging models to perform human-like abstract reasoning)
3. Prompt Engineering, In-context Learning, and Chain of Thought
How to guide models to generate results that better meet requirements through prompt techniques? This section introduces prompt engineering, in-context learning (ICL), and chain of thought (CoT) techniques, helping us better interact with AI models.
List:
Practical Tutorials
- The Prompt Report: https://arxiv.org/abs/2406.06608 (A recent, comprehensive survey of prompt engineering)
- Lilian Weng’s Blog: https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/ (Lilian Weng’s systematic summary of prompt engineering)
- Eugene Yan’s Blog: https://eugeneyan.com/writing/prompting/ (Prompting tips shared by Eugene Yan)
- Anthropic’s Prompt Engineering Tutorial: https://github.com/anthropics/prompt-eng-interactive-tutorial (Anthropic’s official step-by-step tutorial on building effective prompts)
- AI Engineer Workshop: https://www.youtube.com/watch?v=hkhDdcM5V94 (Video sharing practical prompt engineering experience)
Core Technologies
- Chain-of-Thought (CoT): https://arxiv.org/abs/2201.11903 (Landmark work on chain-of-thought prompting; a minimal sketch follows this list)
- Scratchpads: https://arxiv.org/abs/2112.00114 (Gives the model “scratch paper,” improving its reasoning)
- Let’s Think Step By Step: https://arxiv.org/abs/2205.11916 (The classic zero-shot prompt that became the hallmark phrase of chain-of-thought prompting)
- Tree of Thoughts (ToT): https://arxiv.org/abs/2305.10601 (Tree-structured reasoning, improving planning and exploration)
- Prompt Tuning: https://aclanthology.org/2021.emnlp-main.243/ (Soft prompts for adjusting model behavior)
- Prefix-Tuning: https://arxiv.org/abs/2101.00190 (Adds trainable prefixes to steer model outputs)
- Adjust Decoding: https://arxiv.org/abs/2402.10200 (Improves reasoning by adjusting the decoding strategy)
- Representation Engineering: https://vgel.me/posts/representation-engineering/ (Steers generation by directly modifying the model’s hidden states)
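As a concrete illustration of the chain-of-thought entries above, the zero-shot variant amounts to a single extra line in the prompt, optionally followed by a second call that extracts the final answer. A minimal sketch, assuming a generic `ask_model` function (a placeholder, not any particular API):

```python
# Zero-shot chain-of-thought: append "Let's think step by step" so the model
# writes out intermediate reasoning before the final answer.

PROBLEM = (
    "A cafeteria had 23 apples. It used 20 to make lunch and bought 6 more. "
    "How many apples does it have now?"
)

direct_prompt = PROBLEM + "\nAnswer:"                      # plain prompting
cot_prompt = PROBLEM + "\nLet's think step by step."       # zero-shot CoT

# Two-stage pattern from the zero-shot CoT paper: first elicit reasoning,
# then ask for the final answer given that reasoning.
def zero_shot_cot(ask_model, problem: str) -> str:
    reasoning = ask_model(problem + "\nLet's think step by step.")
    return ask_model(
        problem + "\n" + reasoning + "\nTherefore, the answer (a number) is"
    )
```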
Automated Prompt Engineering
- Automatic Prompt Engineering (APE): https://arxiv.org/abs/2211.01910 (Automatically generates and optimizes prompts)
- DSPy: https://arxiv.org/abs/2310.03714 (The DSPy framework: building complex AI systems by programming rather than hand-writing prompts)
4. Retrieval-Augmented Generation (RAG)
RAG (Retrieval-Augmented Generation) combines the strengths of retrieval and generation, using external knowledge bases to improve model outputs. This section introduces Meta’s RAG paper, the MTEB embedding benchmark, GraphRAG, and the RAGAS evaluation framework. Vector databases, an important piece of RAG infrastructure, are also worth understanding.
List:
Foundational Theories
- Introduction to Information Retrieval: https://nlp.stanford.edu/IR-book/information-retrieval-book.html (A classic textbook in the field of information retrieval)
- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks: https://arxiv.org/abs/2005.11401 (Meta’s RAG paper, the landmark work on RAG; a minimal pipeline sketch follows this list)
- RAG 2.0: https://contextual.ai/introducing-rag2/ (The evolution of RAG technology)
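To ground the definition above, here is a minimal sketch of the retrieve-then-generate loop referenced in the Meta RAG entry. The `embed` and `generate` functions are placeholders for your embedding model and LLM; real systems add chunking, reranking, and a vector database instead of the in-memory search shown here.

```python
import numpy as np

# Minimal RAG sketch: embed documents, retrieve the top-k most similar to the
# query, and stuff them into the generation prompt. `embed` and `generate`
# are placeholders for your embedding model and LLM.

def top_k(query: str, docs: list[str], embed, k: int = 3) -> list[str]:
    doc_vecs = np.array([embed(d) for d in docs])
    q_vec = np.array(embed(query))
    # Cosine similarity between the query and every document.
    sims = doc_vecs @ q_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-8
    )
    return [docs[i] for i in np.argsort(-sims)[:k]]

def rag_answer(query: str, docs: list[str], embed, generate) -> str:
    context = "\n\n".join(top_k(query, docs, embed))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return generate(prompt)
```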
Core Technologies
- HyDE (Hypothetical Document Embeddings): https://docs.llamaindex.ai/en/stable/optimizing/advanced_retrieval/query_transformations/ (Improves retrieval by embedding a hypothetical answer document instead of the raw query)
- Chunking: https://research.trychroma.com/evaluating-chunking (Evaluating chunking strategies)
- Rerank: https://cohere.com/blog/rerank-3pt5 (Re-ranking to refine the ordering of retrieval results)
- MTEB (Massive Text Embedding Benchmark): https://arxiv.org/abs/2210.07316 (Benchmark for evaluating the performance of text embedding models)
Advanced Technologies
- GraphRAG: https://arxiv.org/pdf/2404.16130 (Combines knowledge graphs with RAG to strengthen knowledge reasoning)
- RAGAS: https://arxiv.org/abs/2309.15217 (Automated framework for evaluating RAG system performance)
Practical Guides
- LlamaIndex: https://docs.llamaindex.ai/en/stable/understanding/rag/ (LlamaIndex’s practical RAG tutorials and tooling)
- LangChain: https://python.langchain.com/docs/tutorials/rag/ (LangChain’s RAG integrations and example code)
5. AI Agents
Agents are a hot topic for 2025 and a likely future form of AI: systems that perceive their environment, make decisions, and take actions much as humans do. This section introduces important agent-related papers such as SWE-Bench, ReAct, MemGPT, and Voyager.
List:
Benchmarking
- SWE-Bench: https://arxiv.org/abs/2310.06770 (Evaluates an agent’s ability to solve real-world GitHub software engineering issues)
- SWE-Agent: https://arxiv.org/abs/2405.15793 (LLM-based software engineering agent)
- SWE-Bench Multimodal: https://arxiv.org/abs/2410.03859 (Multimodal version of SWE-Bench)
- Konwinski Prize: https://kprize.ai/ (A prize rewarding agents that make outstanding contributions to software engineering automation)
Core Technologies
- ReAct: Synergizing Reasoning and Acting in Language Models: https://arxiv.org/abs/2210.03629 (The ReAct framework, interleaving reasoning and acting; a minimal loop sketch follows this list)
- MemGPT: Towards LLMs as Operating Systems: https://arxiv.org/abs/2310.08560 (Gives agents long-term memory)
- MetaGPT: The Multi-Agent Framework: https://arxiv.org/abs/2308.00352 (Multi-agent meta-programming framework, letting multiple agents work like a team through role assignment and collaboration)
- AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation: https://arxiv.org/abs/2308.08155 (Microsoft’s open-source framework for building complex LLM applications by defining and combining multiple agents)
- Smallville (Generative Agents: Interactive Simulacra of Human Behavior): https://arxiv.org/abs/2304.03442 & https://github.com/joonspk-research/generative_agents (From Stanford and Google, simulated agents with social behaviors)
- Voyager: An Open-Ended Embodied Agent with Large Language Models: https://arxiv.org/abs/2305.16291 (NVIDIA’s Minecraft agent, able to continuously learn, explore, and discover in the Minecraft world)
- Agent Workflow Memory: https://arxiv.org/abs/2409.07429 (Improves planning and execution by introducing a workflow memory mechanism)
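To make the ReAct entry above concrete, the framework interleaves Thought / Action / Observation steps: the model reasons, names a tool call, the harness executes it and feeds back an observation, and the loop repeats until a final answer appears. A minimal sketch, assuming a generic `ask_model` function and a toy tool registry (both placeholders):

```python
# Minimal ReAct-style agent loop: the model alternates Thought/Action lines,
# the harness executes the action and feeds back an Observation.
# `ask_model` and the tools are placeholders.

TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy only
    "search": lambda q: f"(stub search results for: {q})",
}

SYSTEM = (
    "Answer the question. Use this format:\n"
    "Thought: your reasoning\n"
    "Action: tool_name[input]   (tools: calculator, search)\n"
    "or, when done:\n"
    "Final Answer: ...\n"
)

def react(ask_model, question: str, max_steps: int = 5) -> str:
    transcript = SYSTEM + f"\nQuestion: {question}\n"
    for _ in range(max_steps):
        step = ask_model(transcript)
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "Action:" in step:
            action = step.split("Action:", 1)[1].strip()
            name, _, arg = action.partition("[")
            result = TOOLS.get(name.strip(), lambda x: "unknown tool")(arg.rstrip("]"))
            transcript += f"Observation: {result}\n"
    return "(no answer within step budget)"
```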
Practical Guides
- Building Effective Agents: https://www.anthropic.com/research/building-effective-agents (Anthropic’s practical experience and reflections on building effective agents)
- OpenAI Swarm: https://github.com/openai/swarm (A multi-agent framework released by OpenAI)
6. Code Generation
This section introduces the Stack code dataset, the HumanEval/Codex benchmark, the AlphaCodium paper, and more.
List:
Datasets
- The Stack: https://arxiv.org/abs/2211.15533 (Large-scale, multilingual source code dataset, roughly 3 TB)
Code Generation Models
- DeepSeek-Coder: https://arxiv.org/abs/2401.14196 (The DeepSeek-Coder model paper)
- Code Llama: https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/ (Meta’s open-source family of code generation models)
- Qwen2.5-Coder: https://arxiv.org/abs/2409.12186 (The code generation model from the Qwen 2.5 series)
- AlphaCodium: https://arxiv.org/abs/2401.08500 (CodiumAI’s flow-engineering approach to code generation)
Evaluation Benchmarks
- HumanEval/Codex: https://arxiv.org/abs/2107.03374 (Evaluates a model’s ability to solve basic programming problems; the pass@k estimator it introduced is sketched after this list)
- Aider: https://aider.chat/docs/leaderboards/ (Aider’s compilation of code generation benchmark leaderboards)
- Codeforces: https://arxiv.org/abs/2312.02143 (Used to evaluate models’ competitive programming ability)
- BigCodeBench: https://huggingface.co/spaces/bigcode/bigcodebench-leaderboard (A multi-dimensional code generation evaluation suite from the BigCode project)
- LiveCodeBench: https://livecodebench.github.io/ (Focuses on the correctness and runtime behavior of generated code)
- SciCode: https://buttondown.com/ainews/archive/ainews-to-be-named-5745/ (Evaluates code generation models on scientific computing)
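The HumanEval entry above popularized the pass@k metric: sample n completions per problem, count how many pass the unit tests, and compute an unbiased estimate of the probability that at least one of k samples would pass. A short sketch of that estimator, following the formula in the Codex paper:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate from the Codex paper.
    n = samples generated, c = samples that passed the tests, k <= n."""
    if n - c < k:          # too few failures to fill a k-sample draw: certain pass
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 200 samples per problem, 37 passed the tests.
print(pass_at_k(200, 37, 1))   # ~0.185, the estimated pass@1
print(pass_at_k(200, 37, 10))  # higher, since any of 10 tries may pass
```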
AI Code Review
- CriticGPT: https://criticgpt.org/criticgpt-openai/ (A tool used internally at OpenAI to help human reviewers find defects in code)
7. Visual Models
This section introduces CLIP, Segment Anything Model, and other visual models, as well as the development trends of multimodal large models.
List:
Object Detection
- YOLO (You Only Look Once): https://arxiv.org/abs/1506.02640 (Classic object detection model, known for its speed and accuracy)
- DETRs Beat YOLOs on Object Detection: https://arxiv.org/abs/2304.08069 (The DETR family, a Transformer-based approach to object detection with strong performance)
Visual-Language Pre-training
- CLIP (Contrastive Language-Image Pre-training): https://arxiv.org/abs/2103.00020 (OpenAI’s landmark work linking images and text through contrastive learning; a zero-shot classification sketch follows this list)
- MMVP Benchmark: https://arxiv.org/abs/2401.06209 (Multimodal Visual Patterns benchmark, probing the visual blind spots of multimodal LLMs)
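The CLIP entry above trains an image encoder and a text encoder so that matching pairs have high cosine similarity, which makes zero-shot classification a matter of comparing one image embedding against a set of caption embeddings. A minimal sketch with placeholder encoders; real usage would load actual CLIP weights through a library such as open_clip or transformers.

```python
import numpy as np

# Zero-shot classification in the CLIP style: embed the image and one caption
# per class, then softmax over cosine similarities. `encode_image` and
# `encode_text` are placeholders for a real CLIP model's encoders.

def classify(image, class_names, encode_image, encode_text):
    captions = [f"a photo of a {name}" for name in class_names]
    img = np.asarray(encode_image(image), dtype=float)
    txt = np.array([encode_text(c) for c in captions], dtype=float)

    # Normalize, so cosine similarity reduces to a dot product.
    img = img / np.linalg.norm(img)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = 100.0 * txt @ img          # temperature-scaled similarities
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return dict(zip(class_names, probs))

# classify(my_image, ["cat", "dog", "car"], model.encode_image, model.encode_text)
```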
Image Segmentation
- Segment Anything Model (SAM): https://arxiv.org/abs/2304.02643 (Meta’s image segmentation model, able to segment any object in an image from a prompt)
Multimodal Large Models
- Flamingo: a Visual Language Model for Few-Shot Learning: https://huyenchip.com/2023/10/10/multimodal.html (DeepMind’s few-shot multimodal model, covered here in Chip Huyen’s multimodal overview)
- Chameleon: Mixed-Modal Early-Fusion Foundation Models: https://arxiv.org/abs/2405.09818 (Meta’s multimodal model with an early-fusion design)
- GPT-4V system card: https://cdn.openai.com/papers/GPTV_System_Card.pdf (System card for GPT-4V)
8. Speech Models
From speech recognition to speech synthesis, AI is changing the way we interact with machines. This section introduces Whisper, AudioPaLM, NaturalSpeech, and other speech models, as well as related application cases.
List:
Automatic Speech Recognition (ASR)
- Whisper: https://arxiv.org/abs/2212.04356 (OpenAI’s open-source speech recognition model, supporting many languages; a usage sketch follows this item)
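For reference, the open-source `openai-whisper` package exposes a small transcription API along the lines of the sketch below. The model size and audio file name are placeholders, and exact options may vary between versions, so treat this as an assumption-laden usage sketch rather than authoritative documentation.

```python
# Rough usage sketch for the open-source openai-whisper package
# (pip install openai-whisper). Model size and file path are placeholders.
import whisper

model = whisper.load_model("base")           # tiny / base / small / medium / large
result = model.transcribe("meeting.mp3")     # language is auto-detected
print(result["text"])                        # full transcript as one string

# Per-segment timestamps are also returned:
for seg in result.get("segments", []):
    print(f'{seg["start"]:.1f}s - {seg["end"]:.1f}s: {seg["text"]}')
```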
Text-to-Speech (TTS)
- NaturalSpeech: https://arxiv.org/abs/2205.04421 (Microsoft’s high-quality speech synthesis model)
Speech Large Models
- AudioPaLM: https://audiopalm.github.io/ (Google’s audio-text multimodal large model, able to process and generate both audio and text)
Real-time Speech Technology
- Kyutai Moshi: https://arxiv.org/html/2410.00037v2 (Open-source model supporting low-latency, full-duplex speech-text interaction)
- OpenAI Realtime API: https://platform.openai.com/docs/guides/realtime (OpenAI’s real-time API)
9. Image/Video Models
Stable Diffusion, Sora, and other generative models showcase the immense potential of AI in image and video generation. This section introduces papers related to image and video models, as well as tools like ComfyUI.
List:
Diffusion Models
- Latent Diffusion Models: https://arxiv.org/abs/2112.10752 (The core technology behind Stable Diffusion; a usage sketch follows this list)
- Consistency Models: https://arxiv.org/abs/2303.01469 (Introduces consistency constraints that speed up sampling, drastically reducing the number of sampling steps)
- DiT (Diffusion Transformers): https://arxiv.org/abs/2212.09748 (Core technology behind Sora, applying the Transformer architecture to diffusion models and laying the groundwork for high-quality video generation)
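As a quick illustration of the latent diffusion entry above, Hugging Face’s `diffusers` library wraps the whole text-to-image pipeline. The sketch below follows its commonly documented usage, but the model ID, device, and generation settings are assumptions to adapt to your environment.

```python
# Rough text-to-image sketch with Hugging Face diffusers
# (pip install diffusers torch transformers). Model ID and settings are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # or "cpu" (much slower; use float32 there)

image = pipe(
    "a watercolor painting of a lighthouse at dawn",
    num_inference_steps=30,   # fewer steps = faster, lower quality
    guidance_scale=7.5,       # how strongly to follow the prompt
).images[0]
image.save("lighthouse.png")
```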
Image Generation Models
- DALL-E: https://arxiv.org/abs/2102.12092 (OpenAI’s pioneering work on generating images from text descriptions)
- DALL-E 2: https://arxiv.org/abs/2204.06125 (An upgraded DALL-E with higher-resolution, higher-quality images)
- DALL-E 3: https://cdn.openai.com/papers/dall-e-3.pdf (Further improves image quality and follows text descriptions more faithfully)
- Imagen: https://arxiv.org/abs/2205.11487 (Google’s text-to-image generation model)
- Imagen 2: https://deepmind.google/technologies/imagen-2/ (An upgraded Imagen with richer image editing features)
- Imagen 3: https://arxiv.org/abs/2408.07009 (Google’s latest image generation model)
Video Generation Models
- Sora: https://openai.com/index/sora/ (OpenAI’s text-to-video generation model, now publicly released)
Tools
- ComfyUI: https://github.com/comfyanonymous/ComfyUI (Node-based Stable Diffusion interface, providing a flexible, controllable image and video generation workflow)
10. Model Fine-tuning
How do you customize a model for the needs of a specific domain? This section introduces LoRA/QLoRA, DPO, and other fine-tuning techniques, as well as how to use them to improve model performance.
List:
Parameter-Efficient Fine-Tuning (PEFT)
- LoRA: Low-Rank Adaptation of Large Language Models: https://arxiv.org/abs/2106.09685 (The classic work on parameter-efficient fine-tuning, inserting small trainable low-rank adapters into a frozen model; a minimal sketch follows this list)
- QLoRA: Efficient Finetuning of Quantized LLMs: http://arxiv.org/abs/2305.14314 (Combines LoRA with 4-bit quantization to further reduce the compute needed for fine-tuning)
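To show what the LoRA entry above boils down to, the frozen weight W is augmented with a trainable low-rank update BA, so the layer computes h = Wx + (alpha/r) * B(Ax). A from-scratch sketch in PyTorch, independent of any particular fine-tuning library; the layer sizes and hyperparameters are illustrative only.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B(A x)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pretrained weights
            p.requires_grad = False
        self.lora_A = nn.Linear(base.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)    # start as a no-op update
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))

# Example: wrap one projection of a pretrained model and train only A and B.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # ~2 * 768 * 8 instead of 768 * 768
```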
Preference Alignment Fine-Tuning
- DPO: Direct Preference Optimization: Your Language Model is Secretly a Reward Model: https://arxiv.org/abs/2305.18290 (Aligns an LLM with human preferences by optimizing directly on preference pairs, without training a separate reward model; the loss is sketched after this list)
- ReFT: Representation Finetuning for Language Models: https://arxiv.org/abs/2404.03592 (Aligns models by fine-tuning hidden-layer representations; can complement DPO)
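The DPO entry above replaces the reward model with a direct loss on preference pairs: increase the margin by which the policy prefers the chosen response over the rejected one, relative to a frozen reference model. A minimal sketch of that loss, assuming you already have summed log-probabilities for each full response:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta: float = 0.1):
    """DPO loss over a batch of preference pairs.
    Each argument is the summed log-prob of a full response (shape: [batch])."""
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Maximize the margin between chosen and rejected, scaled by beta.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Toy check: a policy that already prefers the chosen response gets a lower loss.
better = dpo_loss(torch.tensor([-5.0]), torch.tensor([-9.0]),
                  torch.tensor([-6.0]), torch.tensor([-6.0]))
worse = dpo_loss(torch.tensor([-9.0]), torch.tensor([-5.0]),
                 torch.tensor([-6.0]), torch.tensor([-6.0]))
print(better.item() < worse.item())  # True
```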
Data Construction
- Orca 3/AgentInstruct: Agentic Instruction Generation: https://www.microsoft.com/en-us/research/blog/orca-agentinstruct-agentic-flows-can-be-effective-synthetic-data-generators/ (Uses agents to generate instruction data for model fine-tuning)
Reinforcement Learning Fine-Tuning
- RL Fine-tuning for o1: https://www.interconnects.ai/p/openais-reinforcement-finetuning (OpenAI’s recently launched reinforcement fine-tuning)
- Let’s Verify Step By Step: https://arxiv.org/abs/2305.20050 (Improves RLHF-style training through step-by-step verification of reasoning)
Tutorials
- How to fine-tune open LLMs: https://www.philschmid.de/fine-tune-llms-in-2025 (A practical LLM fine-tuning tutorial)
Note: This article is for academic exchange only. If there is any infringement, please contact the editor for deletion.
Source: AI Information Gap
Editor: Zhang Bole
Review: Zhang Min, Wang Yun