Reflections on the New Generation of Intelligent Agents in the Post-LLM Era

The MLNLP community is a well-known machine learning and natural language processing community in China and abroad, with an audience that includes NLP graduate students, university faculty, and industry researchers.
The community’s vision is to promote communication and progress between academia and industry in natural language processing and machine learning, with a particular focus on helping beginners advance.
Reprinted from | Lanzhou Technology

Introduction

At the end of 2022, ChatGPT was officially released, bringing large language models (LLMs) to the fore, and the development of intelligent agents (Agents) entered a fast track, with new agents and agent tool platforms (Agent Builders) emerging continuously and accelerating the adoption of AI. In November 2023, OpenAI launched GPTs, followed by the GPT Store in January 2024, allowing users to create personalized GPTs without coding and establishing an intelligent agent ecosystem around them.

Dr. Zhou Ming, founder and CEO of Lanzhou Technology, believes that “we have already entered the post-LLM era. LLMs will continue to develop, and as the application carrier of LLMs, intelligent agents will undoubtedly show explosive growth, leading the development of AI technology. At the same time, the new generation of intelligent agent tool platforms should have a multi-level intelligent system that remains independent from the underlying LLM, without relying on a single large model or technology supplier. This favors modular development, strong maintainability, and a healthy ecosystem that promotes the prosperous development of intelligent agents.”

Post-LLM Era

In the post-LLM era, intelligent agents are bound to experience explosive development.

Recently, Cristóbal Valenzuela, co-founder and CEO of innovative AI video generation company Runway, shared insights, stating that “the next wave of innovation will not come from companies focused on building more powerful models, as AI models have fully entered the commoditization stage. Just as every company uses the internet, every company will also use AI.”

Overall, the main characteristics of the post-LLM (large language model) era are reflected in several aspects:

  • LLMs, as the foundational models of artificial intelligence, have matured. From the initial debut of the GPT series to today’s diverse and distinctive large models, including a steady stream of open-source models, they have not only steadily improved in language understanding, generation, and reasoning but have also made significant progress in understanding and generating code, images, and videos, laying a solid foundation for higher-level applications and research.

  • The speed of LLM technological innovation has slowed down. Although models continue to evolve, it is difficult to see revolutionary technological leaps like those seen at the outset of LLMs in the short term. Current research and development focus more on refining existing LLMs and optimizing performance.

  • As LLM technology matures, there is a growing emphasis on the practical applications and social value of AI technology, rather than a mere fascination with its technological advancement. In addition, the technological risks and ethical challenges posed by LLMs have increasingly become focal points of concern across society.

Against this backdrop, intelligent agents (Agents) have emerged as the most important technology in the post-LLM era.

Intelligent Agents

Imagine a scenario: when an investor plans to invest in a company and consults an AI large model, the model may provide a series of seemingly constructive suggestions, such as “diversifying investments” or “focusing on growth stocks,” but these suggestions are often hollow and provide no substantial help to the investor.

In contrast, when humans solve knowledge-intensive decision-making problems, they first base their analysis on experience, observe behavior, set large goals, and then gradually decompose them into smaller goals, achieving each small goal through a series of actions, ultimately converging to the original goal. Returning to the investment plan question, human thinking first analyzes the investor’s risk preference, financial situation, investment goals, and other key information, then simulates different market conditions, and finally proposes specific and feasible investment suggestions. This thinking pattern, known as the chain of thought, is key to how humans solve complex problems. Intelligent agents are an attempt to comprehensively mimic the thinking patterns and tool usage behaviors exhibited by humans when facing complex problems, aiming to have intelligent agents replace humans in completing this series of thinking processes.

Thus, intelligent agents are often regarded as the ultimate embodiment of AGI and as the most likely path toward achieving it.

Lanzhou Technology has been deeply involved in innovative large language model technologies for many years, from its initial exploration of lightweight pre-trained models, to in-depth research and development of the Mencius large model, and on to targeted work on industry large models for vertical scenarios and professional domains.

Driven by the relentless market demand, Lanzhou Technology has not only ensured continuous technological innovation but also placed greater emphasis on the practical application and social contribution of large model technologies, investing more resources and efforts in the research and development of intelligent agents.

Analysis of Intelligent Agent Model Architecture

An intelligent agent (Agent) is an entity that can perceive the environment, make decisions, and take actions to achieve specific goals. It can be a software program or a hardware device with certain intelligence. Intelligent agents, with their powerful language understanding and multimodal capabilities, become a bridge connecting humans and the digital world. They can understand human instructions, handle various complex tasks, and respond efficiently and accurately.

Generally speaking, intelligent agents include modules for user interface, task management, memory, knowledge storage, reasoning, learning, and action execution; a minimal code sketch of this modular structure follows the list below.

  • User interface module: Responsible for interacting with users, receiving user input commands and questions, and displaying the output results and feedback of the intelligent agent. This includes but is not limited to various forms such as graphical interfaces and voice interaction interfaces.

  • Task management module: Responsible for analyzing and decomposing user requests, determining the type, priority, and execution process of the task. Assigns tasks to corresponding modules for processing and coordinates the work between modules.

  • Memory module: Used to store the experiences, knowledge, and historical information of the intelligent agent. This module can be divided into short-term memory and long-term memory, with short-term memory used to store information relevant to the current task, while long-term memory is used to store long-term knowledge and experiences. The memory module can assist the intelligent agent in quickly retrieving and utilizing past experiences and knowledge when handling new tasks, improving decision-making efficiency and accuracy, and supporting personalized responses and action generation.

  • Knowledge base module: Stores various knowledge required by the intelligent agent, including domain knowledge, experiential data, and rules, using forms such as databases and knowledge graphs for quick retrieval and querying.

  • Reasoning module: Conducts reasoning and analysis based on user questions and knowledge from the knowledge base module to derive reasonable conclusions and solutions. In current mainstream implementations, this reasoning is carried out by an LLM.

  • Learning module: Learns from user feedback, new data, and experiences, updating and optimizing the knowledge and capabilities of the intelligent agent. This can be done using supervised learning, unsupervised learning, reinforcement learning, etc.

  • Action execution module: Executes specific actions based on the solutions derived from the reasoning module, such as sending instructions to a functional component or external device or calling other intelligent agents. It also provides feedback on the results of actions to the user interface and other modules for further processing and optimization.
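
To make the division of labor among these modules concrete, here is a minimal, hypothetical Python sketch of such a modular agent. The class and method names are illustrative assumptions only and do not correspond to any specific product or framework; the reasoning step is a placeholder for an LLM call.

```python
from dataclasses import dataclass, field


@dataclass
class Memory:
    """Short-term context for the current task plus long-term experience."""
    short_term: list = field(default_factory=list)
    long_term: list = field(default_factory=list)

    def remember(self, item: str, durable: bool = False) -> None:
        (self.long_term if durable else self.short_term).append(item)


class KnowledgeBase:
    """Toy keyword lookup standing in for a database or knowledge graph."""
    def __init__(self, facts: dict[str, str]):
        self.facts = facts

    def lookup(self, query: str) -> list[str]:
        return [v for k, v in self.facts.items() if k in query.lower()]


class Reasoner:
    """Placeholder for the LLM-backed reasoning step."""
    def solve(self, task: str, context: list[str]) -> str:
        # A real implementation would call an LLM here.
        return f"Plan for '{task}' using {len(context)} retrieved facts."


class Agent:
    """Wires the modules together: task intake -> memory/KB -> reasoning -> action."""
    def __init__(self, kb: KnowledgeBase):
        self.memory = Memory()
        self.kb = kb
        self.reasoner = Reasoner()

    def handle(self, user_request: str) -> str:
        self.memory.remember(user_request)                  # memory module
        context = self.kb.lookup(user_request)              # knowledge base module
        plan = self.reasoner.solve(user_request, context)   # reasoning module
        return plan                                         # result fed back to the user interface


if __name__ == "__main__":
    agent = Agent(KnowledgeBase({"investment": "Growth stocks carry higher volatility."}))
    print(agent.handle("Draft an investment plan"))
```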

What Are the Application Scenarios?

Intelligent agents have broad application prospects, mainly including:

  • Intelligent customer service: Intelligent customer service agents can automatically respond to and answer user inquiries, providing comprehensive customer support and services. With the help of intelligent agent tool platforms, developers can quickly build and deploy intelligent customer service agents while continuously optimizing their performance based on user needs and feedback, improving service quality.

  • Automated processes: In business processes, intelligent agents can automatically perform a series of repetitive and regular tasks, such as data entry, document processing, and approval processes. Intelligent agent tool platforms can help developers build efficient automated process agents, enhancing the work efficiency and management level of enterprises.

  • Intelligent recommendation systems: Intelligent recommendation agents can provide personalized recommendation services based on user interests, behaviors, and historical data. The learning module and algorithms in the intelligent agent tool platform can continuously optimize recommendation results, improving user satisfaction and loyalty.

  • Game development: In games, intelligent agents can act as game characters or NPCs, interacting and competing with players. Intelligent agent tool platforms can assist game developers in creating game characters with intelligent behaviors, providing players with a richer, more interesting, and challenging gaming experience.

Lanzhou Technology has launched a series of intelligent agent applications based on the Mencius large model, including intelligent meeting assistants (Lanzhou Smart Meeting), knowledge base Q&A (Lanzhou Smart Library), as well as document understanding, writing assistance, customer service assistance, marketing assistance, search assistance, etc., fully meeting the diverse needs of enterprises.

The Importance of Intelligent Agents

Overcoming the Limitations of Large Models to Achieve End-to-End Task Execution

Large models often fall short when dealing with specialized domain problems. Because they are trained primarily on publicly available datasets and lack domain-specific or proprietary non-public information, their responses may be vague, impractical, or formulaic, lacking precision or comprehensiveness, and they struggle to update information in real time to stay in sync with the latest data.

In contrast, intelligent agents can obtain the latest information in specific domains through interactions with external systems and data sources, effectively compensating for this shortcoming of large models. For example, in the financial sector, intelligent agents can capture market data and financial news in real time to provide users with more accurate investment advice.
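
As a hedged illustration of this point, the sketch below shows how an agent might wrap an external data source as a tool that the reasoning step can call before answering. `fetch_market_snapshot` is a hypothetical stand-in, not a real market-data API.

```python
from datetime import datetime, timezone


def fetch_market_snapshot(ticker: str) -> dict:
    """Hypothetical tool: in practice this would call a real market-data API."""
    return {"ticker": ticker, "price": 101.2,
            "as_of": datetime.now(timezone.utc).isoformat()}


def answer_with_tools(question: str) -> str:
    """Decide whether fresh data is needed, call the tool, then ground the answer in it."""
    if "invest" in question.lower():
        snapshot = fetch_market_snapshot("ACME")
        return (f"As of {snapshot['as_of']}, ACME trades at {snapshot['price']}; "
                f"any advice should be conditioned on this fresh data.")
    return "No external data required; answering from the base model alone."


print(answer_with_tools("Should I invest in ACME now?"))
```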

Intelligent agents also possess excellent memory and processing capabilities. By constructing memory modules or interacting with external storage systems, they can better understand and manage long-term, multi-turn interaction tasks, thereby providing more coherent and user-centric services. Intelligent customer service agents are a typical example, as they can remember users’ historical inquiry records and preferences, offering more thoughtful services.

Additionally, intelligent agents can provide highly personalized services based on users’ personal preferences and behavioral habits. Whether it is intelligent assistants, intelligent recommendation systems, or intelligent customer service, intelligent agents can accurately meet users’ personalized needs, enhancing user experience and satisfaction.

Accelerating Process Standardization and Significantly Improving Productivity

The effectiveness of large models largely depends on the prompts provided by users, which must cover elements such as problem definition, role setting, task description, execution steps, examples, and output formats. To reflect professionalism and personalization, the AI must also be supplied with business data and the user’s data of interest, and multiple rounds of adjustment are usually needed to meet user expectations. This process is relatively complex and usually requires operators to have a certain level of proficiency and expertise.

To address this issue, intelligent agents ingeniously encapsulate complex technical operations, prompts, and related data elements for specific tasks, allowing ordinary users to efficiently complete tasks through intuitive structured interfaces or natural language commands.

This innovation not only significantly reduces usage difficulty but also greatly enhances the standardization of processes, effectively promoting a leap in productivity.
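
A hedged sketch of what this encapsulation might look like in code: the template below bundles role setting, task description, steps, and output format, so the end user supplies only their own facts. The template wording and function names are illustrative assumptions, not any product’s actual prompts.

```python
INVESTMENT_BRIEF_TEMPLATE = """\
Role: You are a cautious financial analyst.
Task: Produce an investment brief for the client described below.
Steps: 1) summarise the client's risk profile; 2) simulate bull/bear scenarios; 3) give concrete allocations.
Output format: a numbered list, at most 200 words.

Client data:
{client_data}
"""


def build_prompt(client_data: str) -> str:
    """The user supplies only their own data; everything else is pre-packaged by the agent."""
    return INVESTMENT_BRIEF_TEMPLATE.format(client_data=client_data)


print(build_prompt("Age 35, moderate risk tolerance, 10-year horizon."))
```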

Integration and Innovation to Promote the Prosperity of the AI Ecosystem

The progress of intelligent agents relies on powerful multimodal information fusion and processing capabilities, which will accelerate the intersection and innovation across different technological fields. The construction of intelligent agents integrates multiple technologies, including language understanding, logical reasoning, memory, learning capabilities, and specialized knowledge bases, marking a high level of integration in terms of technology. Therefore, the continuous development of intelligent agents will provide feedback to large language models (LLMs) and their technologies, continually driving the advancement of artificial intelligence.

To enhance the efficiency and quality of intelligent agent construction, intelligent agent tool platforms should rely on general or specialized LLMs as base models, providing developers with strong technical support by invoking diverse functional components and other intelligent agents. This strategy not only accelerates the development process of intelligent agents but also promotes the synchronous development of related technologies, jointly advancing the prosperity of the AI ecosystem.

Promoting Infrastructure Construction and Improving the Cloud Computing Industry Chain

The vigorous development of intelligent agents will significantly drive the enormous demand for cloud computing resources, directly catalyzing the construction and upgrading of cloud computing infrastructure. Given that intelligent agents rely on powerful computing capabilities and massive data storage during training and operation, cloud computing, with its flexible and scalable computing resources and storage services, becomes the ideal platform to meet these needs.

At the same time, the widespread application of intelligent agents will attract more enterprises and developers into the cloud computing ecosystem, promoting the improvement and development of the cloud computing industry chain and fostering a virtuous, mutually beneficial, and continuously evolving cycle across the entire industry ecosystem.

Figure 1: The Ecosystem of AI Agents

Intelligent Agent Tool Platform

For users without a professional background in LLMs, building an intelligent agent based on LLMs is not an easy task. This requires them to not only understand the working principles, capabilities, and limitations of LLMs but also to distinguish the differences in performance, accuracy, and applicability among different LLMs to make precise selections among many options and accurately interpret the behavioral logic of intelligent agents based on the selected models.

Additionally, constructing an effective intelligent agent based on LLMs typically requires a large amount of high-quality data. Non-technical personnel may lack effective means to collect data relevant to intelligent agent tasks and ensure the accuracy, completeness, and representativeness of the data, further increasing the difficulty of construction.

To address this challenge, intelligent agent tool platforms have emerged. These platforms integrate various key modules required for intelligent agent construction, such as data processing, model invocation, and communication, encapsulating these complex technical details so that users do not need to delve deeply into them. Through intelligent agent tool platforms, non-technical personnel can simply configure and invoke functions to achieve data collection and preprocessing, greatly reducing the technical difficulty of building intelligent agents.
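
To illustrate the level of abstraction such platforms aim for (not the actual interface of Lanzhou Smart Build or any other product), an agent definition might reduce to a declarative configuration like the hypothetical one below, which the platform then validates and wires into working modules.

```python
# Hypothetical declarative agent definition a tool platform might accept.
agent_config = {
    "name": "customer_support_agent",
    "base_model": "general-purpose-llm",        # swappable; no single-vendor lock-in
    "knowledge_sources": ["faq.md", "product_manual.pdf"],
    "tools": ["ticket_lookup", "order_status"],
    "memory": {"short_term_turns": 20, "long_term_store": "vector_db"},
    "output_style": "concise, polite, cites sources",
}


def validate_config(cfg: dict) -> list[str]:
    """The platform would validate this config before building the agent from it."""
    required = {"name", "base_model", "knowledge_sources", "tools"}
    return [f"missing field: {key}" for key in sorted(required - cfg.keys())]


print(validate_config(agent_config) or "config looks complete")
```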

To facilitate the rapid construction of intelligent agents, Lanzhou Technology has launched a powerful intelligent agent tool platform – Lanzhou Smart Build, which, in conjunction with Lanzhou Technology’s Mencius large model technology, can build and deploy intelligent agents across various industries, providing comprehensive services for enterprises.

Figure 2: General Intelligent Agent Framework

In a recent interview, Mustafa Suleyman, Microsoft’s AI head, stated that “in the coming years, there will be a trend of both large and small models advancing together. On one hand, the competition for large models will continue and integrate more modalities of data, such as video and images. On the other hand, the technology of using large models to train small models (such as distillation) is on the rise, and efficient small models will play a significant role in specific scenarios. Knowledge will be condensed into smaller, cheaper models embedded in various devices for environmental awareness.”

Issues with the Unified LLM Strategy

In the future, models of different uses and sizes will coexist. The design of intelligent agent tool platforms needs to fully explore the value of different models.

Currently, intelligent agent tool platforms primarily rely on the selected base large language model (LLM) to provide the necessary language understanding, reasoning, knowledge base, and self-learning capabilities for intelligent agents. However, this unified LLM strategy faces a series of issues:

  • Differences in computational requirements: Different intelligent tasks, such as natural language understanding, image recognition, and logical reasoning, each have unique computational demands. Attempting to implement all these functions in a single model may exceed the current capabilities of computing hardware, leading to extremely slow training and inference, and may even render the model impractical for real-world applications. For example, training a single super-large model may require thousands or even tens of thousands of high-performance GPUs, which is not only costly but also poses significant challenges in terms of energy consumption and hardware maintenance.

  • Diverse data requirements: Different intelligent tasks typically require different types and domains of training data. Integrating all intelligent capabilities into a single model means needing to collect and organize massive amounts of diverse data, which is undoubtedly an extremely daunting task. For instance, natural language processing tasks require a large amount of text data, while image recognition tasks need a large amount of image data, and the step-by-step annotations required for reasoning are even more time-consuming. Collecting sufficient data covering all task types for an integrated model is extremely challenging, and ensuring the quality and diversity of the data is also difficult.

  • Increased model complexity: An integrated model that encompasses all intelligent capabilities will become extremely complex, with a very large structure and parameter space, making the design, training, and optimization of the model extremely difficult. Complex models are more prone to overfitting and to vanishing or exploding gradients, and debugging and improving them is also more challenging. When new intelligent tasks or demands arise, a large integrated model may struggle to adjust and adapt quickly, whereas models specifically designed for new tasks can be adjusted and optimized more flexibly.

  • Risk of technological monopoly: The powerful capabilities of a unified model may create a monopolistic situation, where other companies and developers lack sufficient competitive space and motivation to explore new technologies, algorithms, and application directions.

  • Task specificity: Different intelligent tasks often have their specific requirements and best practices. Models designed specifically for a task typically perform better in adapting to the characteristics of that task, achieving superior performance.

Moreover, in critical fields such as healthcare and finance, the interpretability of models is very important. Lack of interpretability may lead to reduced user trust in the model and make it difficult to troubleshoot and improve when issues arise.
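
One way to sidestep these single-model issues, sketched below under assumed model names, is a lightweight router that dispatches each task to a specialized model rather than forcing one unified model to handle everything. The registry entries are placeholders, not real model identifiers.

```python
# Hypothetical registry of specialized models; every name is a placeholder.
MODEL_REGISTRY = {
    "dialogue": "small-chat-model",
    "code": "code-specialist-model",
    "finance_reasoning": "domain-finance-model",
    "vision": "image-understanding-model",
}


def route(task_type: str) -> str:
    """Pick the specialist for the task type; fall back to a general model otherwise."""
    return MODEL_REGISTRY.get(task_type, "general-fallback-model")


for task in ("dialogue", "finance_reasoning", "poetry"):
    print(task, "->", route(task))
```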

Dividing Intelligent Agent Capabilities by Intelligence Levels

Based on the issues and discussions above, Dr. Zhou Ming, founder and CEO of Lanzhou Technology, believes that the capabilities of intelligent agents should be divided into several levels according to their degree of intelligence (rather than by function). This design concept aims to enhance the efficiency, flexibility, and scalability of intelligent agents, enabling them to better adapt to various complex tasks and environments.

L1: Basic Interaction Capabilities

This level mainly encompasses the general dialogue, commonsense, and general knowledge capabilities already possessed by large language models (LLMs), primarily manifested in understanding user questions and generating responses. With the rapid development of large language models, L1 capabilities are expanding from text interaction to multimodal interactions involving voice, images, and videos. This capability serves as the foundation for interaction between intelligent agents and users and is also a prerequisite for achieving other higher-level functions.

L2: Reasoning and Explanation Capabilities

Building on L1, L2 focuses on the reasoning mechanisms of intelligent agents, i.e., the methods and processes by which they handle information and make decisions. Reasoning can be based on rule systems or on large-model mechanisms, working step by step toward optimal or near-optimal solutions. At present, the reasoning supported by L1 alone is relatively simple commonsense reasoning and lacks the ability to handle specialized or complex problems. The goal of L2 is to establish a general solving mechanism for specialized or complex problems and to explain the solving process, while remaining relatively independent of L1.

L3: Expert-Level Problem Solving Capabilities

Based on L1 and L2, L3 provides expert-level problem-solving capabilities by introducing domain knowledge or expert knowledge, which can be understood as a new generation of expert systems. By leveraging retrieval-augmented generation (RAG) mechanisms, L3 can integrate domain-relevant knowledge, including text, images, video data, and structured knowledge graphs. Combined with L1’s language understanding and interaction capabilities, as well as L2’s reasoning capabilities, L3 can provide interactive answers and solutions to specialized domain problems.

L4: Self-Learning Capabilities

L4 emphasizes the self-learning capabilities of intelligent agents, i.e., the ability to continuously improve their behaviors and performance through interaction with the environment and accumulation of their experiences. Intelligent agents can explore all possible solutions by remembering interaction information with users and acquiring environmental information, and utilizing reinforcement learning mechanisms to choose actions that may yield high returns based on user choices and environmental feedback. Users can also continuously introduce new data, allowing intelligent agents to update their knowledge bases and improve their ability to solve new problems and provide personalized services to users.

L5: World Models and Embodied Intelligence

Intelligent agents at the L5 level possess world models: they can model and reason about objects, locations, interactions, and events in space, understanding the three-dimensional physical world. They can receive various kinds of information in real time, such as sound, video, sensor data, and text, and carry out command-and-action loops with the physical world and with robots through human-machine interfaces. For example, a robot can autonomously generate action sequences based on user commands or environmental perception, which is also a capability required for embodied intelligence. While world models are crucial for artificial intelligence, current world models are still limited to specific task scenarios, such as the built-in maps of floor-cleaning robots (which users can update at any time), the activity ranges and action planning of greeter robots in shopping malls, hotels, and banks, and the complex data used by autonomous vehicles, such as road data, traffic data, and driving decision mechanisms.
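
To make the layering concrete, the sketch below composes hypothetical L1-L4 components behind a single agent facade (L5 is omitted since it requires physical interfaces); each layer can be developed and swapped independently, which is the point of the design. All class names and behaviors are illustrative assumptions, with simple stand-ins where an LLM, a retriever, or a learning loop would sit.

```python
class L1Interaction:
    def understand(self, text: str) -> str:
        return text.strip().lower()            # stands in for LLM language understanding


class L2Reasoner:
    def plan(self, goal: str) -> list[str]:
        return [f"analyse {goal}", f"solve {goal}", "explain the steps"]


class L3DomainExpert:
    def __init__(self, documents: list[str]):
        self.documents = documents             # domain knowledge, e.g. for RAG-style retrieval

    def retrieve(self, goal: str) -> list[str]:
        return [d for d in self.documents if any(w in d.lower() for w in goal.split())]


class L4Learner:
    def __init__(self):
        self.feedback_log: list[tuple[str, int]] = []

    def record(self, goal: str, score: int) -> None:
        self.feedback_log.append((goal, score))   # later used to update strategies


class LayeredAgent:
    def __init__(self, docs: list[str]):
        self.l1, self.l2 = L1Interaction(), L2Reasoner()
        self.l3, self.l4 = L3DomainExpert(docs), L4Learner()

    def run(self, request: str) -> dict:
        goal = self.l1.understand(request)
        return {"plan": self.l2.plan(goal), "evidence": self.l3.retrieve(goal)}


agent = LayeredAgent(["Bond yields rose last quarter.", "Cleaning robot map v2."])
print(agent.run("Assess bond risk"))
agent.l4.record("assess bond risk", score=1)
```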

Self-Play and Slow Thinking

It is particularly important to note that the above design incorporates self-play and slow thinking capabilities, which have garnered attention with OpenAI’s o1, into levels L2, L3, and L4. These capabilities are of significant importance and value for intelligent agents. Especially in scenarios requiring deep reasoning, such as AI education, medical diagnosis, deep customer service, traffic planning, AI4S, etc., they can significantly enhance user experience, but they also bring new challenges.

In intelligent agents, self-play refers to the interaction and competition of the agent with itself to enhance the accuracy of results. Through extensive self-play, intelligent agents can accumulate rich experiences and explore different strategies and action plans. In this process, intelligent agents continuously optimize their decision models, improving their performance on specific tasks. This is particularly important for complex tasks and fields where providing sufficient guidance and feedback may be challenging for humans. Intelligent agents can autonomously explore the problem space through self-play, continuously improving their performance.
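
A minimal, hedged sketch of the self-play idea: the agent generates several candidate answers and then plays the role of its own critic, keeping the candidate that scores best. The proposal and scoring functions here are toy placeholders for sampling from and judging with the model itself.

```python
import random

random.seed(0)


def propose(question: str, n: int = 4) -> list[str]:
    """Stand-in for sampling several candidate solutions from the same model."""
    return [f"candidate {i}: answer to '{question}' with detail level {random.randint(1, 5)}"
            for i in range(n)]


def critique(candidate: str) -> int:
    """Toy self-critic: prefer more detailed candidates. A real critic would be the model itself."""
    return int(candidate.rsplit(" ", 1)[-1])


def self_play(question: str) -> str:
    candidates = propose(question)
    return max(candidates, key=critique)       # the agent 'competes with itself'


print(self_play("How should the portfolio be rebalanced?"))
```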

On the other hand, “slow thinking” refers to the process in which intelligent agents decompose complex instructions into multiple subtasks when processing them, calling different tools to complete the tasks and reflecting on and confirming the results. This process embodies a “system 2” approach similar to human thinking, which is step-by-step, interpretable, and leads to more accurate results. It allows large models to have an understanding, planning, and reflection iterative process, enabling them to continuously learn and evolve in the environment while completing complex tasks.
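
The following sketch, again built from placeholder functions rather than any real product’s API, shows the slow-thinking loop just described: decompose the instruction into subtasks, execute each with a tool, then reflect and retry a step whose result fails the check.

```python
def decompose(instruction: str) -> list[str]:
    """Stand-in for LLM-based task decomposition."""
    return [f"{instruction}: gather data", f"{instruction}: analyse", f"{instruction}: summarise"]


def execute(subtask: str, attempt: int) -> str:
    """Stand-in for calling a tool; pretend the first attempt at analysis is too shallow."""
    if "analyse" in subtask and attempt == 0:
        return "shallow result"
    return f"solid result for {subtask}"


def reflect(result: str) -> bool:
    """Stand-in for a self-check; a real agent would ask the model to verify the result."""
    return "solid" in result


def slow_think(instruction: str, max_retries: int = 2) -> list[str]:
    results = []
    for subtask in decompose(instruction):
        for attempt in range(max_retries + 1):
            result = execute(subtask, attempt)
            if reflect(result):                # keep the result only once it passes the check
                results.append(result)
                break
    return results


print(slow_think("prepare the market report"))
```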

However, self-play and slow thinking do not guarantee that the decision-making processes and results of intelligent agents are entirely correct, as they may involve hallucinated or ambiguous processes and outcomes. Operators therefore need to identify errors, verify results, and cross-validate, which introduces a certain degree of uncertainty and bias, necessitating further research and improvement to continuously optimize performance. In terms of user experience, these capabilities may also lead to slower responses, longer wait times, and cluttered interfaces. In particular, the extra time spent on reasoning increases the demand for inference compute, adding new pressure on deployment costs.

Advantages of Dividing Intelligent Agent Capabilities by Levels

So, what are the advantages of implementing the different capabilities of intelligent agents with separate models, rather than packing all capabilities into one large model?

First, this division allows for targeted training and optimization of specific levels of capabilities. This not only reduces the model size and lowers the consumption of computational resources but also significantly enhances the speed of model iteration and optimization.

Second, it gives the models a high degree of neutrality and flexibility: because models at different levels can be independently developed and maintained, they can be integrated and combined as needed, without depending on any single model or vendor. This approach allows the functionalities of intelligent agents to be flexibly configured according to specific application needs, enhancing the system’s scalability and adaptability.

In summary, assigning the different capabilities of intelligent agents to independent models not only achieves precise positioning and compactness of the models but also ensures their high neutrality, flexibility, and maintainability. This approach can better meet the needs of various application scenarios, improving the performance and reliability of intelligent agents.

The challenges faced by this design mainly lie in the adaptability issues between intelligent agents at different levels. For example, if a certain large language model (LLM) is chosen as the base, it primarily supports the L1 level while partially covering levels L2 to L5. In this case, the intelligent agent will need to further supplement capabilities from L2 to L5. Although the continuous development of LLMs will enhance their general support for levels L2 to L5, there may still be insufficient support for specific professional tasks. Therefore, it is necessary to improve the compatibility of capabilities from L2 to L5 with L1 and supplement support for that professional task to achieve more efficient and precise intelligent applications.

Conclusion

With the continuous maturation of large language models (LLMs), we have already entered the post-LLM era. Intelligent agents have successfully bridged the “last mile” of LLM implementation, significantly enhancing productivity and becoming a strong driver of technological advancement in artificial intelligence. It is foreseeable that in the future, there will be corresponding intelligent agents providing services for any common task. To assist in the development of intelligent agents, intelligent agent tool platforms have encapsulated various LLMs for different purposes, task components, and business processes, allowing users to quickly build intelligent agents through drag-and-drop operations and natural language commands.

Thus, a hierarchical intelligent agent tool platform is proposed that assigns the different capabilities of intelligent agents to independent models. This framework offers numerous advantages, including strong model specificity, smaller scale, neutrality, flexibility, and ease of maintenance, and it can better meet the needs of various application scenarios, thereby enhancing the performance and reliability of intelligent agents.

Dr. Zhou Ming believes that future attention should be focused on the following trends in the development of intelligent agents:

  1. Further deepening of multimodal integration.

  2. Continuous enhancement of reasoning, self-learning, and self-evolution capabilities.

  3. Continuous improvement of interpretability and reliability.

  4. Deep integration with various industries.

  5. Personal intelligent assistants that can run locally on mobile phones.

  6. Integrated development of intelligent agents and intelligent robots.

  7. Collaboration and interaction among multiple intelligent agents.

  8. More convenient intelligent agent tool platforms.

Lanzhou Technology continues to deepen its technological innovation, successfully launching the Mencius GPT series large models and deriving a series of intelligent application products based on them: Lanzhou Smart Meeting – intelligent meeting assistant, Lanzhou Smart Library – enterprise intelligent knowledge base platform, and intelligent agent applications covering document understanding, writing assistance, customer support, marketing optimization, search enhancement, and more. To further accelerate the development and deployment of intelligent agents, Lanzhou Technology has launched Lanzhou Smart Build – a new generation intelligent agent tool platform. With its deep accumulation in both general and specialized large models, Lanzhou Technology has fully mastered the comprehensive technical capabilities from pre-training, SFT, alignment, reasoning optimization, to domestic GPU adaptation, as well as cross-industry intelligent agent construction and deployment, providing enterprises with one-stop, comprehensive intelligent transformation services.

Lanzhou Technology’s new-generation intelligent agent tool platform, Lanzhou Smart Build, will soon be officially launched to the public. Stay tuned!

Technical Exchange Group Invitation

Scan the QR code to add the assistant on WeChat

Please note: Name – School/Company – Research Direction
(For example: Xiao Zhang – Harbin Institute of Technology – Dialogue System)
to apply to join the Natural Language Processing/Pytorch technical exchange group.

About Us

The MLNLP community is a grassroots academic community jointly established by machine learning and natural language processing scholars at home and abroad. It has since developed into a well-known machine learning and natural language processing community, aiming to promote progress among academia, industry, and enthusiasts of machine learning and natural language processing.
The community provides an open communication platform for the further education, employment, and research of practitioners in the field. Everyone is welcome to follow and join us.
