In a market environment that prizes low cost and high efficiency, using AI Agents to help enterprises with digital transformation, business process automation, and intelligent management decision-making has become one of the most closely watched application scenarios for large models. However, many challenges remain before truly intelligent AI Agents can be put into practice.
This presentation mainly focuses on the application scenarios and practical implementation of AI Agents within enterprises.
Speaker|Li Zhe, Partner and Chief Analyst at Aifengxian
This article is a summary of the talk. For the complete video recording and the speaker's presentation materials, please scan the QR code.

01
Implementation Methods of AI Agents in Enterprises
First, here is a panoramic map of the large model market, covering the infrastructure layer, model layer, middle layer, and application layer. The purpose of this map is to make one point clear: although AI Agents only began to be widely discussed around May or June of this year, and many practical application scenarios and cases have since emerged abroad around this trend, enterprise users, vendors, and academia still define AI Agents differently.
In terms of the specific implementation plans for enterprise users to deploy AI Agents, we can roughly categorize them into two types.
The first type is closely tied to building overall large model capabilities. Many enterprise users believe that large models can be applied to multiple scenarios, so if each scenario is built in isolation, the repeated construction can waste considerable resources. They therefore tend to approach large model adoption from the perspective of a middle platform or shared capability layer. For example, in a recent in-depth discussion, a joint-stock bank stated explicitly that it expects to use large models in six or seven specific scenarios next year, so it must plan for comprehensive large model capabilities.
Similarly, a leading restaurant enterprise told us it had already begun model training and planning, intending to apply large model capabilities across dozens of scenarios. Enterprises of this kind also need to think about building overall capabilities. In this context, the Agent serves as a key tool in the middle layer, because its application scenarios span multiple links such as Q&A, data analysis, and process development. For them, the problem cannot be solved point by point; it requires systematic thinking, so they usually treat the Agent as a middle layer within their overall large model capability.
The second type refers to specific application scenarios for Agents, such as Q&A, operations and maintenance management, customer service, and digital employees, as well as combining Agents with RPA for process automation or using them as recruitment, human resources, or financial assistants.
These two types represent different directions enterprise users take when deploying large models; the crucial distinction is whether AI Agents are treated as capability building or as the construction of a specific application scenario.
Building an Agent Middle Platform and Enhancing Large Model Capabilities
First, let's discuss how capability building is assessed. Based on deployment cases we have observed in financial scenarios and on conversations with different enterprises, we see the future direction as building a middle platform around AI Agents, or an overall architecture with systematic capabilities.
The Agent architecture centers on four critical elements: long- and short-term memory, the configuration of relevant tools, planning of the overall execution path, and final execution. These are the core components of the AI Agent architecture. Viewed as a whole, the underlying capability must rest on the support of large models. Here we do not strictly distinguish between general-purpose, commercial, or specially trained large models; they are all underlying model capabilities, and for Agents one must also consider how different models are routed and invoked.
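To make the four elements and model routing concrete, here is a minimal sketch of an Agent loop. It is not any vendor's implementation; the class names, the fixed two-step plan, and the lambda stand-ins for models and tools are all illustrative assumptions.

```python
# Minimal sketch of the four-part Agent loop described above
# (memory, tools, planning, execution) plus model routing.
# All names here are illustrative assumptions, not a specific vendor's API.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Agent:
    models: Dict[str, Callable[[str], str]]           # routing table: task type -> model
    tools: Dict[str, Callable[[str], str]]            # callable capability components
    memory: List[str] = field(default_factory=list)   # stand-in for long/short-term memory

    def route_model(self, task_type: str) -> Callable[[str], str]:
        # Route to a specialized model if one is registered, else fall back to a general one.
        return self.models.get(task_type, self.models["general"])

    def plan(self, goal: str) -> List[str]:
        # Planning: in practice a model would break the goal into tool invocations.
        # Here we return a fixed plan purely for illustration.
        return ["retrieve", "generate"]

    def run(self, goal: str, task_type: str = "general") -> str:
        model = self.route_model(task_type)
        result = goal
        for step in self.plan(goal):                   # execution of the planned steps
            result = self.tools[step](result)
            self.memory.append(f"{step}: {result}")    # record intermediate results
        return model(result)                           # final synthesis by the routed model


if __name__ == "__main__":
    agent = Agent(
        models={"general": lambda prompt: f"[model answer for] {prompt}"},
        tools={
            "retrieve": lambda q: f"retrieved context for '{q}'",
            "generate": lambda ctx: f"draft based on ({ctx})",
        },
    )
    print(agent.run("summarize this quarter's churn drivers"))
```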
At the capability component level, AI Agents involve a wide range of tools, including general-purpose components such as multimodal retrieval and content generation, as well as Text-to-SQL, Text-to-Chart, and Text-to-BI for data analysis; these are the common capability components we have observed. The actual variety is large; by our count, Hugging Face alone mentions twenty to thirty such capability components.
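As one example of such a capability component, below is a hedged sketch of Text-to-SQL. The `llm()` function is a placeholder assumption standing in for a large-model call that returns a SQL string; the schema and the canned query exist only so the example runs end to end.

```python
# Hedged sketch of one common capability component: Text-to-SQL.
# The llm() call is a placeholder assumption; a real system would invoke
# a large model and validate the generated SQL before executing it.

import sqlite3


def llm(prompt: str) -> str:
    # Placeholder: returns a canned query that matches the demo schema below.
    return "SELECT region, SUM(amount) AS total FROM sales GROUP BY region;"


def text_to_sql(question: str, conn: sqlite3.Connection):
    schema = "sales(region TEXT, amount REAL)"
    sql = llm(f"Schema: {schema}\nQuestion: {question}\nSQL:")
    return conn.execute(sql).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("North", 120.0), ("South", 80.0), ("North", 30.0)])
    print(text_to_sql("What are total sales by region?", conn))
```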
At the memory component level, the core technology currently relies mainly on vector databases and real-time databases to provide Agents with specific memory functions.
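To illustrate what the memory component does, here is a toy in-process "vector store" that gives an Agent recall over past interactions. The `embed()` function is a crude placeholder assumption; production systems would use a real embedding model and a vector or real-time database as described above.

```python
# Toy sketch of the memory component: an in-process vector store.
# embed() is a placeholder; real systems use embedding models and vector databases.

import math
from typing import List, Tuple


def embed(text: str) -> List[float]:
    # Placeholder embedding: normalized character-frequency vector.
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    counts = [text.lower().count(ch) for ch in alphabet]
    norm = math.sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]


class VectorMemory:
    def __init__(self):
        self.items: List[Tuple[List[float], str]] = []

    def add(self, text: str) -> None:
        self.items.append((embed(text), text))

    def recall(self, query: str, k: int = 2) -> List[str]:
        # Return the k stored entries most similar to the query (dot product).
        q = embed(query)
        scored = [(sum(a * b for a, b in zip(q, v)), t) for v, t in self.items]
        return [t for _, t in sorted(scored, reverse=True)[:k]]


if __name__ == "__main__":
    memory = VectorMemory()
    memory.add("customer asked about refund policy")
    memory.add("ops ticket: database latency spike on Friday")
    print(memory.recall("what did the customer ask about refunds?"))
```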
From a technical perspective, AI Agents borrow from the design of traditional RPA robots: from the design of individual units to overall execution, the realization of each execution link, and interaction with the user side, the ideas resemble the original RPA concept, which likewise covers development, operation, and integrated management. Together, these constitute what we call the central capability of AI Agents. This also bears on how the calling relationship between the Agent middle platform and the upper-layer application scenarios is realized in different business contexts.
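A rough sketch of that calling relationship is shown below: upper-layer application scenarios invoke the middle platform through a single registry rather than each building its own Agent. The `AgentPlatform` class and the scenario names are illustrative assumptions, not a described product.

```python
# Rough sketch of an Agent middle platform that upper-layer scenarios call into.
# Class name and scenario names are illustrative assumptions.

from typing import Callable, Dict


class AgentPlatform:
    """Middle-platform facade that routes scenario requests to registered agents."""

    def __init__(self):
        self._agents: Dict[str, Callable[[str], str]] = {}

    def register(self, scenario: str, agent: Callable[[str], str]) -> None:
        self._agents[scenario] = agent

    def call(self, scenario: str, request: str) -> str:
        if scenario not in self._agents:
            raise KeyError(f"no agent registered for scenario '{scenario}'")
        return self._agents[scenario](request)


if __name__ == "__main__":
    platform = AgentPlatform()
    platform.register("qa", lambda q: f"answer to: {q}")
    platform.register("data_analysis", lambda q: f"chart spec for: {q}")

    # Different business applications reuse the same platform.
    print(platform.call("qa", "What is our leave policy?"))
    print(platform.call("data_analysis", "monthly revenue by region"))
```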
The above outlines the overall architecture and core capabilities built around AI Agents, how to construct an AI Agent middle platform, and the various ways to implement it in practice. Of course, many enterprises do not need to build such a platform end to end. On one hand, the resource investment is significant, including underlying computing power and related product tools; on the other hand, it requires substantial human investment: a team with deep NLP experience that has worked on deep learning projects is needed to build and refine such a system well.
Therefore, we do not recommend that most enterprises undertake this kind of construction directly; instead, we suggest focusing on practical applications.
Future Application Scenario Planning for AI Agents
Next, let's discuss the design of specific application scenarios. Facing a particular scenario, how should AI technology be put into practice? At this stage we roughly divide scenario value into four categories: experience-oriented, cost-reducing, revenue-increasing, and transformative.
The transformative category means generating new business models and new revenue streams; the revenue-increasing category means growing existing revenue; the experience-oriented and cost-reducing categories are self-explanatory. It is worth emphasizing that although cost reduction, revenue growth, and transformation represent greater value, at the current stage many companies, such as banks, insurers, and securities firms in the financial industry, as well as consumer enterprises (whether brand owners or retailers), will give experience-oriented value considerable weight as long as they face C-end users and run online business.
One significant benefit of experience-oriented scenarios is that they collect more user interaction data. In other words, beyond improving user experience, their most important value is generating a large volume of interaction data. Many companies struggle to collect individual user data, especially those that lag in data construction and have little accumulated historical data; experience-oriented scenarios give them a way to obtain the user interaction data they need.
In terms of timing, we mainly consider the maturity of the technology and of the application endpoints. Take memory, one of the Agent's four capabilities, as an example: beyond implementing it with online databases, many domestic large models still lag GPT-4 significantly, and certain multimodal retrieval capabilities are also lacking. In addition, planning and decomposing an entire task places high demands on the reasoning capability of large models.
As a result, fields such as sales empowerment and real-time forecasting involve substantial reasoning and task decomposition, and although large models are already widely applied, the level of realization still needs to improve. On one hand, the technology must keep advancing; on the other hand, and more importantly, while Agents are a relatively good practical way to unlock the value of large models, they demand considerably more design resources at the level of product and solution planning, which is a real constraint.
In the areas marked in the image above, the functions that are currently practical and frequently invoked are concentrated mainly in the office assistant domain, particularly assistants for administrative office scenarios. Next is knowledge base Q&A, which performs especially well as knowledge base Q&A or knowledge assistants in the development and operations work of IT departments.
When expanded to the company level, however, some applications perform relatively poorly. The key reasons are factors such as the quality of the company's data, whether labels have been built systematically, and whether a standardized knowledge base exists; these are the limiting factors, not the model itself. These are areas we expect to have significant development potential. Further up, operations management, whether for IT systems or for manufacturing industries with large numbers of connected devices, is currently seen as an important direction. Beyond that, areas such as customer service, in both inbound reception and outbound call scenarios, are expected to bring profit directly to enterprises.
We also believe data analysis is extremely valuable. In the short term it meets the dashboard and self-service analysis needs of management and business departments; in the long term it can greatly improve operational efficiency. If an enterprise has clear, unified indicators and every team works closely around them, overall management efficiency improves significantly, because the decomposition of goals and objectives from top to bottom becomes clearer and more explicit.
The greater value of data analysis, then, is that it promotes a more complete data culture within the company. In this process it increases the amount of data that management can rely on when making decisions; conversely, when management sets a strategy or direction, it can be decomposed and implemented through indicators. The value of data analysis is therefore much higher than we previously estimated.
Going further, sales empowerment, whether in retail scenarios or in B-end scenarios, will undoubtedly be an important part of future revenue growth. For example, with the help of Agents, entry-level staff with ordinary capabilities can work more professionally, which is itself a very important form of sales empowerment that brings greater value. We see this essentially as process intelligence: it helps reduce errors in processes and thereby improves overall process efficiency.
At the same time, the reason real-time forecasting fields, such as virtual shopping assistants, will land later is that sales forecasting in retail today is not yet satisfactory. Behind this, enterprises need deep industry know-how, and they must also accumulate a large amount of historical data to do real-time forecasting well. Although this technology is still at an early stage, its application scenarios look promising.
From a planning perspective, we believe the directions above can go directly into next year's plans and deliver good results in stages.
It is crucial to emphasize, however, that even in an upward trend, expectations must be managed alongside the talk of value. For data analysis, Q&A, and similar formats, expecting them to replace human roles at the level of business management or decision-making is unrealistic at this stage. For now they mainly improve efficiency: a person of average ability can improve markedly, perhaps not to the level of a seasoned professional but at least to an intermediate level, with a significant boost in work efficiency, yet they are unlikely to replace human work in the original scenarios outright. Managing expectations is often about preventing a good thing from being written off entirely because expectations were set too high; progress here is necessarily gradual.
Implementation of AI Agents in Recruitment Scenarios
Finally, take a recruitment scenario at a headhunting company in the office domain as an example of how an Agent can be implemented in a single scenario. The process is divided into three phases, which we believe is a feasible and relatively reliable path. The first phase works at the level of individual tasks, the second at the level of events built on those tasks, and the third at the level of the end-to-end process.
In theory, implementation moves through these three phases. The first phase improves efficiency at a single task point: in this case, for example, how to communicate effectively with the demand side, and how to accurately summarize the JD and establish a clear candidate persona, points where effects and value appear quickly. The second phase then builds on the first and focuses on the most crucial events in the process, such as clearly articulating the requirements. The third phase connects the entire recruitment process.
In practice, when the third phase is completed, the existing process itself can be modified appropriately. Once single-point capabilities have improved, the original model of advancing the process step by step through human effort shifts to a collaborative mode in which Agents and humans drive it together, and the existing process has considerable room for improvement.
In summary, our recommendation for each individual scenario is: first identify the tasks that generate immediate effects and visibly improve efficiency, then decompose the process itself, and finally achieve intelligent automation of the entire process, as the sketch below illustrates.
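The following is an illustrative sketch of that three-phase rollout for the recruitment scenario: single tasks first, then key events, then the whole process chained together. The function names and the toy keyword-matching logic are hypothetical stand-ins for Agent-backed capabilities, not the company's actual system.

```python
# Illustrative three-phase rollout for the recruitment scenario described above.
# All helpers are hypothetical stand-ins for Agent-backed capabilities.

from typing import Dict, List


def summarize_jd(raw_requirements: str) -> str:
    # Phase 1: a single task that pays off immediately (JD summarization / persona).
    return f"JD summary: {raw_requirements[:60]}"


def screen_candidates(jd_summary: str, resumes: List[str]) -> List[str]:
    # Phase 2: an event built on that task (screening candidates against the JD).
    keyword = jd_summary.split()[-1].lower()
    return [r for r in resumes if keyword in r.lower()] or resumes[:1]


def recruitment_process(raw_requirements: str, resumes: List[str]) -> Dict[str, object]:
    # Phase 3: the end-to-end process, with humans reviewing each hand-off.
    jd = summarize_jd(raw_requirements)
    shortlist = screen_candidates(jd, resumes)
    return {"jd": jd, "shortlist": shortlist, "next_step": "schedule interviews"}


if __name__ == "__main__":
    print(recruitment_process(
        "Senior data engineer with strong SQL and streaming experience, python",
        ["Alice: 8y python, spark", "Bob: 3y frontend"],
    ))
```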
This is what we see as the implementation timeline for AI Agents in specific scenarios. Other areas such as data analysis, customer service, and Q&A can also follow this logical framework step by step to ultimately achieve overall implementation.
This concludes the sharing on the application scenarios and practical implementation of AI Agents. For the complete video recording and the speaker's presentation materials, please scan the QR code.

The speaker graduated from Tsinghua University, previously worked in the finance divisions of listed companies such as Yili Group, and has nearly 10 years of analytical and consulting experience in finance and private equity investment, focusing on technology research with in-depth study of fields such as cloud computing, big data, and artificial intelligence.