When Windsurf first launched (around October-November 2024), its agentic mode was truly stunning. I even wrote an article about it: AI-Assisted Programming: Should We Follow the Agentic Trend? However, considering that models at the time were not yet strong at combining tools (Windsurf, for example, achieved its agentic mode through heavy engineering rather than by relying on the inherent capabilities of large models), we decided not to follow suit.
Auto-coder.chat itself provides /chat (code design, information retrieval), /coding (programming), /conf (configuring the tool), project index maps, and automatic command execution. But these atomic capabilities require users to combine them on their own. Deep down, I always felt there should be an extra layer that lets auto-coder.chat autonomously decide how to combine these atomic capabilities to complete a coding task.
With the release of R1 and V3, I became acutely aware that AI-assisted programming in China had truly arrived in 2025, and we rapidly completed the integration of R1 during the Spring Festival: the world's first AI-assisted programming tool with a built-in DeepSeek R1 + V3 combination. Throughout the holiday, however, there was no stable R1/V3 provider, so our agentic mode ran less than smoothly; the chaos only subsided in February, when a reliable R1/V3 provider finally emerged and we could develop and test in earnest. We implemented the instruction-combination capability on top of R1 (similar in spirit to Cline, but with far more advanced tools than Cline's).
Now let me show you an inference example, which should outperform other agentic implementations such as Windsurf's. Incidentally, because of design flaws in Windsurf's original agentic mode, its performance after integrating R1 is quite poor and requires significant rework.
First, the conclusion: the agentic programming experience that R1 brings to auto-coder.chat has reached a stage that is fully "production-ready".
Next, let's see how R1 autonomously uses the commands provided by auto-coder.chat to complete the entire workflow after I propose a requirement.
First, let’s look at the requirement:

R1’s automatic workflow:
1. It retrieved the project's directory tree to check whether the user-specified file existed. Since the tree excludes the .auto-coder directory, the file was not found.

2. It didn't give up: it searched by file name and obtained a list of candidate files.

3. It automatically selected the best-matching file, opened it, and viewed the first ten lines, confirming that it indeed matched my description.

4. Based on the information it had gathered, it rewrote my requirement and invoked the coding command to execute the programming action.

5. The programming then began executing.

6. The code was completed and merged.

7. It compared the code before and after the modification to check whether it met the requirement; satisfied with the result, it stopped.
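The workflow above is essentially a model-driven tool-use loop: look up, fall back, confirm, act, verify. The sketch below illustrates that shape with a toy in-memory project; every function and tool name here is hypothetical, not auto-coder.chat's actual API.

```python
from pathlib import PurePosixPath

# Toy in-memory "project" standing in for the real file system.
PROJECT = {
    "src/main.py": "def main():\n    print('hello')\n",
    "src/utils/io.py": "def read_all(path):\n    ...\n",
    ".auto-coder/index.json": "{}",
}

def list_tree(exclude=(".auto-coder",)):
    """Step 1: the directory tree, minus excluded directories."""
    return [p for p in PROJECT
            if not any(part in exclude for part in PurePosixPath(p).parts)]

def search_by_name(name):
    """Step 2: fall back to a file-name search."""
    return [p for p in PROJECT if PurePosixPath(p).name == name]

def head(path, n=10):
    """Step 3: peek at the first n lines to confirm the match."""
    return PROJECT[path].splitlines()[:n]

def run_coding(path, requirement):
    """Steps 4-6: hand the rewritten requirement to coding and merge."""
    PROJECT[path] += f"# TODO ({requirement})\n"
    return PROJECT[path]

def agent(target, requirement):
    # Step 1: the exact path may be missing from the filtered tree.
    if target not in list_tree():
        # Step 2: don't give up; search by file name instead.
        candidates = search_by_name(PurePosixPath(target).name)
        if not candidates:
            return None
        target = candidates[0]           # Step 3: pick the best match...
    assert head(target)                  # ...and confirm it looks right.
    before = PROJECT[target]
    after = run_coding(target, requirement)      # Steps 4-6
    # Step 7: compare before/after and stop when satisfied.
    return after if after != before else None

# The user gave a path that isn't in the tree, so the fallback kicks in.
result = agent("utils/io.py", "add error handling")
```

The point of the sketch is the decision loop, not the helpers: each step's output determines whether the model proceeds, falls back, or stops.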

If you try a few more times, you will find that similar requirements can lead to different inference processes, but they generally solve the problem in the end.
For instance, sometimes it finds multiple files (mainly because I only specified index.json, and several files with that name do exist), and it proactively asks the user for help:
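That behavior can be sketched as a small resolution step (again with hypothetical names, not auto-coder.chat's real API): with exactly one candidate the loop proceeds, but with several same-named matches it defers to the user instead of silently guessing.

```python
def resolve(candidates, ask_user):
    """Pick a file; with several same-named matches, defer to the user.
    (Hypothetical helper, for illustration only.)"""
    if not candidates:
        return None
    if len(candidates) == 1:
        return candidates[0]
    # Several files share the name (e.g. multiple index.json copies):
    # proactively ask for help rather than guessing.
    return ask_user("Which file did you mean?", candidates)

# Simulated user who picks the second option.
choice = resolve(["a/index.json", "b/index.json"],
                 lambda question, options: options[1])
```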

Moreover, R1's intent prediction is strong enough that it can autonomously choose how to respond to non-programming tasks.
For example, when asked what the project is for, it autonomously reads the directory structure, tries to view the README and related code files, and finally produces a project description (it performs far better than my previous implementation of /ask, which relied on a soft-coded simulation of chain-of-thought):


Then you can also ask it to help you configure auto-coder.chat:


Or ask it questions, and it will automatically choose the chat command to interact with you:
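Taken together, these behaviors amount to an intent router in front of the atomic commands: classify the request, then dispatch to /coding, /conf, or /chat. The toy sketch below uses keyword matching purely for illustration; in auto-coder.chat it is R1 itself, not a keyword list, that makes this decision.

```python
def classify_intent(request: str) -> str:
    """Toy stand-in for R1's intent prediction (keyword-based here)."""
    text = request.lower()
    if any(w in text for w in ("fix", "implement", "refactor", "add ")):
        return "coding"                    # programming work
    if any(w in text for w in ("set ", "configure", "config")):
        return "conf"                      # tool configuration
    return "chat"                          # questions and explanations

def dispatch(request, handlers):
    """Route the request to the handler for its predicted intent."""
    return handlers[classify_intent(request)](request)

# Hypothetical handlers that just show which command would run.
handlers = {
    "coding": lambda r: f"/coding {r}",
    "conf":   lambda r: f"/conf {r}",
    "chat":   lambda r: f"/chat {r}",
}

routed = dispatch("what is this project for?", handlers)
```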




