Ollama-deep-research [1] is an open-source agent similar to OpenAI deep research. Its capabilities are far more limited than OpenAI's, but it lets you experience how an agent can be used to research a topic and how agents are built on the LangGraph framework.
Given a research topic, the ollama-deep-research agent uses a large model to generate a search query, retrieves information with tools such as Tavily or Perplexity, and has the model summarize the retrieved results. The model then reflects on the summary, identifies knowledge gaps or areas that need further exploration, and generates a follow-up query to retrieve more information and improve the research results. After a configurable number of iterations, it produces a research report with references. The execution process of the agent is shown below.
Figure 1 Execution process of the ollama deep research agent
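This generate–search–summarize–reflect loop maps naturally onto a LangGraph state machine. The sketch below is a simplified reconstruction for illustration, not the project's actual code: the node names, the `ResearchState` fields, the stubbed node bodies, and the fixed loop count are all assumptions.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class ResearchState(TypedDict):
    topic: str
    query: str
    results: str
    summary: str
    loop_count: int

MAX_LOOPS = 3  # assumed default; the real agent exposes the loop count as a setting

def generate_query(state: ResearchState) -> dict:
    # The local LLM turns the research topic into a web search query (stubbed here).
    return {"query": f"overview of {state['topic']}", "loop_count": 0}

def web_search(state: ResearchState) -> dict:
    # Tavily or Perplexity would be called here with state["query"].
    return {"results": f"[search results for: {state['query']}]"}

def summarize(state: ResearchState) -> dict:
    # The LLM folds the new results into the running summary.
    return {"summary": (state.get("summary", "") + "\n" + state["results"]).strip()}

def reflect(state: ResearchState) -> dict:
    # The LLM looks for knowledge gaps and proposes a follow-up query.
    return {"query": f"open questions about {state['topic']}",
            "loop_count": state["loop_count"] + 1}

def route(state: ResearchState) -> str:
    # Keep iterating until the configured loop budget is used up.
    return "web_search" if state["loop_count"] < MAX_LOOPS else END

builder = StateGraph(ResearchState)
for name, fn in [("generate_query", generate_query), ("web_search", web_search),
                 ("summarize", summarize), ("reflect", reflect)]:
    builder.add_node(name, fn)
builder.add_edge(START, "generate_query")
builder.add_edge("generate_query", "web_search")
builder.add_edge("web_search", "summarize")
builder.add_edge("summarize", "reflect")
builder.add_conditional_edges("reflect", route)
graph = builder.compile()

final = graph.invoke({"topic": "local LLM research agents"})
print(final["summary"])
```

Each node returns only the state fields it updates, and the conditional edge after `reflect` decides whether to loop back to `web_search` or end, which is how the iteration budget in the real agent is enforced.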
Ollama-deep-research can run multi-step research and generate well-formatted reports. It has the following features:
– Local model support: the agent runs on a local LLM served by Ollama and can use open-source models such as DeepSeek R1, Qwen 2.5, and Llama 3.2.
– Iterative research: the number of research iterations is configurable, so the research summary is refined over multiple loops.
– Web search: integrates with the Tavily or Perplexity APIs to perform web searches and extract information automatically (see the sketch after this list).
– Formatted output: generated summaries are in Markdown, making them easy to organize and archive later.
– Code transparency: the project is open source, and the code is easy to understand and modify.
– Audit tracking: with LangSmith, each step of the research agent's operations can be traced in detail, keeping the agent's workflow transparent.
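To make the local-model and web-search features concrete, here is a minimal sketch of one search-and-summarize step wired with a local Ollama model and the Tavily client. The model name, prompt wording, and function name are assumptions for illustration, not the project's exact code; the LangSmith environment variables shown in the comments are what enable the audit trail mentioned above.

```python
import os
from tavily import TavilyClient          # pip install tavily-python
from langchain_ollama import ChatOllama  # pip install langchain-ollama

# Optional: with these two variables set, LangSmith records every LLM and tool call.
# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = "<your LangSmith key>"

llm = ChatOllama(model="llama3.2", temperature=0)            # any model pulled into Ollama
tavily = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])  # Tavily API key from the environment

def search_and_summarize(query: str, max_results: int = 3) -> str:
    """Fetch web results for one query and summarize them as Markdown."""
    response = tavily.search(query, max_results=max_results)
    sources = "\n\n".join(
        f"- {r['title']} ({r['url']})\n  {r['content']}" for r in response["results"]
    )
    prompt = (
        "Summarize the following search results as a short Markdown section, "
        f"keeping the source URLs as references:\n\n{sources}"
    )
    return llm.invoke(prompt).content

print(search_and_summarize("how do LangGraph research agents work?"))
```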
Figure 2 Configuration interface of the research agent
The LangGraph Studio UI can display the agent's working process in real time.
Figure 3 LangGraph Studio UI
Figure 4 Generated report with reference sources
Ollama-deep-research is just a proof-of-concept research agent, and many areas still need enhancement. For example, it could use RAG (Retrieval-Augmented Generation) to draw on a local knowledge base as additional research input, and the logic of the research loop itself is still relatively simple. It is best treated as a starting point for further development and refinement rather than a production tool; one way the search step could be swapped for local retrieval is sketched below.
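The following sketch shows how a local vector store could stand in for the web search node from the earlier graph sketch. The sample documents, embedding model choice, and function name are assumptions made for illustration, not part of the project.

```python
from langchain_community.vectorstores import FAISS  # pip install faiss-cpu langchain-community
from langchain_ollama import OllamaEmbeddings

# Build a small local knowledge base; in practice the texts would come from your own documents.
docs = [
    "Internal design note: the ingestion pipeline batches records every 5 minutes.",
    "Postmortem 2023-11: the cache layer caused stale reads under high load.",
]
embeddings = OllamaEmbeddings(model="nomic-embed-text")  # any embedding model served by Ollama
store = FAISS.from_texts(docs, embeddings)
retriever = store.as_retriever(search_kwargs={"k": 2})

def local_search(state: dict) -> dict:
    """Drop-in replacement for the web_search node: retrieve from the local store instead."""
    hits = retriever.invoke(state["query"])
    return {"results": "\n".join(doc.page_content for doc in hits)}
```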
References:
[1] https://github.com/langchain-ai/ollama-deep-researcher
[2] https://www.youtube.com/watch?v=XGuTzHoqlj8