In previous articles, such as "Exploring LLM Application Development (26) – Prompt (Architecture Patterns of Agent Frameworks like AutoGPT, AutoGen, etc.)", I introduced several multi-agent frameworks, including AutoGen and ChatDev. Recently, a promising new framework has emerged in the industry: CrewAI. Standing on the shoulders of frameworks like AutoGen and aiming for practical deployment, it combines the flexibility of AutoGen's conversational agents with the domain-process strengths of ChatDev, while avoiding both AutoGen's lack of framework-level process support and ChatDev's overly narrow domain processes. CrewAI supports dynamic, broad-scenario process design that adapts seamlessly to development and production workflows. In CrewAI, you can define roles for each scenario, as in AutoGen, while also establishing process execution, as in ChatDev, allowing these agents to better achieve specific complex goals. The project has already garnered 3.9K stars on GitHub and reached second place on Product Hunt.

https://github.com/joaomdmoura/crewAI
CrewAI has the following key features:
- Role-based Agent Design: customize agents with specific roles, goals, and tools.
- Autonomous Delegation Between Agents: agents can autonomously delegate tasks and ask each other questions, improving problem-solving efficiency.
- Flexible Task Management: define tasks using customizable tools and dynamically assign them to agents.
- Process-driven Execution (the biggest highlight): currently only sequential task execution is supported, but higher-level process definitions such as consensus and hierarchical processes are planned.
Creating a process is generally as follows:
import os
from crewai import Agent, Task, Crew, Process
os.environ["OPENAI_API_KEY"] = "YOUR KEY"
# You can choose to use a local model through Ollama, for example:
# from langchain.llms import Ollama
# ollama_llm = Ollama(model="openhermes")
# Install duckduckgo-search for this example:
# !pip install -U duckduckgo-search
from langchain.tools import DuckDuckGoSearchRun
search_tool = DuckDuckGoSearchRun()
# Define your agents with roles and goals
researcher = Agent(
    role='Senior Research Analyst',
    goal='Uncover cutting-edge developments in AI and data science',
    backstory="""You work at a leading tech think tank.
    Your expertise lies in identifying emerging trends. You have a knack
    for dissecting complex data and presenting actionable insights.""",
    verbose=True,
    allow_delegation=False,
    tools=[search_tool]
    # You can pass an optional llm attribute specifying which model you want to use.
    # It can be a local model through Ollama / LM Studio or a remote model like
    # OpenAI, Mistral, Anthropic, or others (https://python.langchain.com/docs/integrations/llms/)
    #
    # Examples:
    # llm=ollama_llm  # was defined above in the file
    # llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7)
)
writer = Agent(
    role='Tech Content Strategist',
    goal='Craft compelling content on tech advancements',
    backstory="""You are a renowned Content Strategist, known for your insightful
    and engaging articles. You transform complex concepts into compelling narratives.""",
    verbose=True,
    allow_delegation=True,
    # (optional) llm=ollama_llm
)
# Create tasks for your agents
task1 = Task(
    description="""Conduct a comprehensive analysis of the latest advancements
    in AI in 2024. Identify key trends, breakthrough technologies, and potential
    industry impacts. Your final answer MUST be a full analysis report.""",
    agent=researcher
)
task2 = Task(
    description="""Using the insights provided, develop an engaging blog post
    that highlights the most significant AI advancements. Your post should be
    informative yet accessible, catering to a tech-savvy audience. Make it sound
    cool, avoid complex words so it doesn't sound like AI.
    Your final answer MUST be the full blog post of at least 4 paragraphs.""",
    agent=writer
)
# Instantiate your crew with a sequential process
crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    verbose=2,  # You can set it to 1 or 2 for different logging levels
    process=Process.sequential
)
# Get your crew to work!
result = crew.kickoff()
print("######################")
print(result)
As shown above, two agents are defined: the researcher gathers information, and the writer crafts content based on that information, with execution following the task order. Tasks themselves can also be customized, for example by specifying particular sources for information collection, as in the example below, which scrapes posts from the LocalLLaMA subreddit on Reddit.
# pip install praw
import time

import praw
from langchain.tools import tool

class BrowserTool():

    @tool("Scrape reddit content")
    def scrape_reddit(max_comments_per_post=5):
        """Useful to scrape Reddit content."""
        reddit = praw.Reddit(
            client_id="your-client-id",
            client_secret="your-client-secret",
            user_agent="your-user-agent",
        )
        subreddit = reddit.subreddit("LocalLLaMA")
        scraped_data = []
        for post in subreddit.hot(limit=10):
            post_data = {"title": post.title, "url": post.url, "comments": []}
            try:
                post.comments.replace_more(limit=0)  # Load top-level comments only
                comments = post.comments.list()
                if max_comments_per_post is not None:
                    comments = comments[:max_comments_per_post]
                for comment in comments:
                    post_data["comments"].append(comment.body)
                scraped_data.append(post_data)
            except praw.exceptions.APIException as e:
                print(f"API Exception: {e}")
                time.sleep(60)  # Sleep for 1 minute before retrying
        return scraped_data
Using it only requires replacing search_tool with BrowserTool().scrape_reddit in the agent's tool list, as in the short sketch below. Additionally, CrewAI supports integrating a human as a tool within an agent, shown in the snippet that follows the sketch; for the benefits of this approach, refer to the article "Human as a Tool for LLM Agents (Human-In-The-Loop), Enhancing Success Rates in Solving Complex Problems".
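As a minimal sketch of the swap (the role, goal, and backstory below are illustrative, not from the original example), the scraper is handed to an agent like any other tool:

# Hypothetical agent wiring: the @tool-decorated method is already a LangChain tool
reddit_researcher = Agent(
    role='Senior Research Analyst',
    goal='Summarize what the LocalLLaMA community is discussing this week',
    backstory="""You monitor open-source LLM communities for emerging trends.""",
    verbose=True,
    allow_delegation=False,
    tools=[BrowserTool().scrape_reddit],  # replaces search_tool from the first example
)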
# Load human tools (LangChain's built-in human-in-the-loop tool,
# which prompts for input on the command line)
from langchain.agents import load_tools
human_tools = load_tools(["human"])

# Define your agents with roles and goals
researcher = Agent(
    role='Senior Research Analyst',
    goal='Uncover cutting-edge developments in AI and data science',
    backstory="""You are a Senior Research Analyst at a leading tech think tank.
    Your expertise lies in identifying emerging trends and technologies in AI and
    data science. You have a knack for dissecting complex data and presenting
    actionable insights.""",
    verbose=True,
    allow_delegation=False,
    # Passing human tools to the agent
    tools=[search_tool] + human_tools
)
Moreover, to save on token costs and preserve privacy, local LLM platforms like Ollama can be integrated. For background on Ollama, refer to "Exploring LLM Application Development (17) – Model Deployment and Inference (Framework Tools – ggml, mlc-llm, ollama)". After installation and configuration, integration can be done as follows. Ollama offers many local models, and the popular Mistral model is recommended.
from langchain.llms import Ollama
ollama_openhermes = Ollama(model="openhermes")
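Since Mistral is the recommended model, a hypothetical variant of the same wiring (assuming the model has first been pulled locally, e.g. via ollama pull mistral) would be:

# Hypothetical variant: use the Mistral model served by a local Ollama instance
ollama_mistral = Ollama(model="mistral")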
Pass the Ollama model to agents: when creating your agents within the CrewAI framework, you can pass the Ollama model as an argument to the Agent constructor. For instance:
local_expert = Agent(
    role='Local Expert at this city',
    goal='Provide the BEST insights about the selected city',
    backstory="""A knowledgeable local guide with extensive information
    about the city, its attractions and customs""",
    tools=[
        # Custom tool classes defined elsewhere in the example
        SearchTools.search_internet,
        BrowserTools.scrape_and_summarize_website,
    ],
    llm=ollama_openhermes,  # Ollama model passed here
    verbose=True
)
The following is a complete example of CrewAI based on Ollama: