Building Intelligent APIs with CrewAI and FastAPI

Table of Contents

· 1. Introduction
  ∘ Customer Support Agent
  ∘ Personal Finance Advisor
  ∘ Medical Assistant
  ∘ Learning Companion
  ∘ Project Management Assistant
  ∘ Creative Partner
  ∘ Smart Home Manager
  ∘ Cybersecurity Assistant
· 2. Setting the Stage
· 3. Understanding CrewAI
  ∘ 1. AI Agents
  ∘ 2. Tools
  ∘ 3. Processes
  ∘ 4. Tasks
· 4. Setting Up FastAPI with CrewAI
· Conclusion

1. Introduction

In 2025, the way we develop software has changed, driven largely by recent advances in AI. AI has made its way into our everyday tools and applications, including popular IDEs such as IntelliJ IDEA and Microsoft Visual Studio Code.

Before introducing crewAI, I would like to briefly explain what an LLM is. **Large Language Models (LLMs)** are machine learning models capable of understanding and generating natural language text: a type of artificial intelligence (AI) program that can recognize and generate text, among other tasks.

In simple terms, LLMs have been trained on enough data to recognize and interpret human language and other types of data. LLMs can automate complex, multi-step workflows and tasks. For example, you can use an LLM to build an assistant that autonomously orders products online on your behalf and arranges their delivery in the app. These LLM-based assistants are known as agents.

An agent is an LLM-driven assistant that is assigned specific tasks and given tools to accomplish them. In its basic form, a typical AI agent may be equipped with memory to store and manage user interactions, the ability to communicate with external data sources, and functions it can call to perform its tasks. Common examples of what agents can do include the following.

Customer Support Agent

AI agents can act as 24/7 customer service representatives, handling FAQs, resolving customer issues, and escalating complex queries to human agents. For instance, an AI agent in an e-commerce application can assist users in tracking orders, processing returns, or providing product recommendations in real-time.

Personal Finance Advisor

AI agents in financial applications can serve as virtual advisors, helping users manage budgets, analyze spending patterns, and suggest investment opportunities based on their financial goals. For example, after analyzing a user’s risk profile, it might recommend specific mutual funds or ETFs.

Medical Assistant

A healthcare-focused agent can assist patients in scheduling doctor appointments, reminding them of their medication schedules, or answering basic health inquiries. For instance, an agent can help users monitor chronic illnesses by analyzing data from wearable devices and providing health insights.

Learning Companion

In education, AI agents can serve as tutors, guiding learners through personalized learning plans. They can help users reinforce learning by explaining difficult concepts, suggesting additional resources, or even creating practice quizzes.

Project Management Assistant

Agents integrated into project management tools can help organize tasks, set deadlines, and automate meeting scheduling. For example, they can analyze team progress, identify bottlenecks, and suggest solutions to improve productivity.

Creative Partner

AI agents can act as co-creators in the arts. For instance, in content creation, agents can help generate ideas, write scripts, or create graphic designs. They may analyze trends to suggest creative formats that resonate with specific audiences.

Smart Home Manager

In smart home ecosystems, AI agents can automate and optimize household operations. For example, they can adjust lighting, control thermostats, or even recommend energy-saving tips based on homeowners’ preferences and behaviors.

Cybersecurity Assistant

An AI agent can monitor network activity in real-time, detect anomalies, and respond to potential threats. It might take proactive measures by blocking suspicious IPs or notifying administrators of critical vulnerabilities.

These examples illustrate how AI agents adapt to various scenarios, enhancing efficiency, convenience, and user experience.

Multi-agent platforms have been developed to manage such complex AI workflows, and crewAI is one of them. In this article, I will develop a workflow using crewAI and make it callable externally via FastAPI. Additionally, I will provide a background task mechanism to support concurrent requests.

2. Setting the Stage

Let's create a folder (you can name it whatever you like); I call it app.

mkdir -p app 

Create a virtual environment. I assume you have Python installed; I used Python 3.12, because I ran into issues with the crewAI dependencies on Python 3.13.

python -m venv .venv

Next, create a requirements.txt file listing the project dependencies:

crewai
fastapi
uvicorn
python-dotenv
pydantic
celery
requests

Then activate the virtual environment and install the dependencies:

source .venv/bin/activate
pip install -r requirements.txt
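
Most crews also need an LLM provider behind them. Assuming you use OpenAI (other providers work too), a minimal .env file in the project root could look like the sketch below; python-dotenv is in the requirements list precisely so the application can load these values at startup. The variable names follow crewAI's environment-based configuration, and the values here are placeholders.

# .env (sketch with placeholder values)
OPENAI_API_KEY=your-openai-api-key
# Optional: pin a specific model; crewAI falls back to its default otherwise
OPENAI_MODEL_NAME=gpt-4o-mini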

3. Understanding CrewAI

crewAI is an open-source multi-agent orchestration framework created by João Moura. This Python-based framework leverages artificial intelligence (AI) collaboration by orchestrating role-playing autonomous AI agents to work together as a cohesive team or “crew” to accomplish tasks. The goal of crewAI is to provide a powerful framework for automating multi-agent workflows.

I won't delve into how crewAI works internally, as this article focuses on building a FastAPI application backed by crewAI. For more information on how crewAI works, you can check out the documentation.

[Diagram: the crewAI conceptual framework]

This diagram illustrates the conceptual framework of a crew, focusing on the roles and collaboration of AI agents in completing tasks to achieve specific outcomes. Here is a breakdown of the key components:

1. AI Agents

  • The diagram shows multiple AI agents, presented at the top as black boxes.
  • These agents work collaboratively and can delegate tasks or ask each other questions.

2. Tools

  • A dedicated section highlights tools, which agents use to perform their tasks. Tools are external functions or services that extend an agent's capabilities.

3. Processes

  • Processes define how AI agents collaborate. This includes:
    ∘ How tasks are assigned.
    ∘ How agents interact with each other.
    ∘ How agents perform their work.

4. Tasks

  • Tasks are displayed at the bottom, representing the individual operations or responsibilities that agents need to handle.
  • Tasks can:
    ∘ Override agent tools by specifying which tools to use.
    ∘ Be assigned to specific agents.
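
To make these four components concrete, here is a minimal, self-contained sketch that defines one agent, one task, and a crew with a sequential process inline. The repository version shown later uses YAML configuration instead, and this sketch assumes an OpenAI API key is available in the environment.

from crewai import Agent, Crew, Process, Task

# One agent with a role, goal, and backstory — the three core agent attributes
researcher = Agent(
    role="Researcher",
    goal="Collect concise facts about {topic}",
    backstory="You are a meticulous analyst who verifies every claim.",
)

# One task assigned to that agent, with an explicit expected output
research_task = Task(
    description="Gather the five most important facts about {topic}.",
    expected_output="A bullet list of five facts.",
    agent=researcher,
)

# A crew combines agents and tasks and runs them with a process (sequential here);
# values passed via inputs are interpolated into the {topic} placeholders
crew = Crew(agents=[researcher], tasks=[research_task], process=Process.sequential)
result = crew.kickoff(inputs={"topic": "AI LLMs"})
print(result)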

4. Setting Up FastAPI with CrewAI

Before jumping into the demonstration, note that I have prepared a repository for you to use. You can also open issues for anything you would like to see addressed.

  • In this repository, I created an analyzer.py Python script and added my endpoints, which you can call to start crewAI as a background task:
from fastapi import APIRouter, HTTPException, BackgroundTasks
from app.models.models import TopicRequest, TaskResponse
from app.services.services import BotService

router = APIRouter()

@router.post("/analyze", response_model=TaskResponse)
async def analyze_topic(request: TopicRequest, background_tasks: BackgroundTasks):
    task_id = BotService.create_task(request.topic)
    background_tasks.add_task(BotService.process_task, task_id, request.topic)
    return BotService.get_task_status(task_id)

@router.get("/task/{task_id}", response_model=TaskResponse)
async def get_task_status(task_id: str):
    task = BotService.get_task_status(task_id)
    if not task:
        raise HTTPException(status_code=404, detail="Task not found")
    return task

This code snippet defines a FastAPI router with endpoints for creating and managing background tasks related to topic analysis.
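
The router imports TopicRequest, TaskResponse, and TaskStatus from app/models/models.py. The repository contains the actual definitions; a minimal version, inferred from how the fields are used in the service below, might look like this:

from enum import Enum
from typing import Any, Optional
from pydantic import BaseModel

class TaskStatus(str, Enum):
    PENDING = "pending"
    PROCESSING = "processing"
    COMPLETED = "completed"
    FAILED = "failed"

class TopicRequest(BaseModel):
    topic: str

class TaskResponse(BaseModel):
    task_id: str
    status: TaskStatus
    result: Optional[Any] = None   # crew output once the task completes (you may prefer to store str(result))
    error: Optional[str] = None    # populated when the task fails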

  • The BotService class in services.py manages the lifecycle of topic-analysis tasks using UrlInsightBot, including task creation, asynchronous processing, status updates, and logging.
import asyncio
import uuid
import logging
from typing import Dict
from app.models.models import TaskStatus, TaskResponse
from app.crew.crew import UrlInsightBot

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

class BotService:
    _tasks: Dict[str, TaskResponse] = {}

    @classmethod
    def create_task(cls, topic: str) -> str:
        task_id = str(uuid.uuid4())
        cls._tasks[task_id] = TaskResponse(
            task_id=task_id,
            status=TaskStatus.PENDING
        )
        logger.info(f"Task {task_id} created with status PENDING for topic: {topic}")
        return task_id

    @classmethod
    async def process_task(cls, task_id: str, topic: str):
        try:
            cls._tasks[task_id].status = TaskStatus.PROCESSING
            logger.info(f"Task {task_id} status changed to PROCESSING")
            bot = UrlInsightBot()
            result = await bot.crew().kickoff_async(inputs={'topic': topic})
            cls._tasks[task_id].status = TaskStatus.COMPLETED
            cls._tasks[task_id].result = result
            logger.info(f"Task {task_id} completed successfully with status {cls._tasks[task_id].status}")
        except Exception as e:
            cls._tasks[task_id].status = TaskStatus.FAILED
            cls._tasks[task_id].error = str(e)
            logger.error(f"Task {task_id} failed with error: {e}")

    @classmethod
    async def process_task_sleep(cls, task_id: str, topic: str):
        try:
            cls._tasks[task_id].status = TaskStatus.PROCESSING
            logger.info(f"Task {task_id} status changed to PROCESSING")
            # Simulate processing
            await asyncio.sleep(5)  # Simulate a long-running task
            cls._tasks[task_id].status = TaskStatus.COMPLETED
            cls._tasks[task_id].result = f"Processed topic: {topic}"
            logger.info(f"Task {task_id} completed successfully with status {cls._tasks[task_id].status}")
        except Exception as e:
            cls._tasks[task_id].status = TaskStatus.FAILED
            cls._tasks[task_id].error = str(e)

    @classmethod
    def get_task_status(cls, task_id: str) -> TaskResponse:
        return cls._tasks.get(task_id)

  • This code uses the CrewAI framework to define a UrlInsightBot crew class, configuring agents and tasks from YAML files and setting up two agents (a researcher and a reporting analyst) along with their respective configurations:
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
import logging

# If you want to run a snippet of code before or after the crew starts,
# you can use the @before_kickoff and @after_kickoff decorators
# https://docs.crewai.com/concepts/crews#example-crew-class-with-decorators

# Suppress noisy logs from LiteLLM and httpx
logging.getLogger("LiteLLM").setLevel(logging.WARNING)
logging.getLogger("httpx").setLevel(logging.WARNING)

@CrewBase
class UrlInsightBot():
    """UrlInsightBot crew"""

    # Learn more about YAML configuration files here:
    # Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
    # Tasks: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    # If you would like to add tools to your agents, you can learn more about it here:
    # https://docs.crewai.com/concepts/agents#agent-tools
    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
        )

    # To learn more about structured task outputs, 
    # task dependencies, and task callbacks, check out the documentation:
    # https://docs.crewai.com/concepts/tasks#overview-of-a-task
    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task'],
        )

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config['reporting_task'],
        )

    @crew
    def crew(self) -> Crew:
        """Creates the UrlInsightBot crew"""
        # To learn how to add knowledge sources to your crew, check out the documentation:
        # https://docs.crewai.com/concepts/knowledge#what-is-knowledge

        return Crew(
            agents=self.agents,  # Automatically created by the @agent decorator
            tasks=self.tasks,    # Automatically created by the @task decorator
            process=Process.sequential,
        )
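
The class above reads its agent and task definitions from config/agents.yaml and config/tasks.yaml. Those files ship with the repository; for orientation, an abridged sketch following crewAI's documented YAML layout (with {topic} interpolated from the kickoff inputs) might look like this:

# config/agents.yaml (abridged sketch)
researcher:
  role: "{topic} Senior Researcher"
  goal: "Uncover the most relevant developments in {topic}"
  backstory: "A seasoned researcher who digs up the latest findings."

reporting_analyst:
  role: "{topic} Reporting Analyst"
  goal: "Turn research findings into a clear, detailed report"
  backstory: "A meticulous analyst known for well-structured reports."

# config/tasks.yaml (abridged sketch)
research_task:
  description: "Research {topic} and collect the most important facts."
  expected_output: "A bullet list of the key findings about {topic}."
  agent: researcher

reporting_task:
  description: "Expand the research findings into a full report."
  expected_output: "A markdown report covering each finding in detail."
  agent: reporting_analyst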

After you clone the repository and set up the project, you can run it with the following command and make calls using curl.

cd ./crewai/url_insight_api
uvicorn app.main:app --reload --port 8000 --log-config config/log_config.yaml
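
The uvicorn target app.main:app points at the FastAPI application object. The repository ships its own app/main.py; conceptually, it only needs to create the app and mount the router under the /api/v1 prefix used by the curl calls below (the router's module path is assumed here):

# app/main.py (minimal sketch; the repository version may configure logging, CORS, etc.)
from fastapi import FastAPI
from app.api.routes import router  # assumed module path for the router defined earlier

app = FastAPI(title="URL Insight API")
app.include_router(router, prefix="/api/v1")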

Testing the Server

# Start analysis
curl -X POST http://localhost:8000/api/v1/analyze \
  -H "Content-Type: application/json" \
  -d '{"topic": "AI LLMs"}'

# Check task status (replace <task_id> with the actual ID from the previous response)
curl http://localhost:8000/api/v1/task/<task_id>

You can also send several requests at once; because each crew runs as a FastAPI background task, the API keeps responding while earlier analyses are still in progress.
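
For example, the following shell loop kicks off three analyses in parallel (with hypothetical topics); each call returns its own task_id immediately while the crews run in the background:

# Fire three analysis requests in parallel
for topic in "AI LLMs" "Vector databases" "Edge computing"; do
  curl -s -X POST http://localhost:8000/api/v1/analyze \
    -H "Content-Type: application/json" \
    -d "{\"topic\": \"$topic\"}" &
done
wait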

Conclusion

Integrating CrewAI with FastAPI showcases the power of combining AI-driven agent collaboration with a robust Python web framework to create efficient, scalable, and intelligent applications. By leveraging CrewAI’s capabilities, developers can seamlessly manage processes, delegate tasks to AI agents, and achieve optimized outcomes. Using FastAPI ensures that the system is not only fast and reliable but also highly scalable, making it suitable for real-world applications such as automating operations, providing co-pilot assistance, or streamlining complex workflows.

As AI agents continue to evolve, frameworks like CrewAI will become essential tools for building innovative systems that simulate human collaboration and decision-making. Through this demonstration, developers now have a blueprint to harness this potential, delivering smarter applications that elevate productivity and efficiency to new heights.
