LangGraph Practical Series Part 2: Extending Multi-Agent Applications with Tools

In Part 1, “LangGraph Practical Series Part 1: Creating Stateful Multi-Agent Applications”, I added short-term memory to the chatbot, allowing it to retain context during conversations. In this part, I take it a step further by introducing tools into the chatbot.


Tools allow the chatbot to retrieve real-time data from external sources, making it more dynamic and useful. Large language models (LLMs) are trained on vast amounts of data, but they have a limitation—they lack real-time knowledge. If you ask an LLM about current events or trending topics, it may provide outdated or fabricated information. When the model generates plausible but incorrect responses, this is known as hallucination.

To overcome this limitation, we can equip the chatbot with tools to fetch real-time data. In this article, I will add a simple tool to retrieve the current time. This is just an example, but the same concept can be applied to access APIs, databases, or even live news feeds.

Besides tools, there are other strategies to improve LLM responses, such as Retrieval-Augmented Generation (RAG) and Prompt Engineering. However, in this article, we will focus on integrating tools. Let’s get started!

# Import necessary libraries and modules
from langgraph.graph import StateGraph, START, END, MessagesState  # State graph, entry/exit markers, and message state
from dotenv import load_dotenv  # Load environment variables from .env file
from langchain_openai.chat_models import AzureChatOpenAI  # LangChain wrapper for Azure OpenAI chat models
from langgraph.prebuilt import ToolNode  # Prebuilt node that executes tool calls
from datetime import datetime  # Get current date and time
from langchain_core.tools import tool  # LangChain's tool decorator

# Load environment variables
load_dotenv()

# Define a custom tool to get the current time
@tool
def get_current_time():
    """Call this tool to get the current time"""
    return datetime.now().isoformat()  # Return the current time as an ISO-8601 string

# Register the tool in the tool list
tools = [get_current_time]
tool_node = ToolNode(tools)  # Create a ToolNode containing the tools

# Initialize the language model (LLM) and bind tools
llm = AzureChatOpenAI(
    azure_deployment="gpt-4o-mini",  # Name of the Azure OpenAI deployment
    api_version="2024-08-01-preview",  # API version
    temperature=0,  # Set temperature for deterministic output
    max_tokens=None,  # No limit on token count
    timeout=None,  # No timeout set
    max_retries=2  # Allow up to 2 retries on failure
).bind_tools(tools)  # Bind tools to LLM

# Define a conditional function to decide whether the chatbot should continue using tools or end the conversation
def should_continue(state: MessagesState):
    messages = state["messages"]  # Get messages from the state
    last_message = messages[-1]  # Get the last message in the conversation
    if last_message.tool_calls:  # If the last message called a tool, continue to the "tools" node
        return "tools"
    return END  # Otherwise, end the conversation

# Define the chat node to send messages to the LLM and get responses
def chat(state: MessagesState):
    messages = state["messages"]  # Get messages from the state
    response = llm.invoke(messages)  # Send messages to the LLM for responses
    return {"messages": [response]}  # Wrap the LLM response in the "messages" key and return

# Build the state graph (workflow) to manage the chatbot's conversation flow
workflow = StateGraph(MessagesState)  # Create state graph using MessagesState
workflow.add_node("chat", chat)  # Add "chat" node for normal conversation handling
workflow.add_node("tools", tool_node)  # Add "tools" node for calling tools
workflow.add_edge(START, "chat")  # Connect START to "chat" node to start the workflow
workflow.add_conditional_edges("chat", should_continue, ["tools", END])  # Continue to "tools" or END based on condition
workflow.add_edge("tools", "chat")  # Return to "chat" node after using tools

# Compile the workflow into an executable application
app = workflow.compile()

# Generate a visual representation of the state graph (workflow) and save it as a PNG image
image = app.get_graph().draw_mermaid_png()  # Generate PNG image using Mermaid
with open("sreeni_chatbot_with_tools.png", "wb") as file:  # Save image to file
    file.write(image)

# Start an interactive loop to handle user input and responses
while True:
    query = input("Enter your query or question: ")  # Prompt user for input
    if query.lower() in ["quit", "exit"]:  # Terminate on "quit" or "exit"
        print("bye bye")
        break

    # Process user input using the application's workflow and print response
    for chunk in app.stream(
            {"messages": [("human", query)]}, stream_mode="values"  # Process response in streaming mode
    ):  
        chunk["messages"][-1].pretty_print()  # Print formatted response message

Key Components Explained in the Code:

  1. `@tool` decorator: Used to define tools that can be utilized in the LangChain workflow. For example, the `get_current_time()` function has been registered as a tool that the chatbot can call.

  2. `llm.invoke(messages)`: Sends messages to the language model (LLM) and returns the generated response. This is the core method for interacting with the LLM.

  3. `should_continue()` function: Checks whether the last message contains a tool call and decides whether to route to the tools node or end the conversation.

  4. StateGraph Workflow: The state graph defines the flow of the chatbot conversation. It contains two nodes: the `chat` node for handling normal conversation and the `tools` node for calling external tools. Conditional edges determine when to call tools or end the conversation.

  5. `ToolNode`: Connects tools (like `get_current_time()`) into the workflow, allowing the chatbot to invoke external functionality.

  6. Graph Visualization: Uses the `draw_mermaid_png()` method to visualize the state graph and save it as an image, giving an intuitive view of the chatbot's workflow.
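The routing decision in `should_continue()` boils down to inspecting the last message for pending tool calls. A minimal, standard-library-only sketch of that logic (here `Msg` and the `END` string are stand-ins for LangChain's message objects and LangGraph's `END` sentinel, so the routing can be seen in isolation):

```python
from dataclasses import dataclass, field

END = "__end__"  # Stand-in for LangGraph's END sentinel

@dataclass
class Msg:
    """Minimal stand-in for a LangChain AIMessage."""
    content: str
    tool_calls: list = field(default_factory=list)

def should_continue(messages: list) -> str:
    """Route to "tools" if the last message requests a tool call, else end."""
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END

# An assistant message that requests a tool call is routed to "tools";
# a plain text answer ends the turn.
print(should_continue([Msg("", tool_calls=[{"name": "get_current_time", "args": {}}])]))  # tools
print(should_continue([Msg("It is 10:42.")]))  # __end__
```

In the real graph, LangGraph calls this function after every `chat` step and follows the returned label, which is why the conditional edge lists both `"tools"` and `END` as possible destinations.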

This code demonstrates how to integrate external tools into a LangChain-driven chatbot and manage the conversation flow with LangGraph. The example tool is a simple utility that returns the current time, but the same pattern extends to more complex API calls or data retrieval operations.
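To go beyond a zero-argument tool, a tool can declare typed parameters that the LLM fills in from the conversation. Here is a hypothetical sketch (the function name and offset logic are illustrative, not part of the original code); in the chatbot it would carry the same `@tool` decorator from `langchain_core.tools` and be appended to the `tools` list, exactly like `get_current_time`:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical tool body: in the chatbot this function would be decorated
# with @tool, and its type hints and docstring would tell the LLM what
# argument to supply when calling it.
def get_time_at_utc_offset(offset_hours: int) -> str:
    """Return the current time at the given UTC offset as an ISO-8601 string."""
    tz = timezone(timedelta(hours=offset_hours))
    return datetime.now(tz).isoformat()

print(get_time_at_utc_offset(5))  # e.g. 2025-...+05:00
```

The parameter name and docstring matter: LangChain derives the tool's JSON schema from them, and that schema is what the model sees when deciding how to call the tool.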


In this Part 2, we addressed the limitations of LLMs by integrating tools to fetch real-time information (such as the current time), enhancing the chatbot’s response capabilities. This integration of tools helps bridge the gap between the static knowledge of LLMs and the dynamic real world. By leveraging external APIs, we can enable the chatbot to access the latest information, thereby reducing hallucinations and improving user experience.
