Getting Started with LangGraph: Enhancing Chatbots with Tools
Chatbots have become an indispensable part of modern AI applications, whether for customer service, question answering, or casual conversation. However, traditional chatbots are limited by their training data and cannot access up-to-date information in real time, which significantly restricts their usefulness. In this article, we will show how to use LangGraph and its tool-calling feature to give a chatbot real-time search capabilities, allowing it to answer a much broader range of questions. This example uses the Zhipu AI large model and builds on the previous article, "Getting Started with LangGraph: Building a Basic Chatbot."
1. Why Enhance Chatbots?
In practical applications, users may ask chatbots a variety of questions that exceed the knowledge scope of the pre-trained models. For example, users might ask, “What’s the weather like today?” or “What are the latest popular movies?” These questions require real-time access to the latest information to answer accurately. Traditional chatbots often fail to provide accurate answers due to their lack of real-time data retrieval capabilities. By integrating tool-calling features, we can enable chatbots to call external tools (such as search engines) to retrieve the latest information and provide more accurate responses.
2. Introduction to LangGraph
LangGraph is a powerful tool for building intelligent language model applications with state management and multi-role collaboration capabilities. It is built on top of the LangChain library and extends it to allow cycles in the computation graph, enabling more complex, agent-like behaviors. For instance, you can have the language model repeatedly decide on its next action in a loop until some condition is met.
Core Concepts of LangGraph include:
State Graph: LangGraph revolves around the concept of state graphs, where each node in the graph represents a step in the computational process, and the state of the graph is passed and updated during computation.
Nodes: Nodes are the fundamental building blocks of LangGraph, with each node representing a function or computational step. You can define nodes to perform specific tasks, such as processing input, making decisions, or interacting with external APIs.
Edges: Edges are used to connect nodes in the graph, defining the flow of computation. LangGraph supports conditional edges, allowing for dynamic decisions on the next node to execute based on the current state of the graph.
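Before using LangGraph itself, the three concepts above can be illustrated with a minimal, library-free sketch: nodes are functions that update a shared state, and a conditional edge picks the next node (or ends) based on that state. The names here (`add_one`, `route`, `run`) are purely illustrative and are not part of LangGraph's API.

```python
# A minimal sketch of the state-graph idea, with no libraries involved:
# nodes are functions that update a shared state dict, and conditional
# edges choose the next node to run based on the current state.
END = "__end__"

def add_one(state):
    state["value"] += 1
    return state

def route(state):
    # Conditional edge: keep looping until value reaches 3, then stop.
    return "add_one" if state["value"] < 3 else END

nodes = {"add_one": add_one}
edges = {"add_one": route}

def run(start, state):
    current = start
    while current != END:
        state = nodes[current](state)      # execute the node
        current = edges[current](state)    # follow the conditional edge
    return state

print(run("add_one", {"value": 0}))  # {'value': 3}
```

This captures the essence of what LangGraph formalizes: state flows through nodes, and edges (including conditional ones) decide the control flow.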
3. Integrating Tools to Enhance Chatbots
To enable the chatbot to retrieve real-time information, we need to integrate a web search tool. In this example, we will use the Tavily search engine. Here are the steps to implement this functionality:
1. Install Required Packages and Set Up API Key
First, we need to install the Python library for the Tavily search engine and the community version of LangChain. Then, set your TAVILY_API_KEY.
pip install -U tavily-python langchain_community
Set the environment variable:
import os
os.environ["TAVILY_API_KEY"] = "your_tavily_api_key_here"
2. Define the Tool
Next, we define a tool for calling the Tavily search engine. This tool will serve as an extension for the chatbot, helping it retrieve the latest information.
from langchain_community.tools.tavily_search import TavilySearchResults
tool = TavilySearchResults(max_results=2)
tools = [tool]
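With a valid `TAVILY_API_KEY`, you could invoke the tool directly with `tool.invoke("What is LangGraph?")` and get back a list of results capped at `max_results`. The sketch below uses hard-coded sample data to show the typical shape of those results (a list of dicts with `url` and `content` fields); the exact fields returned by Tavily are an assumption here, not a guarantee.

```python
# Sample data illustrating the typical shape of TavilySearchResults output.
# With a real API key you would instead call: tool.invoke("What is LangGraph?")
sample_results = [
    {"url": "https://langchain-ai.github.io/langgraph/", "content": "LangGraph is a library for ..."},
    {"url": "https://python.langchain.com/", "content": "LangChain is a framework for ..."},
]

# A downstream consumer might iterate over the results like this:
for r in sample_results:
    print(r["url"])
```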
3. Build the LangGraph Graph
Now, we will build a LangGraph graph that defines the behavior of the chatbot. This graph will contain two main nodes: one for the main logic of the chatbot and another for tool invocation.
import os
from typing import Annotated
from langchain_community.chat_models import ChatZhipuAI
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
os.environ["ZHIPUAI_API_KEY"] = "your-zhipu-api"
class State(TypedDict):
    # add_messages appends new messages to the list instead of replacing it
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

llm = ChatZhipuAI(
    model="glm-4-flash",
    temperature=0.5,
)
# Bind the tools to the model so it can emit tool calls in its responses
llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)
4. Add Tool Node
We need to define a tool node that will invoke the tool when requested by the chatbot.
import json
from langchain_core.messages import ToolMessage
class BasicToolNode:
    """A node that runs the tools requested in the last AIMessage."""

    def __init__(self, tools: list) -> None:
        self.tools_by_name = {tool.name: tool for tool in tools}

    def __call__(self, inputs: dict):
        if messages := inputs.get("messages", []):
            message = messages[-1]
        else:
            raise ValueError("No message found in input")
        outputs = []
        for tool_call in message.tool_calls:
            tool_result = self.tools_by_name[tool_call["name"]].invoke(
                tool_call["args"]
            )
            outputs.append(
                ToolMessage(
                    content=json.dumps(tool_result),
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                )
            )
        return {"messages": outputs}
tool_node = BasicToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
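To see what the dispatch logic in `BasicToolNode` does, here is a library-free sketch using tiny stand-ins for the tool and the AI message. `FakeTool` and `FakeMessage` are illustrative names, not LangChain classes; the `tool_calls` entries mirror the `{"name", "args", "id"}` shape the node reads.

```python
# Library-free sketch of the tool-node dispatch loop above, using a tiny
# stand-in tool and message (FakeTool/FakeMessage are illustrative only).
import json

class FakeTool:
    name = "search"
    def invoke(self, args):
        return {"query": args["query"], "hits": 2}

class FakeMessage:
    tool_calls = [{"name": "search", "args": {"query": "weather"}, "id": "call_1"}]

# Same lookup-then-invoke pattern as BasicToolNode.__call__:
tools_by_name = {t.name: t for t in [FakeTool()]}
outputs = []
for tool_call in FakeMessage.tool_calls:
    result = tools_by_name[tool_call["name"]].invoke(tool_call["args"])
    outputs.append({"content": json.dumps(result), "tool_call_id": tool_call["id"]})

print(outputs[0]["content"])  # {"query": "weather", "hits": 2}
```

The real node wraps each result in a `ToolMessage` so the model can see which tool call produced which output.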
5. Define Conditional Edges
Conditional edges are used to determine the next action based on the chatbot’s output. If the chatbot requests to call a tool, we will route to the tool node; otherwise, we will end the conversation.
from typing import Literal
def route_tools(state: State) -> Literal["tools", END]:
    """Route to the tools node if the last message has tool calls; otherwise end."""
    if isinstance(state, list):
        ai_message = state[-1]
    elif messages := state.get("messages", []):
        ai_message = messages[-1]
    else:
        raise ValueError(f"No messages found in input state to tool_edge: {state}")
    if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
        return "tools"
    return END
graph_builder.add_conditional_edges("chatbot", route_tools, {"tools": "tools", END: END})
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile()
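The compiled graph implements a simple control loop: run the chatbot node, follow the conditional edge to the tools node whenever the model requests a tool, loop back to the chatbot with the tool results, and end once the model answers without tool calls. The sketch below mimics that loop with stand-in functions (`fake_chatbot`, `fake_tools` are illustrative names, not LangGraph APIs).

```python
# Library-free sketch of the control flow the compiled graph executes:
# chatbot -> (tools -> chatbot)* -> END, driven by the conditional edge.
END = "__end__"

def fake_chatbot(state):
    # Pretend the model requests one tool call on its first turn, then answers.
    state["turns"] += 1
    state["wants_tool"] = state["turns"] < 2
    return state

def fake_tools(state):
    state["tool_runs"] += 1
    return state

def route(state):
    return "tools" if state["wants_tool"] else END

node = "chatbot"
state = {"turns": 0, "tool_runs": 0, "wants_tool": False}
while node != END:
    if node == "chatbot":
        state = fake_chatbot(state)
        node = route(state)       # conditional edge out of "chatbot"
    else:
        state = fake_tools(state)
        node = "chatbot"          # fixed edge: "tools" -> "chatbot"

print(state)  # {'turns': 2, 'tool_runs': 1, 'wants_tool': False}
```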
4. Testing the Chatbot
Now, our chatbot has the capability of real-time searching. Let’s test its performance!
while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break
        # Invoke the LangGraph graph on the user's input
        result = graph.invoke({"messages": [{"role": "user", "content": user_input}]})
        print("Assistant:", result["messages"][-1].content)
    except Exception:
        # Fallback for environments where input() is unavailable
        user_input = "What do you know about LangGraph?"
        print("User:", user_input)
        result = graph.invoke({"messages": [{"role": "user", "content": user_input}]})
        print("Assistant:", result["messages"][-1].content)
        break
You can ask the chatbot various questions, such as “What’s the weather like today?” or “What is LangGraph?” The chatbot will utilize its integrated search engine tool to provide you with the latest answers.
5. Conclusion
Through LangGraph and its tool-calling feature, we successfully gave the chatbot real-time information retrieval capabilities. This not only expands the chatbot's knowledge scope but also enables it to better meet user needs. In practical applications, this enhanced chatbot can be used in many scenarios, such as customer service, educational tutoring, and information inquiries.
LangGraph's capabilities do not end here: it also supports more complex state management and multi-role collaboration. In future articles, we can further explore LangGraph's potential to add more functionality to chatbots, such as memory and multi-turn dialogue, making them smarter and more human-like. We hope this article helps you better understand the fundamentals of LangGraph and how to enhance chatbots with tools. If you are interested in LangGraph, feel free to try it out yourself and explore its many possibilities!