LangGraph is a library developed by LangChainAI for building workflows for agents and multi-agent systems. Its core advantages are cycles, controllability, and persistence, all of which reduce the workload for agent developers. This article highlights the key points and usage patterns of LangGraph from my perspective as a developer.
Basic Introduction
The StateGraph of LangGraph is a state machine that consists of nodes and edges. Nodes are generally predefined functions, while edges connect different nodes to represent the execution order of the graph. In simple terms, the steps to build a workflow using LangGraph are as follows:
1. Initialize models and tools
2. Define the state information of the graph
3. Define graph nodes
4. Define the entry nodes and edge relationships of the graph
5. Compile the graph
6. Execute the graph
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langchain_openai import ChatOpenAI
# Initialize model
llm = ChatOpenAI()
# Define state information of the graph
class State(TypedDict):
    # Messages have the type "list". The `add_messages` function
    # in the annotation defines how this state key should be updated
    # (in this case, it appends messages to the list, rather than overwriting them)
    messages: Annotated[list, add_messages]
# Define graph nodes
def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}
# Define the entry node and edge relationships of the graph
graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", END)
# Compile the graph
graph = graph_builder.compile()
# Execute the graph
user_input = "Introduce yourself"
for event in graph.stream({"messages": [("user", user_input)]}):
    for value in event.values():
        print("Assistant:", value["messages"][-1].content)
Installation
LangGraph and its dependencies can be installed from PyPI.
pip install -U langgraph
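The examples in this article also use the ChatOpenAI model from the langchain-openai integration package, which is installed separately:
pip install -U langchain-openai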
Features
1. Supports Loops and Branching Structures
Using regular edges (add_edge) and conditional edges (add_conditional_edges), loops and branches can be constructed in the graph.
from typing import Literal

def route_tools(
    state: State,
) -> Literal["tools", "__end__"]:
    if isinstance(state, list):
        ai_message = state[-1]
    elif messages := state.get("messages", []):
        ai_message = messages[-1]
    else:
        raise ValueError(f"No messages found in input state to tool_edge: {state}")
    if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
        return "tools"
    return END
# The `route_tools` function returns "tools" if the chatbot asks to use a tool,
# and END if it can respond directly. This conditional routing defines the main agent loop.
graph_builder.add_conditional_edges(
    "chatbot",
    route_tools,
    {"tools": "tools", END: END},
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile()
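Note that the loop above routes to a node named "tools" that the snippet never defines. As a minimal sketch (not from the original article), it could be registered before compiling using LangGraph's prebuilt ToolNode; here my_search_tool is a hypothetical stand-in for any LangChain tool:
from langgraph.prebuilt import ToolNode

tools = [my_search_tool]  # hypothetical placeholder tool list
graph_builder.add_node("tools", ToolNode(tools))
# Bind the same tools to the model so its replies can carry tool_calls
llm_with_tools = llm.bind_tools(tools)
The chatbot node would then invoke llm_with_tools instead of the bare llm.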
2. Node State Persistence
The core of LangGraph lies in its implementation of graph state persistence, which enables multi-turn conversations without requiring users to retain historical dialogue themselves. It also lets users interrupt a workflow at any point, modify the state of the graph, and resume execution from a breakpoint.
from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()
# Just add the checkpointer parameter when compiling the graph
graph = graph_builder.compile(checkpointer=memory)
The following uses the graph structure introduced earlier to illustrate the difference before and after adding state persistence.
● Without state persistence
user_input = "Hello, I am Xiao Wang. Can you introduce yourself to me?"
config = {"configurable": {"thread_id": "1"}}
events = graph.stream(
    {"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================ Human Message =============
Hello, I am Xiao Wang. Can you introduce yourself to me?
================= Ai Message ================
Hello, Xiao Wang! I am an AI assistant named ChatGLM, developed on the GLM language model jointly trained by Tsinghua University's KEG laboratory and Zhipu AI in 2024. My task is to provide appropriate responses and support for user questions and requests. How can I help you?
user_input = "Do you remember my name?"
config = {"configurable": {"thread_id": "1"}}
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================ Human Message ============
Do you remember my name?
================ Ai Message ================
I'm sorry, as an AI, I do not have the ability to remember personal user information, including their names. Each time I interact with you, I will start as a brand new conversation. If you wish, you can tell me your name again.
● After enabling state persistence
# Just add the checkpointer parameter when compiling the graph
graph = graph_builder.compile(
    checkpointer=memory
)
================ Human Message =============
Do you remember my name?
================= Ai Message ================
Of course I remember, your name is Xiao Wang. If you have any questions or need help, feel free to let me know.
MemorySaver keeps all states in process memory, so they grow without bound and are lost when the process exits. To address this, LangGraph also supports persisting state to a database.
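For example, checkpoints can be written to SQLite instead of memory. A minimal sketch, assuming the separately installed langgraph-checkpoint-sqlite package (the exact API may differ across versions):
import sqlite3
from langgraph.checkpoint.sqlite import SqliteSaver

# Persist checkpoints to a local SQLite file rather than process memory
conn = sqlite3.connect("checkpoints.sqlite", check_same_thread=False)
memory = SqliteSaver(conn)
graph = graph_builder.compile(checkpointer=memory)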
3. Interruption and Human Intervention
The interrupt_before and interrupt_after parameters allow setting breakpoints before or after node execution, pausing the graph until the user verifies the state and resumes execution.
# This example uses a different graph state: the raw query and its category
class State(TypedDict):
    query: str
    category: str

def classify_condition(state):
    category = state["category"]
    if category == "chitchat":
        return "chitchat"
    return "tools"

# Not defined in the original snippet: a node that writes the query's
# category ("chitchat" or "tools") into the state, e.g. via an LLM prompt
def classify(state: State):
    return {"category": "tools"}  # placeholder classification logic

def chatbot(state: State):
    print('Chitchat mode')
    query = state["query"]
    # `client` is the chat client initialized earlier (e.g., a ZhipuAI client)
    response = client.chat(query).choices[-1].message.content
    print(response)

def tools(state):
    print('Using tools')

graph_builder = StateGraph(State)
graph_builder.add_node("classify", classify)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", tools)
graph_builder.add_edge(START, "classify")
graph_builder.add_conditional_edges(
    "classify",
    classify_condition,
    {"chitchat": "chatbot", "tools": "tools"}
)
graph_builder.add_edge("chatbot", END)
graph_builder.add_edge("tools", END)
graph = graph_builder.compile(
    checkpointer=MemorySaver(),
    interrupt_after=['classify']
)

● 1. Default Case
user_input = "What is the Douban rating of The Lord of the Rings"
config = {"configurable": {"thread_id": "1"}}
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"query": user_input}, config, stream_mode="values"
)
for event in events:
    print(event)
This question will be considered as requiring a tool call to answer.
snapshot = graph.get_state(config)
snapshot.next
('tools',)
Continuing the execution of the graph, we can see that a simulated function is called.
events = graph.stream(
    None, config, stream_mode="values"
)
for event in events:
    print(event)
Using tools
● 2. Human Intervention
user_input = "What is the Douban rating of The Lord of the Rings"
config = {"configurable": {"thread_id": "2"}}
# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"query": user_input}, config, stream_mode="values"
)
for event in events:
    print(event)
snapshot = graph.get_state(config)
snapshot.next
('tools',)
At this point, modify the state of the graph, forcing the category to "chitchat".
graph.update_state(
    config,
    {"category": "chitchat"},
)
You can see that the next node is now "chatbot", i.e. the chitchat branch, indicating that the human intervention succeeded.
snapshot = graph.get_state(config)
snapshot.next
('chatbot',)
The chitchat branch answers with the model's own knowledge, so you can see the following output.
events = graph.stream(
    None, config, stream_mode="values"
)
for event in events:
    print(event)
The Douban ratings for The Lord of the Rings series are as follows:
The Lord of the Rings: The Fellowship of the Ring has a Douban rating of 9.1.
The Lord of the Rings: The Two Towers has a Douban rating of 9.2.
The Lord of the Rings: The Return of the King has a Douban rating of 9.3.
These three films have received very high ratings on Douban and are among the highest-rated films on the platform.
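Because every checkpoint of a thread is saved, the execution history can also be replayed afterwards. A minimal sketch using graph.get_state_history (checkpoint metadata fields may vary across LangGraph versions):
# Iterate over the saved checkpoints of this thread, newest first
for state in graph.get_state_history(config):
    print(state.next, state.values)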
Conclusion
I believe the biggest highlight of LangGraph is state persistence: developers can focus on constructing the graph without worrying about saving the intermediate states of the agent workflow. It also lets users and developers intervene in the workflow at specified points, modify the state of the graph, or interrupt execution entirely, making the workflow far more controllable.