1. What Is LangGraph
Why Use LangGraph?
Developing agents directly with frameworks like LangChain can involve significant development effort, limited flexibility, and high modification costs. Reducing development effort while increasing flexibility is a key factor in making agents practical to adopt.
LangGraph supports production-grade agents and is trusted by companies such as LinkedIn, Uber, Klarna, and GitLab. LangGraph provides fine-grained control over the flow and state of agent applications. It implements a central persistence layer that supports features common to most agent architectures:
- Memory: LangGraph can persist any aspect of the application state, supporting memory both within a single interaction and across interactions for dialogues and other updates.
- Human-in-the-loop: Because state is checkpointed, execution can be paused and resumed, allowing human input to drive decision-making, verification, and correction at critical stages.
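The following is a minimal sketch of how these two features fit together, assuming an in-memory MemorySaver checkpointer and a single placeholder node; the node and state names are illustrative, not from any particular application:

from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver

class State(TypedDict):
    question: str
    answer: str

def model(state: State) -> dict:
    # Placeholder node standing in for an LLM call
    return {"answer": f"Echo: {state['question']}"}

builder = StateGraph(State)
builder.add_node("model", model)
builder.add_edge(START, "model")
builder.add_edge("model", END)

# The checkpointer persists state per thread_id (memory across invocations);
# interrupt_before pauses the run so a human can inspect or correct it.
app = builder.compile(checkpointer=MemorySaver(), interrupt_before=["model"])

config = {"configurable": {"thread_id": "demo"}}
app.invoke({"question": "hello"}, config)  # pauses before the "model" node
app.invoke(None, config)                   # after human review, resume from the checkpoint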
What Is LangGraph?
LangGraph is a library for building stateful, multi-actor applications with LLMs, designed for creating agent and multi-agent workflows. See the official getting-started tutorial for an introduction.
LangGraph is inspired by Pregel and Apache Beam. The public interface is inspired by NetworkX. LangGraph is built by LangChain Inc, the creator of LangChain, but can be used without LangChain.

Core Components of LangGraph
The core components of LangGraph include Graphs, State, Nodes, Edges, Send, and Checkpointers. Its main strengths are controllability, persistence, human-in-the-loop support, and a prebuilt ReAct agent.
State
from langgraph.graph import StateGraph
from typing import TypedDict, List, Annotated
import operator

class State(TypedDict):
    input: str
    all_actions: Annotated[List[str], operator.add]

graph = StateGraph(State)
Node
# Add nodes
graph.add_node("model", model)
graph.add_node("tools", tool_executor)
Edges
Entry Edge: Connects the starting point of the graph to a specific node, making that node the first to be called when input is passed to the graph. The pseudocode is as follows:
graph.set_entry_point("model")
Normal Edge: These edges mean one node should always be called after another. For example, in a basic agent runtime we typically want the model to be called after the tools are invoked:
graph.add_edge("tools", "model")
Conditional Edge: These edges use a function (often driven by an LLM) to decide which node to call next. Creating one takes three parameters, as shown in the sketch after this list:
- Upstream node: the output of this node is inspected to decide what should happen next.
- A routing function: called to determine which node to call next; it should return a string.
- A mapping: maps the routing function's return value to a node name. The keys are the possible values the function might return, and the values are the names of the nodes to go to.
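Putting the three parameters together, a conditional edge might look like the following sketch; should_continue and its stopping rule are assumptions made for illustration:

from langgraph.graph import END

def should_continue(state: State) -> str:
    # Inspect the upstream node's output recorded in state and return a routing key
    return "continue" if len(state["all_actions"]) < 3 else "end"

graph.add_conditional_edges(
    "model",                             # upstream node whose output is inspected
    should_continue,                     # routing function returning a string
    {"continue": "tools", "end": END},   # mapping from return value to next node
)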
Compile
The defined state graph is compiled into a runnable application, analogous to compilation in a programming language.
app = graph.compile()
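Once compiled, app is a runnable that can be invoked or streamed directly; a small usage sketch, assuming the placeholder nodes sketched above:

result = app.invoke({"input": "hello", "all_actions": []})
print(result["all_actions"])  # actions accumulated by the nodes during the run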

ReAct Agent Executor
from typing import TypedDict, Annotated, List, Union
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.messages import BaseMessage
import operator
class AgentState(TypedDict):
    input: str
    chat_history: list[BaseMessage]
    agent_outcome: Union[AgentAction, AgentFinish, None]
    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]
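This AgentState is the hand-rolled route; LangGraph also ships a prebuilt ReAct agent in langgraph.prebuilt, so a rough sketch, assuming a chat model and a list of tools are already defined, would be:

from langgraph.prebuilt import create_react_agent

# model is a chat model and tools is a list of tools, both assumed to exist elsewhere
agent = create_react_agent(model, tools)
result = agent.invoke({"messages": [("user", "What is LangGraph?")]})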
LangGraph Examples
- Chatbots, multi-agent systems, and planning agents
2. Typical Application Scenarios
- ReAct Architecture Agents: Complete tasks through iterative Reasoning-Acting-Observing steps, such as agents that combine Google search and DALL-E to generate images.
- Multi-agent Systems: Build collaborative networks, such as one agent generating code while another tests it and feeds back errors, forming a self-correcting loop (see the sketch after this list).
- Long-term Task Handling: Support interruption and recovery, suitable for scenarios requiring human intervention, such as data analysis and automated processes.
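To make the self-correcting loop concrete, here is a minimal sketch of a coder/reviewer pair; the node bodies are placeholders standing in for real LLM and test-runner calls:

from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class ReviewState(TypedDict):
    code: str
    feedback: str
    approved: bool

def coder(state: ReviewState) -> dict:
    # Placeholder: a real node would call an LLM and use state["feedback"] to revise
    return {"code": "def add(a, b):\n    return a + b"}

def reviewer(state: ReviewState) -> dict:
    # Placeholder: a real node would run tests or ask an LLM to critique state["code"]
    return {"approved": True, "feedback": "tests pass"}

builder = StateGraph(ReviewState)
builder.add_node("coder", coder)
builder.add_node("reviewer", reviewer)
builder.add_edge(START, "coder")
builder.add_edge("coder", "reviewer")
# Loop back to the coder until the reviewer approves
builder.add_conditional_edges(
    "reviewer",
    lambda s: "done" if s["approved"] else "revise",
    {"revise": "coder", "done": END},
)
app = builder.compile()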
3. Comparison with Traditional LangChain Agents
Feature | LangChain Agent | LangGraph |
---|---|---|
Control Flow | Implicit loop inside the agent executor, hard to customize | Explicit graph of nodes and edges with fine-grained control over each step |
Reliability | No built-in state persistence; runs are hard to pause, inspect, or correct | Checkpointed state supports resumption, retries, and human-in-the-loop review |
Applicable Scenarios | Simple, linear tool-calling tasks | Iterative, multi-agent, or long-running workflows that need human intervention |
4. Quick Start Example
Quickly assemble a ReAct-style agent using prebuilt building blocks such as ToolNode:
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode

# Define state, tools, and model (run_llm, search_tool, image_generator,
# and decide_next_step are assumed to be defined elsewhere)
class State(TypedDict):
    messages: Annotated[list, add_messages]

graph = StateGraph(State)
graph.add_node("agent", run_llm)
graph.add_node("tools", ToolNode([search_tool, image_generator]))
# Set edges and conditions
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", decide_next_step, {"tools": "tools", "end": END})
graph.add_edge("tools", "agent")
app = graph.compile()
# Execute task
response = app.invoke({"messages": [("user", "Generate an image of a snowy mountain lake")]})
Conclusion
LangGraph addresses the shortcomings of traditional Agent frameworks in complex process control, making it particularly suitable for scenarios requiring iteration, multi-role collaboration, or human intervention. Its design philosophy strikes a balance between controllability, reliability, and flexibility.