
- values: This will stream the full value of the state after each step of the graph.
- updates: This will stream the updates made to the state after each step of the graph. If multiple updates are made in the same step (e.g., multiple nodes run), these updates are streamed separately.
- custom: This will stream custom data emitted from within the graph nodes.
- messages: This will stream LLM tokens and metadata for graph nodes that call an LLM.
- debug: This will stream as much information as possible throughout the execution of the graph.
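As a minimal sketch of the difference between the two most common modes (assuming an already-compiled graph named graph and an inputs dict, both hypothetical here):

# stream_mode="values": each chunk is the full state after a step.
for chunk in graph.stream(inputs, stream_mode="values"):
    print(chunk)

# stream_mode="updates": each chunk maps a node name to the state keys it wrote.
for chunk in graph.stream(inputs, stream_mode="updates"):
    print(chunk)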
Multiple modes can be combined by passing a list:

graph.stream(..., stream_mode=["updates", "messages"])

...
('messages', (AIMessageChunk(content='Hi'), {'langgraph_step': 3, 'langgraph_node': 'agent', ...}))
...
('updates', {'agent': {'messages': [AIMessage(content="Hi, how can I help you?")]}})
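As the output above shows, when a list of modes is passed, each streamed chunk arrives as a (mode, data) tuple, so a consumer can branch on the mode. A sketch under the same assumptions as before:

for mode, data in graph.stream(inputs, stream_mode=["updates", "messages"]):
    if mode == "messages":
        message_chunk, metadata = data   # token chunk plus the emitting node's metadata
        print(message_chunk.content, end="")
    elif mode == "updates":
        print(data)                      # {node_name: {state_key: new_value}}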

Each event emitted by the astream_events API is a dictionary with the following fields:

- event: the type of event being emitted. A detailed table of all callback events and triggers can be found here: https://python.langchain.com/docs/concepts/#callback-events
- name: the name of the event
- data: the data associated with the event
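A minimal sketch of dispatching on these fields (trace is a hypothetical helper name; app is a compiled graph):

async def trace(app, inputs):
    async for event in app.astream_events(inputs, version="v2"):
        # 'event' identifies the callback type; 'name' identifies the emitter.
        if event["event"] == "on_chain_start":
            print(f"{event['name']} started")
        elif event["event"] == "on_chain_end":
            print(f"{event['name']} finished")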
Events are emitted at three levels:

- Each node (runnable) emits on_chain_start when it begins execution, on_chain_stream during execution, and on_chain_end when it completes. Node events include the node name in the event's name field.
- The graph itself emits on_chain_start at the start of graph execution, on_chain_stream after each node executes, and on_chain_end when the graph completes. Graph events include LangGraph in the event's name field.
- Any write to a state channel (i.e., any time the value of a state key is updated) emits on_chain_start and on_chain_end events; these appear as the _write entries in the trace below.
from langchain_ollama import ChatOllama
import base_conf
from langgraph.graph import StateGraph, MessagesState, START, END
import asyncio
from langchain_core.messages import HumanMessage

model = ChatOllama(base_url=base_conf.base_url,
                   model=base_conf.model_name,
                   temperature=base_conf.temperature)

# A single node that calls the chat model with the accumulated messages.
def call_model(state: MessagesState):
    response = model.invoke(state['messages'])
    return {"messages": response}

workflow = StateGraph(MessagesState)
workflow.add_node(call_model)
workflow.add_edge(START, "call_model")
workflow.add_edge("call_model", END)
app = workflow.compile()

# Iterate over the structured event stream, printing each event's type and emitter.
async def run(app):
    async for event in app.astream_events({"messages": [HumanMessage("Hello")]}, version="v2"):
        kind = event["event"]
        print(f"{kind}: {event['name']}")

asyncio.run(run(app))

Running this prints the following event trace:
on_chain_start: LangGraph
on_chain_start: __start__
on_chain_start: _write
on_chain_end: _write
on_chain_start: _write
on_chain_end: _write
on_chain_start: _write
on_chain_end: _write
on_chain_stream: __start__
on_chain_end: __start__
on_chain_start: call_model
on_chat_model_start: ChatOllama
on_chat_model_stream: ChatOllama
on_chat_model_stream: ChatOllama
on_chat_model_stream: ChatOllama
on_chat_model_stream: ChatOllama
on_chat_model_stream: ChatOllama
on_chat_model_end: ChatOllama
on_chain_start: _write
on_chain_end: _write
on_chain_stream: call_model
on_chain_end: call_model
on_chain_stream: LangGraph
on_chain_end: LangGraph
An individual event is a dictionary like the following (this sample on_chat_model_stream event comes from a run using ChatOpenAI):
{
    'event': 'on_chat_model_stream',
    'name': 'ChatOpenAI',
    'run_id': '3fdbf494-acce-402e-9b50-4eab46403859',
    'tags': ['seq:step:1'],
    'metadata': {
        'langgraph_step': 1,
        'langgraph_node': 'call_model',
        'langgraph_triggers': ['start:call_model'],
        'langgraph_task_idx': 0,
        'checkpoint_id': '1ef657a0-0f9d-61b8-bffe-0c39e4f9ad6c',
        'checkpoint_ns': 'call_model',
        'ls_provider': 'openai',
        'ls_model_name': 'gpt-4o-mini',
        'ls_model_type': 'chat',
        'ls_temperature': 0.7
    },
    'data': {'chunk': AIMessageChunk(content='Hello', id='run-3fdbf494-acce-402e-9b50-4eab46403859')},
    'parent_ids': []
}
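Because every event carries its metadata, token chunks can be filtered down to a single graph node. A sketch reusing the app and node name from the example above (stream_tokens is a hypothetical helper name):

async def stream_tokens(app, inputs, node="call_model"):
    # Keep only token chunks produced inside the given graph node.
    async for event in app.astream_events(inputs, version="v2"):
        if (event["event"] == "on_chat_model_stream"
                and event["metadata"].get("langgraph_node") == node):
            print(event["data"]["chunk"].content, end="", flush=True)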