How to Update Graph State via Tool
A common use case is to update the graph state from within the tool. For example, in a customer support application, you may want to look up the customer account or ID at the start of the conversation.
To update the graph state from the tool, you can return Command(update={"my_custom_key": "foo", "messages": [...]}) from the tool:
@tool
def lookup_user_info(tool_call_id: Annotated[str, InjectedToolCallId], config: RunnableConfig):
    """Use this tool to look up user information to better answer their questions."""
    user_info = get_user_info(config)
    return Command(
        update={
            # Update state key
            "user_info": user_info,
            # Update message history
            "messages": [ToolMessage("Successfully retrieved user information", tool_call_id=tool_call_id)]
        }
    )
First, let’s define the tool used to look up user information. We will use a simple implementation that looks up user information using a dictionary:
from typing_extensions import Any, Annotated

from langchain_core.messages import ToolMessage
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import tool
from langchain_core.tools.base import InjectedToolCallId
from langgraph.prebuilt.chat_agent_executor import AgentState
from langgraph.types import Command

USER_INFO = [
    {"user_id": "1", "name": "Zhang San", "location": "Beijing"},
    {"user_id": "2", "name": "Li Si", "location": "Chengdu"},
]
USER_ID_TO_USER_INFO = {info["user_id"]: info for info in USER_INFO}


class State(AgentState):
    # Updated by the tool
    user_info: dict[str, Any]


@tool
def lookup_user_info(
    tool_call_id: Annotated[str, InjectedToolCallId], config: RunnableConfig
):
    """Use this tool to look up user information to better answer their questions."""
    user_id = config.get("configurable", {}).get("user_id")
    if user_id is None:
        raise ValueError("Please provide a user ID")
    if user_id not in USER_ID_TO_USER_INFO:
        raise ValueError(f"User '{user_id}' not found")
    user_info = USER_ID_TO_USER_INFO[user_id]
    return Command(
        update={
            # Update state key
            "user_info": user_info,
            # Update message history
            "messages": [
                ToolMessage(
                    "Successfully retrieved user information",
                    tool_call_id=tool_call_id,
                )
            ],
        }
    )
Now let’s spice things up: after the state is updated from the tool, we will respond differently based on the state values.
To achieve this, let’s define a function that will dynamically construct the system prompt based on the graph state. This function will be called every time the LLM is invoked, and its output will be passed to the LLM:
def state_modifier(state: State):
    user_info = state.get("user_info")
    if user_info is None:
        return state["messages"]
    system_msg = (
        f"The username is {user_info['name']}. The user lives in {user_info['location']}"
    )
    return [{"role": "system", "content": system_msg}] + state["messages"]
from langgraph.prebuilt import create_react_agent
from langchain_ollama import ChatOllama
import base_conf

model = ChatOllama(base_url=base_conf.base_url, model=base_conf.model_name, temperature=0)

agent = create_react_agent(
    model,
    # Pass the tool that can update the state
    [lookup_user_info],
    state_schema=State,
    # Pass the dynamic prompt function
    state_modifier=state_modifier,
)

for chunk in agent.stream(
    {"messages": [("user", "I am a user, what should I do this week?")]},
    # Provide the user ID in the config
    {"configurable": {"user_id": "1"}},
):
    print(chunk)
    print("\n")
{'agent': {'messages': [AIMessage(content='', additional_kwargs={}, response_metadata={'model': 'qwen2.5:7b', 'created_at': '2025-01-08T06:39:47.488317Z', 'done': True, 'done_reason': 'stop', 'total_duration': 3534691667, 'load_duration': 22909042, 'prompt_eval_count': 162, 'prompt_eval_duration': 799000000, 'eval_count': 60, 'eval_duration': 2710000000, 'message': Message(role='assistant', content='', images=None, tool_calls=None)}, id='run-3909aefc-f875-49f6-952e-9f24290f3780-0', tool_calls=[{'name': 'lookup_user_info', 'args': {}, 'id': '17f1dc87-07fd-4261-8c35-f554fbce7fcd', 'type': 'tool_call'}], usage_metadata={'input_tokens': 162, 'output_tokens': 60, 'total_tokens': 222})]}},
{'tools': {'user_info': {'user_id': '1', 'name': 'Zhang San', 'location': 'Beijing'}, 'messages': [ToolMessage(content='Successfully retrieved user information', name='lookup_user_info', id='45207262-0344-41a4-a28d-ce33a4927d44', tool_call_id='17f1dc87-07fd-4261-8c35-f554fbce7fcd')]}}
{'agent': {'messages': [AIMessage(content='Hello! Based on your personal information, you have been learning programming recently. If you like, you can try completing some small projects this week to consolidate what you have learned, or take online courses to further enhance your skills. Additionally, you can read related books or articles to broaden your horizons.
Moreover, you mentioned an interest in photography. If time permits, you can spend your weekends taking outdoor landscape photos to improve your photography skills.
Of course, these suggestions are just for reference; you can adjust them based on your personal interests and actual situation. I hope these suggestions can help you!', additional_kwargs={}, response_metadata={'model': 'qwen2.5:7b', 'created_at': '2025-01-08T06:39:53.329455Z', 'done': True, 'done_reason': 'stop', 'total_duration': 5823573375, 'load_duration': 9685875, 'prompt_eval_count': 194, 'prompt_eval_duration': 854000000, 'eval_count': 108, 'eval_duration': 4957000000, 'message': Message(role='assistant', content='Hello! Based on your personal information, you have been learning programming recently. If you like, you can try completing some small projects this week to consolidate what you have learned, or take online courses to further enhance your skills. Additionally, you can read related books or articles to broaden your horizons.
Moreover, you mentioned an interest in photography. If time permits, you can spend your weekends taking outdoor landscape photos to improve your photography skills.
Of course, these suggestions are just for reference; you can adjust them based on your personal interests and actual situation. I hope these suggestions can help you!', images=None, tool_calls=None)}, id='run-984dfe37-7aa0-4133-bd20-ccf92cef88dc-0', usage_metadata={'input_tokens': 194, 'output_tokens': 108, 'total_tokens': 302})]}}
Of course, you can also try changing {"configurable": {"user_id": "1"}} to {"configurable": {"user_id": "2"}} and see what happens.
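With user ID "2", the tool's dictionary lookup resolves to the second record (Li Si in Chengdu), so the dynamically built system prompt changes accordingly. You can verify the lookup and validation logic in isolation, without langgraph, using a plain-Python mirror of the tool body (the standalone lookup function below is a hypothetical helper for illustration):

```python
# Same lookup data as in the tool definition above
USER_INFO = [
    {"user_id": "1", "name": "Zhang San", "location": "Beijing"},
    {"user_id": "2", "name": "Li Si", "location": "Chengdu"},
]
USER_ID_TO_USER_INFO = {info["user_id"]: info for info in USER_INFO}


def lookup(user_id):
    """Mirror of the tool's validation and lookup, minus the langgraph wrapper."""
    if user_id is None:
        raise ValueError("Please provide a user ID")
    if user_id not in USER_ID_TO_USER_INFO:
        raise ValueError(f"User '{user_id}' not found")
    return USER_ID_TO_USER_INFO[user_id]


print(lookup("2"))  # {'user_id': '2', 'name': 'Li Si', 'location': 'Chengdu'}
```

An unknown ID such as "3" raises ValueError, which the agent surfaces as a tool error rather than silently returning nothing.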
As you can see, our tool effectively acts as a node that modifies the state.
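Conceptually, the graph applies the Command's update the same way it applies any node's return value: a plain key like user_info overwrites the previous value, while messages goes through a reducer and is appended to the history. The following is a simplified, dependency-free sketch of that merge semantics (the apply_update function is a hypothetical stand-in, not the actual langgraph internals):

```python
def apply_update(state, update):
    """Simplified state merge: 'messages' is appended (reducer-style),
    every other key overwrites the previous value."""
    new_state = dict(state)
    for key, value in update.items():
        if key == "messages":
            new_state["messages"] = state.get("messages", []) + value
        else:
            new_state[key] = value
    return new_state


state = {"messages": ["user: I am a user, what should I do this week?"]}
state = apply_update(state, {
    "user_info": {"user_id": "1", "name": "Zhang San", "location": "Beijing"},
    "messages": ["tool: Successfully retrieved user information"],
})
print(state["user_info"]["name"])  # Zhang San
print(len(state["messages"]))      # 2
```

This is why the second stream chunk above shows both the new user_info value and a ToolMessage appended to the history.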