Mastering LangGraph Tools: A Comprehensive Guide

Sometimes we want the LLM that calls the tools to fill in only a subset of a tool's parameters, while the application supplies the remaining values at runtime.
If you are using LangChain-style tools, a simple way to handle this is to annotate those function parameters with InjectedToolArg. The annotation hides the parameter from the LLM.
In a LangGraph application, we might want to pass the graph state or shared memory (storage) to the tool at runtime.
This type of stateful tool is particularly useful when the tool’s output is influenced by previous agent steps (for example, if we use a sub-agent as a tool and want to pass message history to the sub-agent), or when the tool’s input needs to be validated based on the context of past agent steps.
Below, we will demonstrate how to achieve this using LangGraph’s pre-built ToolNode.
The core of the following example is annotating the parameters as “injected”, meaning they will be injected by the program and should not be seen or filled by the LLM. Let the following code snippet serve as a tl;dr:
from typing import Annotated
from langchain_core.runnables import RunnableConfig
from langchain_core.tools import InjectedToolArg
from langgraph.store.base import BaseStore
from langgraph.prebuilt import InjectedState, InjectedStore

# Can be synchronous or asynchronous; no need for @tool decorator
async def my_tool(
    # These parameters are filled by the LLM
    some_arg: str,
    another_arg: float,
    # config: RunnableConfig is always available in LangChain calls
    # This will not be exposed to the LLM
    config: RunnableConfig,
    # The following three are specific to the pre-built ToolNode
    # (and the `create_react_agent` extension). If you call tools in your own nodes,
    # then you need to provide these yourself.
    store: Annotated[BaseStore, InjectedStore],
    # This passes the full graph state (`State` is the state schema, defined below).
    state: Annotated[State, InjectedState],
    # You can also inject individual fields from the state:
    messages: Annotated[list, InjectedState("messages")],
    # The following is incompatible with create_react_agent or ToolNode,
    # but useful if you call tools/functions in your own nodes: it hides the
    # parameter from the model, and you must provide the value manually.
    # some_other_arg: Annotated["MyPrivateClass", InjectedToolArg],
):
    """Call my_tool to have an impact on the real world.
        Parameters:
        some_arg: A very important parameter
        another_arg: Another parameter provided by the LLM
    """
    # The docstring becomes the tool's description and is passed to the model
    print(some_arg, another_arg, config, store, state, messages)
    # Config, some_other_arg, store, and state are "hidden" from the LangChain model when passed to bind_tools or with_structured_output
    return "... some response"
Passing graph state to the tool
First, let’s look at how to allow our tool to access the graph state. We need to define the graph state:
from typing import List

# This is the state schema used by the prebuilt create_react_agent we'll be using below
from langgraph.prebuilt.chat_agent_executor import AgentState
from langchain_core.documents import Document

class State(AgentState):
    docs: List[str]
Defining tools
We want our tool to take the graph state as input, but we do not want the model to attempt to generate this input when calling the tool (as mentioned above, we want the LLM to ignore this input when constructing parameters).
We can use the InjectedState annotation to mark parameters as required graph state (or certain fields of the graph state). These parameters will not be generated by the model.
When using ToolNode, the graph state will automatically be passed to the relevant tools and parameters.
In this example, we will create a get_context tool that returns the documents stored in the graph state as context for answering the question.
from typing import List, Tuple
from typing_extensions import Annotated
from langchain_core.messages import ToolMessage
from langchain_core.tools import tool
from langgraph.prebuilt import InjectedState

@tool
def get_context(question: str, state: Annotated[dict, InjectedState]):
    """Get relevant context for answering the question."""
    return "\n\n".join(doc for doc in state["docs"])
If we look at the input_schema of this tool, we will see that state is still listed; this makes sense, because actually invoking the tool still requires that parameter:
print(get_context.get_input_schema().model_json_schema())
{'description': 'Get relevant context for answering the question.', 'properties': {'question': {'title': 'Question', 'type': 'string'}, 'state': {'title': 'State', 'type': 'object'}}, 'required': ['question', 'state'], 'title': 'get_context', 'type': 'object'}

However, if we look at the tool-call schema (i.e., the content passed to the model for tool invocation), the state has been removed because we do not want the LLM to see that it needs to fill these parameters:

print(get_context.tool_call_schema.model_json_schema())
{'description': 'Get relevant context for answering the question.', 'properties': {'question': {'title': 'Question', 'type': 'string'}}, 'required': ['question'], 'title': 'get_context', 'type': 'object'}
Defining the graph
In this example, we will use the pre-built ReAct agent. First, we need to define our model and tool invocation nodes (ToolNode):
from langchain_ollama import ChatOllama
import base_conf
from langgraph.prebuilt import ToolNode, create_react_agent
from langgraph.checkpoint.memory import MemorySaver

model = ChatOllama(base_url=base_conf.base_url, model=base_conf.model_name, temperature=0)
tools = [get_context]
# ToolNode will automatically take care of injecting state into the tool
tool_node = ToolNode(tools)
checkpointer = MemorySaver()
graph = create_react_agent(model, tools, state_schema=State, checkpointer=checkpointer)
docs = [
    "FooBar company just raised 1 Billion dollars!",
    "FooBar company was founded in 2019",
]
inputs = {
    "messages": [{"type": "user", "content": "what's the latest news about FooBar"}],
    "docs": docs,
}
config = {"configurable": {"thread_id": "1"}}
for chunk in graph.stream(inputs, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
================================ Human Message =================================

what's the latest news about FooBar
================================== Ai Message ==================================
Tool Calls:
  get_context (e1eeaa88-b5f4-45ae-abf3-ed7fd861ce66)
 Call ID: e1eeaa88-b5f4-45ae-abf3-ed7fd861ce66
  Args:
    question: latest news about FooBar
================================= Tool Message =================================
Name: get_context

FooBar company just raised 1 Billion dollars!

FooBar company was founded in 2019
================================== Ai Message ==================================

The latest news about FooBar is that the company has just raised 1 Billion dollars! For reference, FooBar was founded in 2019.
As we can see, the ToolNode automatically injected the docs data from the graph state!
Passing shared memory (store) to the graph
You may also want the tool to access memory shared across multiple conversations or users. We can achieve this by passing the LangGraph Store to the tool using a different annotation, InjectedStore.
Let’s modify the example to keep the documents in the memory store and have the get_context tool retrieve them. We will also namespace the documents by user ID, so that certain documents are only visible to specific users; the tool then retrieves the correct set of documents based on the user ID provided in the config.
from langgraph.store.memory import InMemoryStore

doc_store = InMemoryStore()
namespace = ("documents", "1")  # user ID
doc_store.put(
    namespace, "doc_0", {"doc": "FooBar company just raised 1 Billion dollars!"})
namespace = ("documents", "2")  # user ID
doc_store.put(namespace, "doc_1", {"doc": "FooBar company was founded in 2019"})
Defining tools
from langgraph.store.base import BaseStore
from langchain_core.runnables import RunnableConfig
from langgraph.prebuilt import InjectedStore

@tool
def get_context(
    question: str,
    config: RunnableConfig,
    store: Annotated[BaseStore, InjectedStore()],
) -> str:
    """Get relevant context for answering the question."""
    user_id = config.get("configurable", {}).get("user_id")
    docs = [item.value["doc"] for item in store.search(("documents", user_id))]
    return "\n\n".join(doc for doc in docs)
We can verify that the store parameter of the get_context tool is hidden from the tool-calling model:
print(get_context.tool_call_schema.model_json_schema())
{'description': 'Get relevant context for answering the question.', 'properties': {'question': {'title': 'Question', 'type': 'string'}}, 'required': ['question'], 'title': 'get_context', 'type': 'object'}
Defining the graph
tools = [get_context]
# ToolNode will automatically inject the Store into the tools
tool_node = ToolNode(tools)
checkpointer = MemorySaver()
graph = create_react_agent(model, tools, checkpointer=checkpointer, store=doc_store)

messages = [{"type": "user", "content": "what's the latest news about FooBar"}]
config = {"configurable": {"thread_id": "1", "user_id": "1"}}
for chunk in graph.stream({"messages": messages}, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
Similarly, it retrieved the data from the store:
================================ Human Message =================================

what's the latest news about FooBar
================================== Ai Message ==================================
Tool Calls:
  get_context (89b2d78c-6b6d-4c46-a5a7-cee513bff5cb)
 Call ID: 89b2d78c-6b6d-4c46-a5a7-cee513bff5cb
  Args:
    question: latest news about FooBar
================================= Tool Message =================================
Name: get_context

FooBar company just raised 1 Billion dollars!
================================== Ai Message ==================================

The latest news is that FooBar company has just raised 1 Billion dollars!
The tool retrieved the correct documents for user “1” when searching in the store. Now let’s try again for another user:
messages = [{"type": "user", "content": "what's the latest news about FooBar"}]
config = {"configurable": {"thread_id": "2", "user_id": "2"}}
for chunk in graph.stream({"messages": messages}, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
================================ Human Message =================================

what's the latest news about FooBar
================================== Ai Message ==================================
Tool Calls:
  get_context (817da555-c13e-4fa1-8bbe-3854713fc643)
 Call ID: 817da555-c13e-4fa1-8bbe-3854713fc643
  Args:
    question: latest news about FooBar
================================= Tool Message =================================
Name: get_context

FooBar company was founded in 2019
================================== Ai Message ==================================

The information I have currently states that FooBar company was founded in 2019. However, this doesn't provide the latest news. Could you please specify a date range or give me some more time to fetch the most recent updates?
As we can see, it retrieved the document content for user ID = 2!
