How to Interact with Agents Using LangGraph

LangGraph provides a hands-on approach to how humans can interact with agents.

`LangGraph` is a library for building stateful, multi-actor applications with LLMs, designed for creating agent and multi-agent workflows. Compared with other LLM frameworks, it offers three core advantages: cycles, controllability, and persistence. `LangGraph` lets you define flows that contain cycles, which most agent architectures require, distinguishing it from DAG-based solutions. As a very low-level framework, it gives you fine-grained control over both the flow and the state of your application, which is crucial for building reliable agents. It also includes built-in persistence, enabling advanced human-in-the-loop and memory features. Below, we walk through how to implement human-agent interaction with `LangGraph`.

1. Define Environment and Graph

1. Set up the environment and import the required dependencies

```python
from dotenv import load_dotenv
load_dotenv()

from typing import Annotated
from typing_extensions import TypedDict

from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
```

2. Define the state (which holds the message list) and create the state graph

```python
class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)
```
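
The `add_messages` annotation tells LangGraph to merge each node's output into the state by appending to the message list rather than overwriting it. Conceptually, the reducer behaves like the simplified sketch below (`merge_messages` is an illustrative name, not the library function; the real `add_messages` also matches message IDs to support in-place updates):

```python
def merge_messages(existing: list, new: list) -> list:
    # Simplified stand-in for LangGraph's add_messages reducer:
    # node outputs are appended to the running conversation history.
    return list(existing) + list(new)

state = {"messages": [("user", "Hi")]}
update = {"messages": [("assistant", "Hello!")]}
state["messages"] = merge_messages(state["messages"], update["messages"])
# state["messages"] now holds both the user and the assistant turns
```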

2. Pausing the Agent

1. Define a search tool using `TavilySearchResults`; parameters such as `max_results` can be adjusted as needed:

```python
tool = TavilySearchResults(max_results=2)
tools = [tool]
```

2. Initialize the chat model and bind the tools to it:

```python
llm = ChatOpenAI(model="gpt-4o-mini")
llm_with_tools = llm.bind_tools(tools)
```

3. Define a function that calls the model with the current messages and returns the result:

```python
def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}
```

4. Add the model-calling function as a node

```python
graph_builder.add_node("chatbot", chatbot)
```

5. Add a tool-execution node

```python
tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)
```

6. Add conditional edges starting from `chatbot`, with `tools_condition` as the routing function that decides which node runs next: either the tool node or the `END` node. In other words, whenever the `chatbot` node runs, if it requested a tool call, the graph goes to `tools`; if it responded directly, the run ends.

```python
graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
```
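
Conceptually, `tools_condition` just inspects the last message in the state: if the model requested tool calls, it routes to the tools node, otherwise to `END`. A simplified, library-free sketch (`route` and the plain-dict messages are illustrative, not the actual API):

```python
END = "__end__"  # sentinel value, mirroring langgraph.graph.END

def route(state: dict) -> str:
    # Route on whether the last AI message requested tool calls,
    # mimicking the behavior of langgraph.prebuilt.tools_condition.
    last = state["messages"][-1]
    if last.get("tool_calls"):
        return "tools"
    return END

print(route({"messages": [{"content": "hi", "tool_calls": []}]}))  # __end__
print(route({"messages": [{"tool_calls": [{"name": "search"}]}]}))  # tools
```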

7. Add an edge from `tools` back to `chatbot` (the model-calling node)

```python
graph_builder.add_edge("tools", "chatbot")
```

8. Add an edge from `START` to `chatbot` (the model-calling node)

```python
graph_builder.add_edge(START, "chatbot")
```

9. We use an in-memory checkpointer, which keeps everything in memory. In a production application, you would likely switch to `SqliteSaver` or `PostgresSaver` and connect to your own database.

```python
memory = MemorySaver()
```

10. Now compile the graph, passing `interrupt_before=["tools"]` so execution pauses before the `tools` node.

```python
graph = graph_builder.compile(
    checkpointer=memory,
    interrupt_before=["tools"],
)
```

11. Next, start calling our chatbot.

```python
user_input = "I'm learning LangGraph. Could you do some research on it for me?"
config = {"configurable": {"thread_id": "2"}}
events = graph.stream(
    {"messages": [("user", user_input)]}, config, stream_mode="values"
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
```

Did the graph pause? Check the graph state to confirm:

```python
snapshot = graph.get_state(config)
print(f"snapshot.next: {snapshot.next}")
```

The snapshot's `next` node is `tools`, so the graph was indeed interrupted before the tool call. Let's inspect the pending tool calls:

```python
existing_message = snapshot.values["messages"][-1]
print(f"existing_message.tool_calls: {existing_message.tool_calls}")
```

To keep the graph running, pass `None` as the input: the graph resumes from the interruption without adding anything new to the state.

```python
events = graph.stream(None, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
```
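
The pause-and-resume mechanics can be sketched without the library: a runner walks the node sequence, stops before any node listed in `interrupt_before`, records where it stopped, and when called again with `None` picks up from that point. This is an illustrative simplification (`TinyGraph` is a made-up name), not how LangGraph is actually implemented:

```python
class TinyGraph:
    """Toy runner illustrating interrupt_before plus resume-with-None."""

    def __init__(self, nodes, interrupt_before):
        self.nodes = nodes                    # ordered (name, fn) pairs
        self.interrupt_before = set(interrupt_before)
        self.next_index = 0                   # checkpoint: node to run next
        self.paused = False

    def stream(self, new_input):
        if new_input is not None:             # fresh input: start from the top
            self.next_index = 0
        resuming = new_input is None and self.paused
        self.paused = False
        out = []
        while self.next_index < len(self.nodes):
            name, fn = self.nodes[self.next_index]
            # Pause before an interrupt_before node, unless we are resuming.
            if name in self.interrupt_before and not resuming:
                self.paused = True
                return out
            resuming = False                  # only skip the check once
            out.append(fn())
            self.next_index += 1
        return out

g = TinyGraph(
    nodes=[("chatbot", lambda: "draft answer"), ("tools", lambda: "search results")],
    interrupt_before=["tools"],
)
first = g.stream("question")  # runs chatbot, then pauses before tools
second = g.stream(None)       # resumes: executes the tools node
```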

3. Summary

This is the first step toward controlling the agent's state graph: we added a breakpoint with `interrupt_before` so a human can supervise and intervene when needed. Because the graph is checkpointed, it can pause indefinitely and resume at any time, as if nothing happened.

Next, we will explore how custom state updates can further customize the chatbot's behavior.
