Mastering LangGraph: Human-in-the-Loop

The human-in-the-loop feature lets us involve a user in the graph's decision-making process. The following guide demonstrates how to implement human-in-the-loop workflows in a graph.
A human-in-the-loop workflow integrates user input into an automated process, allowing for decision-making, validation, or correction at critical stages.
This is particularly useful in LLM-based applications, as the underlying model may occasionally produce inaccuracies (hallucinations). In low-error-tolerance scenarios such as compliance, decision-making, or content generation, human involvement ensures reliability by reviewing, correcting, or overriding model outputs.
The main use cases for human-in-the-loop workflows in LLM-based applications include:
🛠️ Review tool calls: Humans can review, edit, or approve tool calls requested by the LLM before execution.
✅ Validate LLM outputs: Humans can review, edit, or approve content generated by the LLM.
💡 Provide context: Allow the LLM to explicitly request human input for clarification or additional details, or to support multi-turn conversations.
The interrupt Function
The interrupt function in LangGraph enables human-in-the-loop workflows by pausing the graph at specific nodes, presenting information to a human, and resuming the graph based on their input.
The interrupt function is used in conjunction with the Command object, which resumes graph execution with the value the human provides. Command is also an alternative way to implement conditional edges. Below is a core code example:
from typing import TypedDict
import uuid
from langgraph.checkpoint.memory import MemorySaver
from langgraph.constants import START
from langgraph.graph import StateGraph
from langgraph.types import interrupt, Command
class State(TypedDict):
    some_text: str
def human_node(state: State):
    value = interrupt(
        # Any value serializable to JSON to present to humans
        # For example, a question or a piece of text or a set of keys in state
        {
            "text_to_revise": state["some_text"]
        }
    )
    return {
        # Update state using human input
        "some_text": value
    }
# Build the graph
graph_builder = StateGraph(State)
# Add human node to the graph
graph_builder.add_node("human_node", human_node)
graph_builder.add_edge(START, "human_node")
# `interrupt` requires a checkpoint
checkpointer = MemorySaver()
graph = graph_builder.compile(
    checkpointer=checkpointer
)
# Pass a thread ID to the graph to run it
thread_config = {"configurable": {"thread_id": uuid.uuid4()}}
# Use `stream()` to directly display `__interrupt__` information
for chunk in graph.stream({"some_text": "Original text"}, config=thread_config):
    print(chunk)
# Use Command to resume
for chunk in graph.stream(Command(resume="Edited text"), config=thread_config):
    print(chunk)
{'__interrupt__': (Interrupt(value={'text_to_revise': 'Original text'}, resumable=True, ns=['human_node:1cee88f8-54f0-5c9e-a5b9-f67ae9b81448'], when='during'),)}
{'human_node': {'some_text': 'Edited text'}}
⚠️: Interrupts are both powerful and ergonomic. However, while they may resemble Python's input() function from a developer-experience perspective, they do not automatically resume execution from the interrupt point.
Instead, they re-run the entire node containing the interrupt. It is therefore generally best to place interrupts at the start of a node or in a dedicated node.
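This re-run behavior can be sketched in plain Python. The snippet below is a toy model, not LangGraph's actual internals (PauseSignal, run_node, and the interrupt closure are illustrative names): on every resume the node function runs again from the top, and interrupt() returns a recorded answer instead of pausing once one is available.

```python
class PauseSignal(Exception):
    """Stands in for LangGraph pausing the graph at an interrupt."""

def run_node(node, resume_values):
    """Re-run `node` from the top; each interrupt() call consumes the
    next recorded resume value, or pauses when none is available."""
    it = iter(resume_values)

    def interrupt(payload=None):
        try:
            return next(it)      # replay a previously provided answer
        except StopIteration:
            raise PauseSignal()  # no answer yet: pause the "graph" here

    try:
        return node(interrupt)
    except PauseSignal:
        return "__paused__"

def human_node(interrupt):
    print("entered human_node")          # re-executed on every resume
    value = interrupt({"text_to_revise": "Original text"})
    return {"some_text": value}

print(run_node(human_node, []))               # first run: pauses
print(run_node(human_node, ["Edited text"]))  # resume: node re-runs fully
```

Note that "entered human_node" prints twice: the node body before the interrupt is not skipped on resume.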
Prerequisites
To use interrupt in the graph, the following conditions must be met:
  • Specify a checkpoint to save the state of the graph after each step.
  • Call interrupt() at appropriate locations.
  • Run the graph using a thread ID until an interrupt occurs.
  • Use invoke/ainvoke/stream/astream to resume execution.
Design Patterns
Generally, we can perform three different operations through human-in-the-loop workflows:
  1. Approval or Rejection: Pause the graph before critical steps (e.g., API calls) to review and approve actions. If the action is rejected, you can prevent the graph from executing that step and possibly take alternative actions. This pattern often involves routing the graph based on human input.
  2. Edit graph state: Pause the graph to view and edit its state. This is useful for correcting errors or updating the state with additional information. This pattern typically involves updating the state using human input.
  3. Get Input: Explicitly request human input at specific steps in the graph. This is very useful for gathering additional information or context to inform the agent’s decision-making process or support multi-turn conversations.
Below we showcase different design patterns that can be implemented using these operations.
Approval or Rejection
Based on human approval or rejection, the graph can continue executing operations or take alternative paths.
from typing import Literal
from langgraph.types import interrupt, Command
def human_approval(state: State) -> Command[Literal["some_node", "another_node"]]:
    # Ask a question and interrupt, waiting for human approval
    is_approved = interrupt(
        {
            "question": "Is this correct?",
            # Display output that requires human review and approval
            "llm_output": state["llm_output"]
        }
    )
    # Based on human approval result, return corresponding command
    if is_approved:
        return Command(goto="some_node")
    else:
        return Command(goto="another_node")
# Add the node to the graph at the appropriate position and connect it to related nodes
graph_builder.add_node("human_approval", human_approval)
graph = graph_builder.compile(checkpointer=checkpointer)
# Run the graph and pause on interrupt. Resume the graph by approving or rejecting.
thread_config = {"configurable": {"thread_id": "some_id"}}
graph.invoke(Command(resume=True), config=thread_config)
Review and Edit State
Humans can view and edit the state of the graph. This is useful for correcting errors or updating the state with additional information.
from langgraph.types import interrupt
def human_editing(state: State):
    ...
    result = interrupt(
        # Interrupt information appears on the client.
        # Can be any value serializable to JSON.
        {
            "task": "View the output of the LLM and make necessary edits.",
            "llm_generated_summary": state["llm_generated_summary"]
        }
    )
    # Update state with edited text
    return {
        "llm_generated_summary": result["edited_text"]
    }
# Add the node to the graph at the appropriate position
# and connect it to related nodes.
graph_builder.add_node("human_editing", human_editing)
graph = graph_builder.compile(checkpointer=checkpointer)
...
# Run the graph and after triggering interrupt, the graph will pause.
# Resume it with the edited text.
thread_config = {"configurable": {"thread_id": "some_id"}}
graph.invoke(
    Command(resume={"edited_text": "Edited text"}),
    config=thread_config)
Review Tool Calls
Humans can check and edit the output of the LLM before proceeding. This is especially important in applications where tool calls requested by the LLM may be sensitive or require human oversight.
from typing import Literal
from langgraph.types import interrupt, Command

def human_review_node(state) -> Command[Literal["call_llm", "run_tool"]]:
    # Extract the last tool call from the most recent AI message for review
    tool_call = state["messages"][-1].tool_calls[-1]
    # This is the value we will provide via Command(resume=<human_review>)
    human_review = interrupt(
        {
            "question": "Is this correct?",
            # Provide the tool call for review
            "tool_call": tool_call
        }
    )
    review_action, review_data = human_review
    # Approve the tool call and continue
    if review_action == "continue":
        return Command(goto="run_tool")
    # Manually modify the tool call, then continue
    elif review_action == "update":
        ...
        updated_msg = get_updated_msg(review_data)
        # Remember, to modify an existing message, you need to pass a message with a matching ID.
        return Command(goto="run_tool", update={"messages": [updated_msg]})
    # Provide natural-language feedback and route it back to the agent
    elif review_action == "feedback":
        ...
        feedback_msg = get_feedback_msg(review_data)
        return Command(goto="call_llm", update={"messages": [feedback_msg]})
Multi-Turn Conversations
In a multi-turn conversation architecture, agent and human nodes loop back and forth until the agent decides to hand the conversation off to another agent or another part of the system.
Multi-turn conversations involve multiple back-and-forth interactions between the agent and humans, allowing the agent to collect more information from humans in a conversational manner.
This design pattern is very useful in applications composed of multiple agents. One or more agents may need to have multi-turn conversations with humans, where humans provide input or feedback at different stages of the conversation.
For simplicity, the following agent implementation is shown as a single node, but in reality, it may be part of a larger graph composed of multiple nodes and may include conditional edges.
Each agent uses a human node:
In this pattern, each agent has its own human node to collect user input. This can be achieved either by giving each human node a unique name (e.g., "Human for Agent 1", "Human for Agent 2") or by using subgraphs, where each subgraph contains a human node and an agent node.
from langgraph.types import interrupt
def human_input(state: State):
    human_message = interrupt("human_input")
    return {
        "messages": [
            {
                "role": "human",
                "content": human_message
            }
        ]
    }
def agent(state: State):
    # Agent logic
    ...
graph_builder.add_node("human_input", human_input)
graph_builder.add_edge("human_input", "agent")
graph = graph_builder.compile(checkpointer=checkpointer)
# Run the graph and after triggering interrupt, the graph will pause.
# Use human input to resume it.
graph.invoke(
    Command(resume="hello!"),
    config=thread_config)
Share a Single Human Node Across Multiple Agents
In this pattern, a single human node is used to collect user input for multiple agents. The active agent is determined based on the state so that after collecting human input, the graph can route to the correct agent.
from langgraph.types import interrupt
def human_node(state: MessagesState) -> Command[Literal["agent_1", "agent_2", ...]]:
    """Node for collecting user input."""
    user_input = interrupt(value="Ready to receive user input.")
    # Determine the **active agent** from the state,
    # so that after collecting input, it can route to the correct agent.
    # For example, add a field to the state or use the last active agent.
    # Or fill in the `name` attribute of the AI messages generated by the agent.
    active_agent = ... 
    return Command(
        update={
            "messages": [{
                "role": "human",
                "content": user_input,
            }]
        },
        goto=active_agent,
    )
Validate Human Input
Sometimes you want to validate the human's input within the graph itself (rather than on the client). This can be achieved by using multiple interrupt calls within a single node.
from langgraph.types import interrupt
def human_node(state: State):
    """Human node with validation."""
    question = "What is your age?"
    while True:
        answer = interrupt(question)
        # Validate the answer, if invalid, re-input.
        if not isinstance(answer, int) or answer < 0:
            question = f"'{answer}' is not a valid age. What is your age?"
            answer = None
            continue
        else:
            # If the answer is valid, we can continue.
            break
    print(f"The human age in the loop is {answer} years.")
    return {
        "age": answer
    }
The Command Primitive
When the interrupt function is used, the graph pauses at the interrupt and waits for user input. A Command primitive, passed via the invoke, ainvoke, stream, or astream methods, resumes graph execution.
The Command primitive provides several options to control and modify the graph state during resumption:
Pass a value to the interrupt:
Use Command(resume=value) to provide data to the graph, such as the user’s response. Execution resumes from the node where the interrupt occurred, but this time, the interrupt(…) call will return the value passed in Command(resume=value), rather than pausing the graph.
Update the graph state:
Use Command(update=update) to change the values of the graph state. Execution resumes from the node where the interrupt occurred, but with the updated state.
# Update the graph state and resume.
# You must provide a `resume` value if using `interrupt`.
graph.invoke(Command(update={"foo": "bar"}, resume="Let's go!!!"), thread_config)
By leveraging commands, you can resume graph execution, handle user input, and dynamically adjust the state of the graph.
⚠️ How to Resume from an Interrupt?
Resuming from an interrupt is different from Python's input() function, which continues from the exact location where it was called.
A key aspect of using interrupts is understanding how resumption works. When execution resumes after an interrupt, it starts from the beginning of the graph node that triggered the interrupt.
All code from the start of that node up to the interrupt is re-executed (which is why it is recommended to place interrupts at the very start of a node).
counter = 0
def node(state: State):
    # When the graph resumes, all code from the start of the node to the interrupt is re-executed.
    global counter
    counter += 1
    print(f"> Entered the node: {counter} times")
    # Pause the graph and wait for user input.
    answer = interrupt()
    print("The value of counter is:", counter)
    ...
> Entered the node: 2 times
The value of counter is: 2
Common Pitfalls
Side Effects
Place code with side effects (non-idempotent operations) after the interrupt to avoid duplication, as such code will be re-triggered each time the node resumes.
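A toy illustration in plain Python of this pitfall (charge_card and make_interrupt are hypothetical stand-ins, not LangGraph APIs): because the node is re-run from the top on resume, a side effect placed before the interrupt executes twice.

```python
api_calls = []

def charge_card():
    # Pretend this is a non-idempotent external call (e.g., a payment API)
    api_calls.append("charged")

def make_interrupt(resume_value):
    """First pass: raise to pause. Resume pass: return the human's answer."""
    def interrupt(payload=None):
        if resume_value is None:
            raise RuntimeError("paused")
        return resume_value
    return interrupt

def bad_node(interrupt):
    charge_card()                    # side effect BEFORE the interrupt
    interrupt("approve the charge?")

# First run: the node pauses at the interrupt...
try:
    bad_node(make_interrupt(None))
except RuntimeError:
    pass
# ...and on resume, the node is re-run from the top.
bad_node(make_interrupt(True))

print(len(api_calls))  # 2 -- the card was "charged" twice
```

Moving charge_card() after the interrupt call makes it run exactly once, only after approval.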
Subgraph Calls as Functions
When a subgraph is called as a function, the parent graph resumes execution from the node that called the subgraph (which is also the node that triggered the interrupt). Similarly, the subgraph resumes from the node in which interrupt() was called.
Using Multiple Interrupts
Using multiple interrupts within a single node is helpful for patterns like validating human input. However, if not handled properly, using multiple interrupts in the same node can lead to unexpected behavior.
When a node contains multiple interrupt calls, LangGraph maintains a list of resume values specific to the task executing that node.
Each time execution resumes, it starts from the beginning of the node. For each interrupt encountered, LangGraph checks whether there is a matching value in the task's resume list. Matching is strictly index-based, so the order of interrupt calls within the node is crucial.
To avoid issues, do not dynamically change the structure of the node between executions. This includes adding, removing, or reordering interrupt calls, as such changes can lead to index mismatches.
These issues often stem from unconventional patterns, such as mutating state dynamically via Command(resume=…, update=SOME_STATE_MUTATION) or relying on global variables to change the node's structure dynamically.
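The index-based matching can be sketched in plain Python (replay and the node functions are illustrative, not LangGraph internals): resume values are replayed strictly in interrupt-call order, so reordering the calls between runs silently mis-assigns the recorded answers.

```python
resume_values = ["Alice", 30]  # answers recorded in call order: name, then age

def replay(node):
    """Re-run `node`, feeding recorded answers to its interrupts strictly by index."""
    it = iter(resume_values)
    return node(lambda payload=None: next(it))

def original_node(interrupt):
    name = interrupt("What is your name?")
    age = interrupt("What is your age?")
    return {"name": name, "age": age}

def reordered_node(interrupt):
    # Same interrupts, but reordered between runs -- a structural change
    age = interrupt("What is your age?")
    name = interrupt("What is your name?")
    return {"name": name, "age": age}

print(replay(original_node))   # {'name': 'Alice', 'age': 30}
print(replay(reordered_node))  # {'name': 30, 'age': 'Alice'} -- mismatched!
```

No error is raised in the mismatched case, which is what makes this pitfall easy to miss.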
