Mastering LangGraph: Multi-Agent System

An agent is a system that uses an LLM to determine the control flow of an application. As these systems grow, they tend to become harder to manage and scale. For example, we may run into the following issues:
  • There are too many tools for one agent, making it overly complex to correctly decide which tool to call next.
  • A single agent cannot cover all the areas of expertise the system needs (e.g., a planner, a researcher, a math expert, and so on).
To address these issues, we can break the application down into several smaller, independent agents and combine them into a multi-agent system. These independent agents can be as simple as a prompt plus an LLM call, or as complex as a ReAct agent (or even more!).
The main benefits of using a multi-agent system are:
  • Modularity: Independent agents make it easier to develop, test, and maintain the agent system.
  • Specialization: Expert agents focused on specific domains can be created, which helps improve the overall system performance.
  • Control: Clear control over how agents communicate can be established (rather than relying on function calls).
The architectures that can be implemented are shown in the figure below.
[Figure: ways of connecting agents in a multi-agent system]

In a multi-agent system, there are various ways to connect agents:

  • Network: Each agent can communicate with other agents, and any agent can decide which agent to call next.
  • Supervisor: Each agent communicates with a single supervisor agent, which decides which agent should be called next (a minimal sketch follows this list).
  • Supervisor (Tool Call): This is a specific case of the supervisor architecture. A single agent can be represented as a tool. In this case, the supervisor agent uses tool calls to determine which agent tools to call and what parameters to pass to those agents.
  • Hierarchy: We can define a multi-agent system with multiple supervisors. This generalizes the supervisor architecture, allowing for more complex control flows.
  • Custom Multi-Agent Workflow: Each agent communicates only with a subset of agents. Certain parts of the process are deterministic, and only some agents can decide which other agents to call next.
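To make the supervisor variant concrete, here is a minimal sketch. The node names (supervisor, researcher, writer) and the keyword-based routing are hypothetical stand-ins for a real LLM-driven decision; this is not the example developed below.

from typing_extensions import Literal
from langgraph.graph import MessagesState, StateGraph, START, END
from langgraph.types import Command

def supervisor(state: MessagesState) -> Command[Literal["researcher", "writer", END]]:
    # A real supervisor would ask an LLM which agent to run next;
    # keyword routing here only illustrates the control flow
    last = state["messages"][-1].content if state["messages"] else ""
    if "research" in last:
        return Command(goto="researcher")
    if "draft" in last:
        return Command(goto="writer")
    return Command(goto=END)

def researcher(state: MessagesState) -> Command[Literal["supervisor"]]:
    # Worker agents always report back to the supervisor
    return Command(goto="supervisor", update={"messages": [("ai", "notes for the draft")]})

def writer(state: MessagesState) -> Command[Literal["supervisor"]]:
    return Command(goto="supervisor", update={"messages": [("ai", "finished text")]})

sup_builder = StateGraph(MessagesState)
sup_builder.add_node("supervisor", supervisor)
sup_builder.add_node("researcher", researcher)
sup_builder.add_node("writer", writer)
sup_builder.add_edge(START, "supervisor")
sup_graph = sup_builder.compile()
# sup_graph.invoke({"messages": [("user", "please research topic X")]})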
Next, let’s look at the basics of handing off between agents.
Our example is as follows: we define two nodes that simulate agents, an addition expert and a multiplication expert. User input goes first to the addition expert; then, based on the LLM’s decision, control may be handed off to the multiplication expert to finish answering the user’s question. So easy!
from typing_extensions import Literal
from langchain_core.messages import SystemMessage, ToolMessage
from langchain_core.tools import tool
from langgraph.graph import MessagesState, StateGraph, START, END
from langgraph.types import Command
from langchain_ollama import ChatOllama
import base_conf  # local config: Ollama base_url and model_name

model = ChatOllama(base_url=base_conf.base_url, model=base_conf.model_name, temperature=0)
# The following two tools don't return anything:
# we use them purely as a signal that the LLM wants to hand off to the other agent
@tool
def transfer_to_multiplication_expert():
    """Request help from the multiplication expert."""
    return

@tool
def transfer_to_addition_expert():
    """Request help from the addition expert."""
    return

# The following two nodes represent the addition expert and multiplication expert
def addition_expert(state: MessagesState) -> Command[Literal["multiplication_expert", END]]:
    system_prompt = (
        "You are an addition expert; you can ask the multiplication expert for help with multiplication. "
        "Always complete your part of the calculation before handing off."
    )
    messages = [SystemMessage(content=system_prompt)] + state["messages"]
    ai_msg = model.bind_tools([transfer_to_multiplication_expert, transfer_to_addition_expert]).invoke(messages)
    # The addition expert runs first; if ai_msg contains a tool call,
    # the LLM wants to hand off to the other agent
    if len(ai_msg.tool_calls) > 0:
        # Why does the official tutorial use -1? ai_msg.tool_calls is a list, and -1 picks the last entry.
        # Can the list hold several entries? Yes: if the prompt is changed and several tools are bound,
        # the model may emit multiple tool calls in one message, so keep this in mind in real development.
        tool_call_id = ai_msg.tool_calls[-1]["id"]
        # Inserting a ToolMessage here matters: LLM providers expect every tool call
        # in an AI message to be answered by a matching tool result message
        tool_msg = ToolMessage(content="Transfer successful", tool_call_id=tool_call_id)
        return Command(
            # By the time multiplication_expert runs, the addition part has already
            # been computed, so the remaining task in the conversation is 8 * 12
            goto="multiplication_expert",
            update={"messages": [ai_msg, tool_msg]},
        )
    # If the expert already has the answer, return it directly to the user
    return Command(goto=END, update={"messages": [ai_msg]})
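# A possible extension (sketch): if the model ever emits several tool calls in one
# message, every tool_call_id needs its own ToolMessage before the next model turn:
#
#   tool_msgs = [
#       ToolMessage(content="Transfer successful", tool_call_id=tc["id"])
#       for tc in ai_msg.tool_calls
#   ]
#   return Command(goto="multiplication_expert", update={"messages": [ai_msg, *tool_msgs]})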

def multiplication_expert(state: MessagesState) -> Command[Literal["addition_expert", END]]:
    system_prompt = (
        "You are a multiplication expert; you can ask the addition expert for help with addition. "
        "Always complete your part of the calculation before handing off."
    )
    messages = [SystemMessage(content=system_prompt)] + state["messages"]
    # By the time we get here, the remaining task is 8 * 12; the model can answer it
    # directly, so ai_msg will normally contain no tool calls
    ai_msg = model.bind_tools([transfer_to_addition_expert]).invoke(messages)
    if len(ai_msg.tool_calls) > 0:
        tool_call_id = ai_msg.tool_calls[-1]["id"]
        tool_msg = ToolMessage(content="Transfer successful", tool_call_id=tool_call_id)
        return Command(goto="addition_expert", update={"messages": [ai_msg, tool_msg]})
    # Otherwise, return the result directly to the user
    return Command(goto=END, update={"messages": [ai_msg]})

builder = StateGraph(MessagesState)
builder.add_node("addition_expert", addition_expert)
builder.add_node("multiplication_expert", multiplication_expert)
# We always start from the addition expert
builder.add_edge(START, "addition_expert")
graph = builder.compile()
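# Optional sketch: LangGraph's compiled graphs expose a drawable representation,
# so the topology can be inspected, e.g. as Mermaid text:
# print(graph.get_graph().draw_mermaid())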
from langchain_core.messages import convert_to_messages

def pretty_print_messages(update):
    if isinstance(update, tuple):
        ns, update = update
        # Skip parent graph updates in print output
        if len(ns) == 0:
            return
        graph_id = ns[-1].split(":")[0]
        print(f"Update from subgraph {graph_id}:")
        print("\n")
    for node_name, node_update in update.items():
        print(f"Update from node {node_name}:")
        print("\n")
        for m in convert_to_messages(node_update["messages"]):
            m.pretty_print()
        print("\n")

# Let's run the graph. With the default stream_mode="updates", each chunk is a
# {node_name: state_update} dict, which is what pretty_print_messages expects.
for chunk in graph.stream({"messages": [("user", "what's (3 + 5) * 12")]}):
    pretty_print_messages(chunk)
Update from node addition_expert:

================================== Ai Message ==================================
Tool Calls:
  transfer_to_multiplication_expert (3e43e7c6-5ca4-433e-9a0e-e3a6f7913109)
 Call ID: 3e43e7c6-5ca4-433e-9a0e-e3a6f7913109
  Args:
    expression: 8 * 12
================================= Tool Message =================================
Transfer successful

Update from node multiplication_expert:

================================== Ai Message ==================================
The result of (3 + 5) * 12 is 96.
This is the basic idea of agent handoffs: we use Command to both update the state and route to the next node.
There are many details in the code; the key parts are commented. Readers are encouraged to step through it themselves and observe how the values flow at each point.
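To recap how Command works in isolation, here is a minimal, self-contained sketch; the two-node graph and the names node_a/node_b are hypothetical, chosen only to show that a single return value both routes (goto) and updates state (update):

from typing_extensions import Literal
from langgraph.graph import MessagesState, StateGraph, START, END
from langgraph.types import Command

def node_a(state: MessagesState) -> Command[Literal["node_b"]]:
    # One return value does two jobs: route to node_b and append a message
    return Command(goto="node_b", update={"messages": [("ai", "hello from a")]})

def node_b(state: MessagesState) -> Command[Literal[END]]:
    return Command(goto=END, update={"messages": [("ai", "hello from b")]})

demo = StateGraph(MessagesState)
demo.add_node("node_a", node_a)
demo.add_node("node_b", node_b)
demo.add_edge(START, "node_a")
print(demo.compile().invoke({"messages": []})["messages"])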
