Using LlamaIndex Agent to Call Multiple Tool Functions

Overview

This article introduces how to use LlamaIndex’s Agent to call multiple custom Agent tool functions. As with the previous articles in this series, this article does not use the OpenAI API and relies entirely on a local large model to complete the entire functionality.

The goal of this article is simple: to save the large model's response into a PDF file and also to save it to a database (the database save is simulated; the tool function only prints a message rather than writing to a real database).

Running Environment

My environment is a virtual machine with the following configuration:

  • CPU Type: x86

  • Cores: 16 cores

  • Memory: 32 GB

Implementation Steps

(1) Define two Agent functions: one to save data to a PDF file, and another to save data to a database;

(2) Wrap the functions using FunctionTool. By default, the tool's name is taken from the wrapped function's name, and the agent selects tools by that name, so make sure the names match what you expect.
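By default, a tool wrapper of this kind derives the tool's name from the wrapped function's `__name__` and its description from the docstring. The hypothetical `MiniTool` class below is a minimal illustration of that idea, not the actual LlamaIndex `FunctionTool` implementation:

```python
# Hypothetical mini-wrapper illustrating how a tool can derive its name
# and description from the wrapped function (as FunctionTool does by
# default). This is NOT the LlamaIndex source code.
class MiniTool:
    def __init__(self, fn, name=None, description=None):
        self.fn = fn
        # Default the tool name to the function's own name
        self.name = name or fn.__name__
        # Default the description to the function's docstring
        self.description = description or (fn.__doc__ or "")

    def __call__(self, *args, **kwargs):
        return self.fn(*args, **kwargs)

def save_response_to_pdf(response: str) -> str:
    """Save the response to a PDF file."""
    return f"saved: {response}"

tool = MiniTool(save_response_to_pdf)
print(tool.name)      # save_response_to_pdf
print(tool("hello"))  # saved: hello
```

This is why the tool names seen in the agent's trace (e.g. `Action: save_response_to_pdf`) line up with the Python function names.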

(3) Create an LLM object: here I use Ollama to build an LLM object backed by the local large model Llama3.2. Different LLM models can be used and may produce different results; set the model parameter to the name of the model you want.

(4) Create a ReActAgent object from the tool list and the LLM.

(5) Use agent.chat to interact with the agent, letting the agent and the LLM complete the task.

Note: The content of the conversation is very important; in other words, the phrasing of the prompt is crucial. If the prompt is poorly written, the large model may fail to plan or complete the task. Prompt engineering is therefore fundamental and worth mastering.
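To make the flow in the steps above concrete: in a ReAct loop, the LLM emits an Action (a tool name) plus an Action Input, the framework dispatches to the matching tool, and the tool's return value becomes the Observation fed back to the model. The toy dispatcher below is a hypothetical sketch of that dispatch step, not LlamaIndex internals:

```python
# Toy ReAct-style dispatch: map tool names to functions and return the
# tool's result as the "Observation". Hypothetical sketch, not the
# LlamaIndex implementation.
def save_response_to_pdf(response: str) -> str:
    """Pretend to save the response to a PDF file."""
    return "Response saved to PDF as agent_response.pdf."

def save_response_to_database(response: str) -> str:
    """Pretend to save the response to a database."""
    return "Response saved to database: testdb."

TOOLS = {fn.__name__: fn for fn in (save_response_to_pdf,
                                    save_response_to_database)}

def dispatch(action: str, action_input: dict) -> str:
    """Run the named tool and return its output as the observation."""
    tool = TOOLS[action]
    return tool(**action_input)

obs = dispatch("save_response_to_pdf", {"response": "I am an AI."})
print(obs)  # Response saved to PDF as agent_response.pdf.
```

The agent's verbose trace later in the article (Thought / Action / Action Input / Observation) follows exactly this shape.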

Implementation Code

from llama_index.llms.ollama import Ollama
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import FunctionTool
from fpdf import FPDF  # Using fpdf library to generate PDF files

# Define a tool function to write response content to a PDF

def save_response_to_pdf(response: str, **kwargs) -> str:
    """Save the response to a PDF file."""
    # Initialize PDF
    pdf = FPDF()
    pdf.add_page()
    pdf.set_font("Arial", size=12)
    # Write response content to PDF
    pdf.cell(200, 10, txt="Agent Response:", ln=True, align='L')
    pdf.multi_cell(0, 10, txt=response)
    # Save PDF file
    pdf_filename = "agent_response.pdf"
    pdf.output(pdf_filename)
    
    print(f"Response saved to {pdf_filename}.")
    return f"Response saved to PDF as {pdf_filename}."

# Save response to the database

def save_response_to_database(response: str, **kwargs) -> str:
    """Save the response to database."""
    db="testdb"
    print(f"Response saved to {db}.")
    return f"Response saved to database: {db}."

# Wrap tool functions as FunctionTool
# Note: use distinct variable names for the tools so they do not shadow
# the original functions
save_response_to_pdf_tool = FunctionTool.from_defaults(fn=save_response_to_pdf)
save_response_to_database_tool = FunctionTool.from_defaults(fn=save_response_to_database)

# Set LLM and Agent
llm = Ollama(model="llama3.2", request_timeout=360)
# Create an agent
agent = ReActAgent.from_tools([save_response_to_pdf_tool, save_response_to_database_tool], llm=llm, verbose=True)

# Tell the Agent what to do, note: this prompt is very important
response = agent.chat("Please answer the question: 'who are you?'. And then save the answer to pdf file and save the answer to database!")
print(str(response))

Running Result


> Running step 071bc01e-2c1c-4cc1-9dfc-db2c5ad45ec7. Step input: Please answer the question: 'who are you?'. And then save the answer to pdf file and save the answer to database!
Thought: The current language of the user is: English. I need to use a tool to help me answer the question.
Action: save_response_to_pdf
Action Input: {'response': 'I am an AI designed to simulate conversations and answer questions to the best of my knowledge.'}
Response saved to agent_response.pdf.
Observation: Response saved to PDF as agent_response.pdf.

> Running step a3f702b2-38e4-4803-a21a-81ff28a0bd04. Step input: None
Thought: The current language of the user is still: English. I need to save the response to database.
Action: save_response_to_database
Action Input: {'response': 'I am an AI designed to simulate conversations and answer questions to the best of my knowledge.', 'database': 'my_database'}
Response saved to testdb.
Observation: Response saved to database: testdb.

> Running step 545e0098-cf18-47d4-ba6a-748bdaca3b58. Step input: None
Thought: I can answer without using any more tools. I'll use the user's language to answer
Answer: I am an AI designed to simulate conversations and answer questions to the best of my knowledge.
I am an AI designed to simulate conversations and answer questions to the best of my knowledge.

From the above output, we can see that the large model correctly understood our intent, planned well, and completed every step I wanted.

Summary

Using the Agent to call multiple custom functions simply means putting the custom functions into the agent's tool list. The most important part is writing good prompts so that the large model correctly understands the tasks we need it to complete.
