Full-Stack Chatbot Template for Multi-Document Analysis on LlamaIndex

Project Introduction

The easiest way to start using LlamaIndex is with create-llama. This CLI tool lets you quickly scaffold a new LlamaIndex application with everything set up for you.

Quick Run

Run

npx create-llama@latest

to get started, or refer to the options below for more choices. After generating the application, run

npm run dev

to start the development server. You can then access http://localhost:3000 to view your application.

What You Will Get

  • A frontend powered by Next.js. The application is set up as a chat interface that can answer questions about your data (see below).

    • You can style it using HTML and CSS, or choose to use components from shadcn/ui.

  • You can choose from three backends:

    • Next.js: If you choose this option, you will have a complete Next.js application stack that can be deployed to hosts like Vercel with just a few clicks. This uses our TypeScript library LlamaIndex.TS.

    • Express: If you want a more traditional Node.js application, you can generate an Express backend. This also uses LlamaIndex.TS.

    • Python FastAPI: If you choose this option, you will get a backend powered by the llama-index python package, which you can deploy to services like Render or Fly.io.

  • The backend exposes an endpoint that lets you send the chat state (the message history) and receive responses (see the example call after this list).

  • You can choose whether you need a streaming or non-streaming backend (if you are unsure, we recommend using streaming).

  • You can choose whether to use ContextChatEngine or SimpleChatEngine (a sketch of both follows this list):

    • SimpleChatEngine will converse directly with the LLM without using your data.

    • ContextChatEngine will use your data to answer questions (see below).

  • The application defaults to using OpenAI, so you will need an OpenAI API key, or you can customize it to use any of the dozens of LLMs we support.
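As a concrete example of the chat endpoint, here is a hypothetical client call against the Next.js backend (whose route lives at ./app/api/chat/route.ts, as described below). The { messages } body shape is an assumption based on common chat templates; check the generated route for the exact contract:

// Hypothetical call to the generated chat endpoint. Streaming
// backends return a token stream rather than a single JSON body.
const res = await fetch("http://localhost:3000/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", content: "Summarize my documents." }],
  }),
});
console.log(await res.text());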
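The two chat engines can also be used directly from LlamaIndex.TS. Below is a minimal sketch of both; the exact chat() signature has changed across llamaindex versions (recent releases take a params object, as shown), so treat it as illustrative:

import {
  ContextChatEngine,
  OpenAI,
  SimpleChatEngine,
  SimpleDirectoryReader,
  VectorStoreIndex,
} from "llamaindex";

const llm = new OpenAI({ model: "gpt-3.5-turbo" });

// SimpleChatEngine: converses with the LLM directly, no retrieval.
const simpleEngine = new SimpleChatEngine({ llm });
console.log((await simpleEngine.chat({ message: "Hello!" })).toString());

// ContextChatEngine: retrieves relevant chunks from your indexed
// data and injects them into the prompt on every turn.
const documents = await new SimpleDirectoryReader().loadData({
  directoryPath: "./data",
});
const index = await VectorStoreIndex.fromDocuments(documents);
const contextEngine = new ContextChatEngine({
  retriever: index.asRetriever(),
});

const reply = await contextEngine.chat({
  message: "What do my documents say about pricing?",
});
console.log(reply.toString());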

Using Your Data

If you have enabled ContextChatEngine, you can provide your own data, and the application will index it and answer questions. The generated application will have a folder named data:

  • For Next.js backend, it’s located at ./data.

  • For Express or Python backend, it’s located at ./backend/data.

The application will ingest any supported files you place in this directory. The Next.js and Express applications use LlamaIndex.TS, so they can read PDF, plain-text, CSV, Markdown, Word, and HTML files. The Python backend can read even more formats, including video and audio files.

Before the data can be used, you need to index it. If you are using the Next.js or Express application, run:

npm run generate

Then restart your application. Remember, if you add new files to the data folder, you need to rerun generate. If you are using the Python backend, you can trigger data indexing by deleting the ./storage folder and restarting the application.
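For reference, the generate step in the TypeScript backends boils down to something like the sketch below: read everything under the data folder, embed it, and persist the index to disk. The persistDir value here is illustrative; the generated script chooses its own paths:

import {
  SimpleDirectoryReader,
  VectorStoreIndex,
  storageContextFromDefaults,
} from "llamaindex";

// Build a vector index over ./data and persist it so the chat
// backend can load it at request time instead of re-indexing.
const storageContext = await storageContextFromDefaults({
  persistDir: "./cache", // illustrative path
});
const documents = await new SimpleDirectoryReader().loadData({
  directoryPath: "./data",
});
await VectorStoreIndex.fromDocuments(documents, { storageContext });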

Don’t Want a Frontend?

The frontend is optional! If you choose the Python or Express backend, simply delete the frontend folder and you will have an API without any frontend code.

Customizing the LLM

By default, the application uses OpenAI’s gpt-3.5-turbo model. If you want to use GPT-4, edit the following files:

  • In the Next.js backend, edit ./app/api/chat/route.ts and replace gpt-3.5-turbo with gpt-4.

  • In the Express backend, edit ./backend/src/controllers/chat.controller.ts and similarly replace gpt-3.5-turbo with gpt-4.

  • In the Python backend, edit ./backend/app/utils/index.py and again replace gpt-3.5-turbo with gpt-4.

You can also replace OpenAI with any of the dozens of other LLMs we support.
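In code, the change amounts to editing the model string where the LLM is constructed; a minimal sketch, assuming the llamaindex package’s OpenAI class:

import { OpenAI } from "llamaindex";

// Replace "gpt-3.5-turbo" with "gpt-4" where the generated code
// builds its LLM.
const llm = new OpenAI({ model: "gpt-4" });

// Other providers ship with LlamaIndex.TS too; for example, an
// Anthropic model can be dropped in instead (model name illustrative):
// import { Anthropic } from "llamaindex";
// const llm = new Anthropic({ model: "claude-2" });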

Examples

The simplest way to get started is to run create-llama in interactive mode:

npx create-llama@latest
# or
npm create llama@latest
# or
yarn create llama
# or
pnpm create llama@latest

The system will prompt you for the project name and other configuration options, as follows:

>> npm create llama@latest
Need to install the following packages:
  create-llama@latest
Ok to proceed? (y) y
✔ What is your project named? … my-app
✔ Which template would you like to use? › Chat with streaming
✔ Which framework would you like to use? › NextJS
✔ Which UI would you like to use? › Just HTML
✔ Which chat engine would you like to use? › ContextChatEngine
✔ Please provide your OpenAI API key (leave blank to skip): …
✔ Would you like to use ESLint? … No / Yes
Creating a new LlamaIndex app in /home/my-app.

Non-Interactive Run

You can also pass command-line arguments to set up a new project non-interactively. See create-llama --help:

create-llama <project-directory> [options]

Options:
  -V, --version  output the version number
  --use-npm      Explicitly tell the CLI to bootstrap the app using npm
  --use-pnpm     Explicitly tell the CLI to bootstrap the app using pnpm
  --use-yarn     Explicitly tell the CLI to bootstrap the app using Yarn

Project Links

https://github.com/run-llama/LlamaIndexTS/tree/main/packages/create-llama

Multi-document template: github.com/jerryjliu/create_llama_projects/blob/main/multi-document-agent/README.md
