Easily Build LLM Applications with Flowise Drag-and-Drop Components

Build large language model (LLM) applications using a drag-and-drop approach. Flowise lets you customize LLM workflows through a visual drag-and-drop interface, making it easy to create LLM applications, and it supports one-click service startup with Docker.

How to Use

API

You can use a chat flow as an API and connect it to front-end applications. You can also use the overrideConfig property to flexibly override input configurations. The API can be called from Postman or any other HTTP client.
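As a rough equivalent of a Postman request, here is a minimal sketch using curl, assuming Flowise's prediction endpoint on the default port. chatflow-id is a placeholder copied from the Flowise UI, and the temperature key under overrideConfig is an assumption; which keys are accepted depends on the nodes in your flow.

# Sketch: call a chat flow via the prediction endpoint
# (chatflow-id is a placeholder from the Flowise UI).
curl -X POST http://localhost:3000/api/v1/prediction/chatflow-id \
  -H "Content-Type: application/json" \
  -d '{"question": "What can Flowise do?", "overrideConfig": {"temperature": 0.7}}'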

Document Loader Workflow

Note: Users are responsible for ensuring that uploaded files match the file types the document loader expects. For example, if a text file loader is used, only files with the .txt extension should be uploaded. The same API can be called with Postman using form-data.
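A hedged curl sketch of the form-data variant; the files field name is an assumption based on common Flowise usage, and the file path and chatflow-id are placeholders.

# Sketch: send a document to a chat flow that begins with a document
# loader, using multipart form-data instead of a JSON body.
curl -X POST http://localhost:3000/api/v1/prediction/chatflow-id \
  -F "files=@/path/to/notes.txt" \
  -F "question=Summarize the uploaded file"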

Streaming

When the final node is a Chain or an OpenAI Function Agent, Flowise also supports streaming responses back to front-end applications.
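A minimal sketch of consuming such a stream with curl. The streaming request flag is an assumption here, and the exact transport (server-sent events vs. sockets) has varied across Flowise versions.

# Sketch: ask for a streamed response; -N disables curl's output
# buffering so tokens print as they arrive. The "streaming" flag is
# an assumption and may differ by Flowise version.
curl -N -X POST http://localhost:3000/api/v1/prediction/chatflow-id \
  -H "Content-Type: application/json" \
  -d '{"question": "Tell me a story", "streaming": true}'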

Embedding

You can also embed the chat widget into your website; simply copy and paste the provided embed code anywhere within the <body> tag of your HTML file. To modify the complete source code of the embedded chat widget, follow these steps:

  • Fork the Flowise Chat Embed repository
  • Make any code changes you need
  • Run yarn build
  • Push the changes to your forked repository
  • Use it as an embedded chat as follows:

Replace username with your GitHub username and forked-repo with the name of your forked repository:

<script type="module">
      import Chatbot from "https://cdn.jsdelivr.net/gh/username/forked-repo/dist/web.js"
      Chatbot.init({
          chatflowid: "chatflow-id",
          apiHost: "http://localhost:3000",
      })
</script>

Intelligent Models

Local AI Setup

LocalAI is a drop-in replacement REST API compatible with OpenAI API specifications for local inferencing. It allows you to run LLMs locally or on-premises using consumer-grade hardware (though it is not limited to that), and it supports multiple model families compatible with the ggml format. To use ChatLocalAI in Flowise, follow these steps:

git clone https://github.com/go-skynet/LocalAI
cd LocalAI
# copy your models to models/
cp your-model.bin models/

For example, download one of the models from gpt4all.io:

# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

In the /models folder, you should be able to see the downloaded model.
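With the model in place, the LocalAI server itself has to be running before Flowise can reach it. A sketch assuming the Docker Compose setup that ships with the LocalAI repository, which exposes the API on port 8080 by default:

# Sketch: start LocalAI from the cloned repository using its bundled
# Docker Compose setup; the API then listens on http://localhost:8080.
docker compose up -d --pull always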

Smooth Setup

Drag and drop the new ChatLocalAI component onto the canvas.
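Before connecting the component, you can sanity-check that LocalAI is serving the model; a sketch assuming LocalAI's OpenAI-compatible routes on the default port:

# Sketch: list the models LocalAI has loaded. The file name shown here
# is what goes into the ChatLocalAI model name field, and the base path
# field should point at http://localhost:8080/v1.
curl http://localhost:8080/v1/models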

Quick Start

  • Install Flowise
npm install -g flowise
  • Start Flowise
npx flowise start
  • Open http://localhost:3000

Portal

Open source license: MIT license

Open source address: https://github.com/FlowiseAI/Flowise

Project collection: https://github.com/RepositorySheet

-END-
