1. Introduction to Flowise
Today I will introduce Flowise, a powerful workflow orchestration tool for large language models. It has reached 22.5k stars on GitHub, is regarded as one of the most user-friendly LLM workflow orchestration tools, and is free for commercial use.
Flowise is built on LangChain.js and provides an advanced graphical user interface for developing LLM-based applications, which are also referred to as Gen Apps, LLM Apps, Prompt Chaining, LLM Chains, and so on.
Flowise is designed specifically as a user interface (UI) for LangChain, built with React Flow. Its purpose is to provide a seamless platform for experimenting with and prototyping workflows: users get drag-and-drop components and a built-in chat box. Below is a flowchart of an example application.
The currently available low-code agent-building solutions are summarized below:

| Platform | Links | Description | License / Commercial Terms |
| --- | --- | --- | --- |
| FastGPT | Code: https://github.com/labring/FastGPT · Website: https://fastgpt.in/ | Out-of-the-box data processing and model invocation, plus visual Flow-based workflow orchestration for complex Q&A scenarios. | Open source; may be embedded in a product, but SaaS offerings are not permitted |
| Flowise | Code: https://github.com/FlowiseAI/Flowise · Website: https://flowiseai.com/ | A low-code/no-code tool for building custom large language model (LLM) applications, with a drag-and-drop interface, memory, data loading, caching, moderation, and more. | Apache-2.0; derivative works must retain the original copyright, patent, trademark, and attribution notices |
| LangSmith | Code: https://github.com/langchain-ai/langchainjs · https://github.com/langchain-ai/langchain · Website: https://smith.langchain.com/ · Docs: https://docs.smith.langchain.com/ | A developer platform providing observability, testing, evaluation, and monitoring tools for building and managing complex LLM applications. | Not open source |
| Bisheng | Docs: Analysis Report Generation · Code: https://github.com/dataelement/bisheng · Website: https://bisheng.dataelem.com/ | Combines ideas from Dify and Flowise: visual customization of junior and intermediate agents, knowledge bases, contract review, prospectus analysis, intelligent investment advice, interviewing, and more. | Apache-2.0 |
| Dify | Code: https://github.com/langgenius/dify · Website: https://dify.ai/ · Docs: https://docs.dify.ai/getting-started/readme | An LLMOps platform for visually building AI applications on models such as GPT-4, with rich tool plugins, document/webpage/Notion content as context, and enterprise-grade monitoring, logging, data annotation, and model fine-tuning. | Apache-2.0; commercial use as a product backend is allowed, but not as multi-tenant SaaS |
| Langchain-Chatchat | Code: https://github.com/chatchat-space/Langchain-Chatchat · Docs: https://github.com/chatchat-space/Langchain-Chatchat/wiki/ | A local knowledge-base Q&A system based on LangChain and models such as ChatGLM, using GPT-4 and FAISS for efficient document indexing and memory conversations, and providing accurate, context-aware answers from the indexed data. | Apache-2.0 |
| langflow | Docs: https://docs.langflow.org/ · Code: https://github.com/logspace-ai/langflow | A tool for easily prototyping LangChain workflows: drag-and-drop, a built-in chat interface, editable prompt parameters, chain and agent creation, thought-process tracking, and workflow export. | MIT |
| danswer | Code: https://github.com/danswer-ai/danswer · Website: https://www.danswer.ai/ | An open-source enterprise Q&A tool: ask questions in natural language and receive answers drawn from private resources; connects to Slack, GitHub, Confluence, and more. | MIT |
| dialoqbase | Code: https://github.com/n4ze3m/dialoqbase · Website: https://dialoqbase.n4ze3m.com/ | An open-source application for building custom chatbots on personalized knowledge bases, using advanced language models for accurate, context-aware responses and PostgreSQL for efficient vector search and knowledge-base storage. | MIT |
2. Installation of Flowise
Project address:
https://github.com/FlowiseAI/Flowise
2.1 NPM Deployment
Download and install NodeJS >= 18.15.0
1. Install Flowise
npm install -g flowise
2. Start the workflow
npx flowise start
With username and password (recommended)
npx flowise start --FLOWISE_USERNAME=user --FLOWISE_PASSWORD=1234
3. Open http://localhost:3000
2.2 Docker Deployment
Docker Compose
1. Navigate to the docker folder in the project root directory
2. Create a .env file and specify PORT (refer to .env.example)
3. Start the container
docker-compose up -d
4. Open http://localhost:3000
5. You can stop the container with the following command
docker-compose stop
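As a sketch, the .env file from step 2 might look like the following. PORT is the main value; the username/password variables are optional and are assumed here to use the same names as the CLI flags shown in the NPM section:

```
PORT=3000
# Optional: protect the instance with app-level credentials
FLOWISE_USERNAME=user
FLOWISE_PASSWORD=1234
```

Refer to .env.example in the docker folder for the full list of supported variables.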
Docker Image
1. Build the image locally:
docker build --no-cache -t flowise .
2. Run the image:
docker run -d --name flowise -p 3000:3000 flowise
3. Stop the container:
docker stop flowise
2.3 Development Environment
Flowise has three different modules in a single code repository:
- server: Node.js backend that serves the API logic
- ui: React frontend
- components: LangChain components
2.3.1 Prerequisites
- Install Yarn:
npm i -g yarn
2.3.2 Installation Steps
1. Clone the repository
git clone https://github.com/FlowiseAI/Flowise.git
2. Enter the repository folder
cd Flowise
3. Install dependencies for all modules:
yarn install
4. Build all code:
yarn build
5. Start the application:
yarn start
Now you can access the application at http://localhost:3000.
6. For development builds:
yarn dev
Any code change automatically reloads the application, which is then accessible at http://localhost:8080.
3. Authorization
Flowise has two types of authorization:
- Application-level
- Chatflow-level
3.1 Application-Level
Application-level authorization protects your Flowise instance with a username and password. This prevents anyone from accessing your application when deployed online.
3.2 Chatflow-Level
Suppose you have built a chat flow and only want specific people to access and interact with it. You can achieve this by assigning an API key to that specific chat flow.
3.3 API Key
In the dashboard, navigate to the API key section, and you should see a DefaultKey created. You can also add or delete any key.
3.4 Chatflow
Navigate to Chatflow, and now you can select the API key to protect the Chatflow.
After an API key has been assigned, HTTP calls can only access the Chatflow API when the correct key is supplied in the Authorization header:
"Authorization": "Bearer <your-api-key>"
4. Application Integration
Now that you have tested your chat flow on Flowise’s chat interface, you want to “export” it for use in other applications. Flowise provides three methods to achieve this:
- API
- Embed
- Share Chatbot
4.1 API
You can use the chat flow as an API and connect it to the front-end application.
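As a minimal sketch, here is how a front-end or script might call a chatflow through the Flowise prediction endpoint (POST /api/v1/prediction/&lt;chatflow-id&gt;). The host, chatflow ID, and API key below are placeholders you would replace with your own values; the Bearer header is only needed if the chatflow is key-protected as described in section 3:

```python
# Sketch of calling a Flowise chatflow as an API, assuming a local instance
# on port 3000; CHATFLOW_ID and API_KEY are placeholders from your dashboard.
import json
import urllib.request

API_HOST = "http://localhost:3000"    # assumed local deployment
CHATFLOW_ID = "your-chatflow-id"      # placeholder: copy from the chatflow page
API_KEY = "your-api-key"              # placeholder: from the API key section


def build_request(question: str) -> urllib.request.Request:
    """Build the HTTP request for the Flowise prediction endpoint."""
    url = f"{API_HOST}/api/v1/prediction/{CHATFLOW_ID}"
    payload = json.dumps({"question": question}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Required only when the chatflow is protected by an API key
            "Authorization": f"Bearer {API_KEY}",
        },
    )


def query_chatflow(question: str) -> dict:
    """Send the question to a running Flowise instance and return the JSON reply."""
    with urllib.request.urlopen(build_request(question)) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Usage (requires a running Flowise instance):
#   answer = query_chatflow("Hey, how are you?")
```

Any HTTP client works the same way; only the endpoint path and the Authorization header matter.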
4.2 Embed
You can also embed a chat widget on your website.
Simply copy and paste the provided embed code anywhere in your HTML file.
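The exact snippet, including your chatflow ID, is generated for you in the Embed view of the chatflow; it looks roughly like the following (the chatflow ID and host below are placeholders):

```html
<script type="module">
  import Chatbot from "https://cdn.jsdelivr.net/npm/flowise-embed/dist/web.js";
  Chatbot.init({
    chatflowid: "your-chatflow-id",   // placeholder: copy from the Embed view
    apiHost: "http://localhost:3000", // your Flowise instance
  });
</script>
```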
4.3 Share Chatbot
Flowise can also generate a public share link for a chatflow, so users can interact with the chatbot directly in the browser without any embedding or coding.
5. Building LLM Apps
A variety of building blocks have emerged in the ecosystem for building LLM applications, including prompt engineering, agents, chaining, semantic search, chat models, vector storage, and various tools that can be assigned to agents to perform actions.
These new approaches make it easier to build flexible conversational interfaces. With LLM-based chat flows, conversation design no longer requires exhaustive detail or handling of every edge case, chit-chat turn, or fixed path; those concerns can be shifted onto the resilience of the LLM.
It is important to note that while Flowise itself is free to use, hosting costs and the cost of third-party API calls still apply, and these can rise quickly as the number of users and the intensity of usage grow. Latency and the need to reach geographically distributed systems are further factors to consider.
Flowise is a genuinely intuitive framework for developing LLM applications, and this tutorial should be enough to get you started with installing and using it. If you have any questions, feel free to join the discussion group~