Generative AI Application Creation Platform

01#

Introduction

The Qizhi platform is designed for building generative AI native applications. It significantly simplifies complex technical tasks and lets us build many types of applications through visual orchestration, making creativity and innovation faster, better, and easier to realize.

When we were kids, we played with building blocks, stacking pieces of various colors and shapes into castles, planes, or even entire cities. Now imagine a digital world of building blocks: with these “blocks” we can build intelligent programs that read, understand, and write text, and even converse with us. This is what the Qizhi platform aims to do and has already achieved; it is like a massive set of building blocks waiting for creators of AI applications to explore and build with.

02#

Overview of Qizhi Platform

The full name of the Qizhi platform is the Qizhi Large Model Open Platform. The Qizhi platform integrates a large number of large models and various capabilities related to large models, allowing for rapid construction of large model applications and supporting API integration. Through the Qizhi platform, R&D teams can quickly and flexibly apply various large model capabilities to business scenarios without spending a lot of development time on complex integration processes, significantly improving R&D efficiency.
The Qizhi platform is built on infrastructure such as data centers and scheduling platforms, providing basic model capabilities for text, images, video, audio, multi-modal input, embeddings, and more. It also offers one-click deployment for LLMs and embedding models. With Qizhi, users can quickly access global large models, adapt them to different application scenarios, experiment freely, and switch seamlessly, achieving rapid and flexible binding and decoupling between the business layer and the model layer.
The Qizhi platform provides developers with comprehensive application templates, an orchestration framework, a toolchain, and more, supporting everything from building ordinary AI applications to complex AI workflow orchestration and multi-agent collaboration. It also supports RAG retrieval, model management, application management, cost control, automatic generation of API documentation, and more, enabling one-stop construction and operation of generative AI native applications. Based on the Qizhi platform, business teams can quickly build generative AI applications driven by large language models, easily turning ideas into reality, and can scale seamlessly with the usage of their AI applications, effectively supporting business growth.
The Qizhi platform provides a professional workstation for visual orchestration of generative AI applications (All in One Place). It covers the core tech stack required to build generative AI native applications, allowing developers to focus on creating the core value of the applications.
The Qizhi platform provides a series of tools to help us better build AI native applications. This mainly includes six categories of tools, some of which will be introduced in detail later.

03#

Prompt Engineering

One of the core values of the Qizhi platform is that it provides a standard model interface, allowing us to freely switch between different models, including text, image, video, audio, and multi-modal models.

When it comes to models, ChatGPT is a convenient reference point. A simple model can only generate text, but as large models continue to evolve, their cross-modal capabilities keep growing, including:

1. Standard LLM: Receives a text string as input and returns a text string as output.

2. Chat Model: Takes a list of chat messages as input and returns a chat message.

3. Visual Model: Takes text plus images or videos as input and returns a text string as output.

4. Speech Model: Takes speech as input and returns a text string as output.
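These input/output shapes can be sketched as a minimal unified interface in Python. The class and method names here are illustrative only, not the Qizhi SDK's actual API:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ChatMessage:
    role: str      # "system", "user", or "assistant"
    content: str

class StandardLLM:
    """Text string in, text string out."""
    def complete(self, prompt: str) -> str:
        # A real implementation would call a model endpoint here.
        return f"completion for: {prompt}"

class ChatModel:
    """List of chat messages in, one chat message out."""
    def chat(self, messages: List[ChatMessage]) -> ChatMessage:
        # Echo stub standing in for a real chat model call.
        return ChatMessage(role="assistant",
                           content=f"reply to: {messages[-1].content}")
```

A unified interface like this is what lets the business layer switch models without rewriting call sites.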

The Qizhi platform provides a unified Prompt engineering system, supporting orchestration, debugging, optimization, and version management of Prompts, making it easier for us to construct the desired Prompt templates. We can save Prompt templates for reuse. Based on the platform, we can quickly implement the construction and tuning of Prompt engineering, with common functional processes as shown in the diagram below.

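As a rough illustration of the template idea, a reusable prompt template with named variables can be as simple as the sketch below. The template text and variable names are made up for this example, not taken from the platform:

```python
from string import Template

# A saved, reusable prompt template; $text and $max_words are its variables.
SUMMARY_TEMPLATE = Template(
    "You are a helpful assistant.\n"
    "Summarize the following text in $max_words words or fewer:\n\n$text"
)

def render_prompt(text: str, max_words: int = 50) -> str:
    """Fill in the template's variables to produce a concrete prompt."""
    return SUMMARY_TEMPLATE.substitute(text=text, max_words=max_words)
```

Saving the template once and rendering it with different variable values is what makes reuse and version management of prompts practical.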

04#

RAG Pipeline

To address the limitations of purely parametric models, language models can adopt a semi-parametric approach, combining non-parametric corpus databases with parametric models. This approach is known as RAG (Retrieval-Augmented Generation).

1. Overall Process of RAG

The overall business chain of RAG mainly consists of five steps: knowledge production and processing, query rewriting, data retrieval, post-processing, and large model generation.


Building on Naive RAG, the Qizhi platform provides various general RAG capabilities combined with Advanced RAG and RAG-Fusion solutions:

Knowledge Slicing: Fixed-length character slicing, overlapping (redundant) slicing, semantic sentence slicing, regex slicing, etc.

Query Generation/Rewriting: Uses LLM models to rewrite the user’s initial query, generating multiple queries.

Vector Search: Conducts a vector-based search for each generated query, forming multi-route retrieval.

Reciprocal Rank Fusion: Applies the Reciprocal Rank Fusion (RRF) algorithm to reorder documents based on their ranks across the multiple queries.

Re-ranking: Uses various re-ranking algorithms to reorder the results.

Output Generation: Generates final output based on the top K search results after reordering.
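The rank-fusion step above can be sketched in a few lines. This follows the standard RRF formula (score = Σ 1/(k + rank) over the result lists), not any platform-specific variant:

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document ids into one list.

    `rankings` holds one ranked result list per query; k=60 is the
    customary smoothing constant for RRF.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)
```

Documents that rank well across several query rewrites rise to the top, which is exactly why RAG-Fusion pairs query generation with RRF.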

2. RAG Platform Plan

To facilitate user usage, the Qizhi platform provides a visual RAG Pipeline construction, allowing for quick setup of RAG applications. Below is the architecture diagram of Qizhi RAG.


The RAG platform provides knowledge base management and RAG application orchestration and debugging, supports one-click deployment of popular open-source embedding models, and supports Rerank reordering, multi-version management, data annotation, and more.

It also offers a series of functional modules and API services, including chunking services, embedding, vector services, knowledge base services, etc. For businesses with in-depth customization needs, more tailored RAG applications can be built through flexible combinations via APIs.

On the platform, users can quickly build a RAG application through visual orchestration, and then integrate the application into engineering projects by calling the application APIs:

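As a hedged sketch of what calling such an application's API might look like from a Python project: the endpoint path, header names, and payload fields below are placeholders, and the real values come from the API documentation Qizhi generates for each application.

```python
import json
import urllib.request

def build_rag_request(base_url: str, app_id: str, token: str,
                      question: str) -> urllib.request.Request:
    """Build an HTTP request for a (hypothetical) RAG application endpoint."""
    payload = json.dumps({"query": question}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/apps/{app_id}/chat",
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )

# Sending the request is then one call:
# with urllib.request.urlopen(build_rag_request(...)) as resp:
#     answer = json.loads(resp.read())
```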

05#

Workflow

By breaking down complex tasks into smaller steps (nodes), workflows can reduce system complexity, decrease reliance on prompt engineering and model reasoning capabilities, and enhance the interpretability, stability, and fault tolerance of LLM applications for complex tasks.
1. Introduction to Qizhi Workflows
Qizhi workflows are divided into two types:
  • Conversation Workflow: Designed for conversational scenarios that require multi-step logic when building responsive conversational applications.
  • Text Workflow: Designed for automation and batch processing scenarios, suitable for high-quality translation, data analysis, content generation, and more.
Qizhi workflows consist of a user interface and workflow execution service, as shown in the diagram below:
The user interface exposes various executors of the workflow execution service in the form of nodes, including LLM, knowledge retrieval, question classification, code, HTTP, plugins, conditions, etc.:
LLM Node: Calls a large language model to answer questions or process natural language;
Knowledge Retrieval: Retrieves text content related to the user’s question from the knowledge base, which can serve as context for downstream LLM nodes;
Question Classification: By defining classification descriptions, the LLM can select the matching classification based on user input;
Code: Executes Python/Groovy code to perform custom logic such as data transformation within the workflow;
HTTP: Allows sending server requests via HTTP protocol to obtain additional information required for business;
Plugins: Allows calling various custom-created plugins on the Qizhi platform within the workflow;
Businesses can visually drag and orchestrate various nodes to construct various workflow applications.
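As a toy illustration of the node idea (this is not the Qizhi engine; the node names and context fields are invented), each node can be modeled as a function over a shared context, and a workflow as an ordered list of nodes:

```python
def start_node(ctx):
    # Start node: receive the user input and assign it to a variable.
    ctx["question"] = ctx["user_input"].strip()
    return ctx

def classify_node(ctx):
    # Stand-in for the LLM-backed question-classification node.
    ctx["category"] = "monitoring" if "alert" in ctx["question"] else "general"
    return ctx

def llm_node(ctx):
    # Stand-in for a large-model call that drafts the reply.
    ctx["answer"] = f"[{ctx['category']}] reply to: {ctx['question']}"
    return ctx

def run_workflow(nodes, user_input):
    """Run each node in order, threading the shared context through."""
    ctx = {"user_input": user_input}
    for node in nodes:
        ctx = node(ctx)
    return ctx

result = run_workflow([start_node, classify_node, llm_node], " alert firing ")
```

Because each node has a single responsibility and a uniform signature, nodes can be added, replaced, or reordered without touching the rest of the flow, which is the maintainability argument made below.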
2. Workflows Bring the Following Convenience to Businesses
Modular Management of Complex Tasks: By breaking complex tasks into multiple nodes, the task processing flow becomes clear, with each node assuming a single responsibility. This modular design reduces system complexity for complex tasks, enhancing maintainability and scalability of the process.
Flexible Adjustment and Expansion of Business Logic: Workflows support conditional judgment and multi-branch parallel execution, allowing business logic to be flexibly orchestrated and adjusted to meet changing business needs. Meanwhile, the workflow structure facilitates adding or replacing nodes, greatly enhancing the system’s adaptability and flexibility.
Enhanced Interpretability and Fault Tolerance of the System: Workflows make task processing paths visual, allowing monitoring and tracking of execution results of each node, improving system interpretability. Additionally, workflows can effectively handle exceptions through conditional branching, enhancing the system’s fault tolerance.
Below is an example of an operations robot workflow:
The operations robot helps businesses improve the speed of problem troubleshooting and localization through a Q&A format. In this workflow, the question classification node categorizes the issues, and the selector node directs different types of problems to different branches for processing, ultimately referencing the results of each branch through the end node for comprehensive output. The configuration of each segment is as follows:

  • Start (Start Node): Receives the user's question, assigns the user input to variables, and passes it to subsequent nodes.

  • Identify User Intent (Question Classification Node): Fill in classification information in this node; it classifies the user's question and forwards it to different branches based on the classification. For example, monitoring-related issues are directed to the monitoring branch for processing.

  • Process User Questions (Knowledge Base Node, Large Model Node): The knowledge base node retrieves knowledge based on the user's question; the large model node then summarizes the response.

  • End (End Node): Outputs the intelligent agent's reply. The response content can directly reference data from preceding nodes through output variables.

06#

Plugins

Plugins extend the capabilities of large models, typically by calling external APIs. For example, the process of calling a plugin in OpenAI is shown in the diagram below:


Based on OpenAI's plugin calling process and plugin description specification, the Qizhi platform has designed and implemented a custom plugin development process. Plugins are recognized and called via Function call, ReAct, and similar mechanisms: the parameters the plugin requires are extracted from the user input, the plugin API is called, and the plugin's response is then summarized by the large model. The diagram below illustrates the plugin process based on Function call:

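A toy rendering of this Function-call loop, with the model mocked out. The message and function-call formats follow the OpenAI convention the text refers to; everything else (the weather plugin, the naive argument extraction) is invented for illustration:

```python
import json

def mock_model(messages, functions):
    # A real model decides whether and how to call a function; this mock
    # always calls the first one, naively taking the last word as the city.
    user_text = messages[-1]["content"]
    return {"function_call": {
        "name": functions[0]["name"],
        "arguments": json.dumps({"city": user_text.split()[-1]}),
    }}

def get_weather(city):
    # Stand-in for the plugin's external API.
    return {"city": city, "forecast": "sunny"}

def run_plugin_turn(user_input):
    functions = [{"name": "get_weather",
                  "parameters": {"city": {"type": "string"}}}]
    registry = {"get_weather": get_weather}
    messages = [{"role": "user", "content": user_input}]
    # Step 1: the model emits a function call with extracted parameters.
    call = mock_model(messages, functions)["function_call"]
    args = json.loads(call["arguments"])
    # Step 2: call the plugin API.
    result = registry[call["name"]](**args)
    # Step 3: a real system would have the LLM summarize; we format directly.
    return f"The weather in {result['city']} is {result['forecast']}."
```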

For users, the process of creating custom plugins on the Qizhi platform is relatively simple. After writing the plugin’s API description in the ibrain.yaml file, users can upload it to the Qizhi platform, and upon verification, the plugin can be created. Once created, it can be used across various applications to extend the capabilities of large models. The process of creating plugins is shown in the diagram below:


07#

Agent

The Qizhi platform supports creating agents either from basic templates or through free-form creation.

According to the survey on LLM-based agents by the Natural Language Processing team at Fudan University, “The Rise and Potential of Large Language Model Based Agents: A Survey,” the basic architecture is “Agent = LLM + Perception + Planning + Memory + Tool Usage,” where the large model acts as the agent's “brain,” providing reasoning and planning capabilities. The overall architecture is shown in the diagram below:


The agent framework consists of several components, namely Perception, Planning, Memory, and Action, each briefly introduced below:

  • Perception: Receives and processes multi-modal information from the external environment.

  • Planning: Mainly includes sub-goal decomposition, reflection, and improvement.

  • Memory: Includes short-term memory and long-term memory.

  • Action: Uses text or tools (calls external APIs to obtain additional information missing in model weights, including current information, code execution capabilities, access to proprietary information sources, etc.) to perform actions that impact the external environment.

Currently, the Qizhi platform supports text-based input and output, utilizing various tools through large model calls.
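A bare-bones rendering of that architecture in code, where every name below is a stand-in (the real platform wires these components to an LLM rather than the stubs shown here):

```python
class Agent:
    """Toy agent with memory, a fixed planner, and tool usage."""

    def __init__(self, tools):
        self.tools = tools          # tool usage: name -> callable
        self.memory = []            # short-term memory of observations

    def perceive(self, observation):
        # Perception: record incoming information in memory.
        self.memory.append(observation)

    def plan(self, goal):
        # Planning: a real agent asks the LLM to decompose the goal;
        # here the plan is a fixed single tool call.
        return [("search", goal)]

    def act(self, goal):
        # Action: execute the plan by invoking tools.
        return [self.tools[name](arg) for name, arg in self.plan(goal)]
```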
On the basis of individual agent applications, the Qizhi platform supports building multi-agent applications. User-built agents and agents created on the Qizhi platform can be integrated into the same agent group to solve a problem together, and the platform provides functions to control and regulate communication and collaboration between agents, as well as to save and load the group's short-term history. Agents within the same group can share information and pass messages to one another, iterating until the problem is solved. Each group instance's history is isolated from the others, providing independent working environments; an agent in a group acts as a proxy for the original agent in the new environment, responsible for message passing between the group and the original agent.
Moreover, we can effectively integrate agent functionalities by specifying the order of operations among the agents, allowing tasks to be completed efficiently. As shown in the diagram below, when creating a group in Qizhi, a workflow diagram can be constructed with a dictionary parameter:
We have the Planner, Engineer, Executor, and Critic agents. The Planner can make plans, the Engineer can write code, the Executor can execute code, and the Critic can provide feedback.
The desired workflow is for the Planner to first make a plan, followed by the Engineer issuing instructions, then either the Executor executing or the Critic providing feedback. The Critic can return results to the Planner or ask the Engineer to recalculate, while the Executor can submit execution results to the Engineer, and the Planner can decide whether to revise the plan.
In the parameter settings below, each key is the tail of a running-order arrow and each value in its list is an arrow's head.
"relationship_graph": {
    "planner": ["engineer"],
    "engineer": ["executor", "critic"],
    "critic": ["engineer", "planner"],
    "executor": ["engineer"]
}
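Read as an adjacency list, a dictionary like this also makes it easy to check whether a proposed running order obeys the arrows; a small sketch (the graph values mirror the Planner/Engineer/Executor/Critic example above):

```python
relationship_graph = {
    "planner": ["engineer"],
    "engineer": ["executor", "critic"],
    "critic": ["engineer", "planner"],
    "executor": ["engineer"],
}

def is_valid_order(order):
    """True if every consecutive pair of agents follows an arrow."""
    return all(b in relationship_graph.get(a, ())
               for a, b in zip(order, order[1:]))
```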
Multi-agent groups provide a platform not only for cooperation among agents but also for adversarial interaction: agents can refute and compete with one another.
The Qizhi platform will soon launch a memory function for large models. Building on it, higher-level capabilities such as reflection and summarization will be added, allowing agents to learn and summarize during interactions and to keep growing in capability even when training data is limited. With memory enabled, these interactions help improve the capabilities of individual agents. Future plans for agents also include multi-modal storage, extraction, and processing, as well as stronger tool-calling capabilities.

08#

Memory (Coming Soon)

The Qizhi platform provides simple settings for businesses to enable agent memory functions. Once the memory function is activated, interactions in business scenarios with agents will be recorded and converted into long-term memory based on pre-defined business requirements. The conversion process mainly involves fact extraction during interactions, updating existing long-term memory, deletion, etc. When interacting with agents again in business scenarios, the memory service will provide the most relevant long-term memory based on the current context, offering a more relevant user experience.
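A naive sketch of the long-term memory cycle described above: extracted facts are stored with crude de-duplication, and retrieval scores each fact by word overlap with the current context. The real service would use fact extraction by an LLM and embedding-based relevance, not word overlap:

```python
class MemoryStore:
    """Toy long-term memory: store extracted facts, retrieve by relevance."""

    def __init__(self):
        self.facts = []

    def add_fact(self, fact: str):
        # Crude stand-in for the update/de-duplication step.
        if fact not in self.facts:
            self.facts.append(fact)

    def most_relevant(self, context: str, top_k: int = 1):
        # Score each stored fact by word overlap with the current context.
        ctx_words = set(context.lower().split())
        scored = sorted(
            self.facts,
            key=lambda f: len(ctx_words & set(f.lower().split())),
            reverse=True,
        )
        return scored[:top_k]
```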

Memory Application Case:

During conversations with a large model application, a user reveals their preferences, but these are forgotten after many rounds of dialogue (the model then responds incorrectly):

1. User: Sing a song for me.
2. Application: I can only sing pop songs, is that okay?
3. User: Yes, I like Jay Chou.
4. Application: Okay (clears throat), starting to sing Chrysanthemum Terrace.
5. … (over 100 rounds)
6. User: Play a song I like.
7. Application: Okay, what song do you like?

After a certain number of dialogue rounds, the memory function's storage and extraction steps yield key preference settings for the character, which can later be retrieved based on user input (such as user preferences).


09#

Application Case: Build an AI Robot in 10 Minutes

On the Qizhi platform, it only takes 10 minutes, without writing a single line of code, to create an AI robot empowered by large model capabilities. This robot can respond to user inquiries 24/7 and can answer private domain questions, becoming a dedicated robot for the business.

Creating an AI robot on the Qizhi platform is very convenient:

  1. Create a large model application, including conversation, plugins, workflows, agents, and other types of applications.

  2. After creating the application, apply for API permissions, and then submit a form to create a Feishu robot on the Qizhi platform to convert the application into a Feishu robot.

  3. You can also add private domain knowledge to the robot, enable knowledge retrieval enhancement (RAG), and add memory support for long-term memory, gradually enriching and expanding the robot’s capabilities to better meet user inquiries.


10#

Outlook

In the future, the Qizhi platform will provide more practical development kits, continuously improving the efficiency and output of large model development for businesses. At the same time, it will enhance the integrated ecosystem of training and inference to better meet the needs of vertical business scenarios.