Install Ollama and Open-WebUI on Windows Using Docker Compose

In the field of artificial intelligence, GPT (Generative Pre-trained Transformer) models are popular for their powerful text generation capabilities. However, resource constraints make it difficult for individual users to run or train such large models directly. Fortunately, open-source projects such as Ollama and Open-WebUI make it possible to set up a private GPT-style environment locally. This article shows how to install and configure both projects on Windows using Docker Compose.

1. Environment Preparation

Before we begin, please ensure that you have the following software installed on your Windows system:

  • Docker Desktop: The desktop version of Docker, which allows you to run Docker containers on Windows. You can download and install it from the official Docker website.
  • Docker Compose: A tool for defining and running multi-container Docker applications. Docker Desktop for Windows version 2.0 and above already includes Docker Compose.
  • GPU support in Docker Desktop (optional; set it up only if you have a local NVIDIA GPU): https://docs.docker.com/desktop/gpu/
Enabling GPU support lets containers use the local GPU, which greatly accelerates large-model inference.
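Before moving on, you can confirm the prerequisites from PowerShell or a command prompt. The CUDA image tag below is just an example; any CUDA-enabled image that ships nvidia-smi will do:

```shell
# Check that Docker and the bundled Compose plugin are installed
docker --version
docker compose version

# Optional, GPU setups only: verify that containers can see the GPU.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If the last command prints a table listing your GPU, passthrough is working.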

2. Installation Steps

2.1 Configure Docker Compose

Here is an example of a docker-compose.yml file:

version: '3.8'
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - 11434:11434
    volumes:
      - D:\software\llms\ollama\docker\ollama:/root/.ollama
    container_name: ollama
    pull_policy: if_not_present
    tty: true
    restart: always
    networks:
      - ollama-docker
    # GPU support
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities:
                - gpu

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    pull_policy: if_not_present
    volumes:
      - D:\software\llms\open-webui\data:/app/backend/data
    depends_on:
      - ollama
    ports:
      - 3000:8080
    environment:
      - 'OLLAMA_BASE_URL=http://ollama:11434'
      - 'WEBUI_SECRET_KEY=xxxxyourkey'
      - 'HF_ENDPOINT=https://hf-mirror.com'
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped
    networks:
      - ollama-docker

networks:
  ollama-docker:
    external: false
  • deploy: enables GPU access for the Ollama container; remove this section if you are running on CPU only.
  • HF_ENDPOINT: whichever installation method you choose for Open WebUI (Docker, Docker Compose, Kustomize, or Helm), it downloads a Whisper model from huggingface.co on first start. If huggingface.co is blocked or unreachable (for example, behind the Great Firewall), the installation can hang and eventually fail with an error. Setting the environment variable HF_ENDPOINT to https://hf-mirror.com makes it download the required model from that mirror instead of the official https://huggingface.co.

2.2 Run Docker Compose

In the directory containing the docker-compose.yml file, open the command prompt or PowerShell and run the following command:

docker-compose up -d

This will start the Ollama and Open-WebUI services. The -d flag runs them in the background (detached mode).
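To watch the services come up, you can check their state and follow the logs (the container names are the ones set in the compose file above):

```shell
# List the two services and their current state
docker compose ps

# Follow Ollama's startup logs; Ctrl+C stops following without stopping the container
docker compose logs -f ollama
```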

2.3 Verify Installation

Once the installation is complete, you can verify if Open-WebUI is running successfully by visiting http://localhost:3000. If everything is working correctly, you should see the Open-WebUI interface.
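You can also check the Ollama API directly. On a fresh install no models have been pulled yet, so an empty "models" array in the response is expected:

```shell
# Ollama listens on the host port mapped in docker-compose.yml (11434)
curl http://localhost:11434/api/tags
```

Note that in PowerShell, curl is an alias for Invoke-WebRequest; use curl.exe there to get the plain JSON response.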


2.4 Use Private GPT

You now have a working private GPT environment. You can chat with local models through the Open-WebUI interface, or build your own applications against the API that Ollama exposes.
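As a sketch of basic API usage (the model name llama3 is only an example; any model from the Ollama library works), you can pull a model and query it from the command line:

```shell
# Pull a model inside the running ollama container
docker exec -it ollama ollama pull llama3

# Ask the model a question through the REST API (stream disabled for a single JSON reply)
curl http://localhost:11434/api/generate -d "{\"model\":\"llama3\",\"prompt\":\"Why is the sky blue?\",\"stream\":false}"
```

The escaped quotes suit cmd and PowerShell with curl.exe; in Git Bash you can wrap the JSON body in single quotes instead.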

3. Conclusion

By following the steps above, you can easily install and configure Ollama and Open-WebUI on your Windows system using Docker Compose to create your own private GPT environment. This not only helps you better understand how GPT models work but also provides robust support for your personal projects or research.
