Deploying DeepSeek Large Model Using Ollama
Prerequisites
If you have an NVIDIA graphics card, download the CUDA drivers first:
https://developer.nvidia.com/cuda-downloads
Ollama
Ollama Official Version: https://ollama.com/
My graphics card is in a Windows computer, so I will install using the Windows installer. If your graphics card is in a Linux machine, you can install with the following command:
curl -fsSL https://ollama.com/install.sh | sh
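After the script finishes, you can verify the installation (the service check assumes a systemd-based distribution, which is what the install script targets):

```shell
# Check the installed Ollama version
ollama --version

# The install script registers Ollama as a systemd service;
# confirm that it is running
systemctl status ollama
```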
Of course, Ollama can run not only the DeepSeek models but also many other models: https://ollama.com/search
# Model installation commands
# 1.5B Qwen DeepSeek R1
# Required space approximately 1.1G
ollama run deepseek-r1:1.5b
# 7B Qwen DeepSeek R1
# Required space approximately 4.7G
ollama run deepseek-r1:7b
# 8B Llama DeepSeek R1
# Required space approximately 4.9G
ollama run deepseek-r1:8b
# 14B Qwen DeepSeek R1
# Required space approximately 9G
ollama run deepseek-r1:14b
# 32B Qwen DeepSeek R1
# Required space approximately 20G
ollama run deepseek-r1:32b
# 70B Llama DeepSeek R1
# Required space approximately 43G
ollama run deepseek-r1:70b
# 671B DeepSeek R1 (the full model, not a distillation)
# Required space approximately 404G
ollama run deepseek-r1:671b
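Which tag to pull depends mostly on how much VRAM (or RAM, for CPU inference) you have available. As a rough sketch, here is a hypothetical helper that maps available memory in GiB to one of the tags above; the thresholds are my own assumptions based on the download sizes listed, not official requirements:

```shell
#!/bin/sh
# suggest_tag: print a deepseek-r1 tag for a given amount of
# available memory in GiB. Thresholds are rough assumptions,
# not official hardware requirements.
suggest_tag() {
    gb=$1
    if   [ "$gb" -ge 48 ]; then echo "deepseek-r1:70b"
    elif [ "$gb" -ge 24 ]; then echo "deepseek-r1:32b"
    elif [ "$gb" -ge 12 ]; then echo "deepseek-r1:14b"
    elif [ "$gb" -ge 8  ]; then echo "deepseek-r1:8b"
    elif [ "$gb" -ge 6  ]; then echo "deepseek-r1:7b"
    else                        echo "deepseek-r1:1.5b"
    fi
}

suggest_tag 16   # prints deepseek-r1:14b
```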
# To listen on all network interfaces (so other machines can connect),
# set the Windows environment variable OLLAMA_HOST to 0.0.0.0
# Start command
ollama serve
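Once `ollama serve` is running, you can confirm that the API is reachable; by default it listens on port 11434:

```shell
# List the locally installed models via the Ollama REST API.
# Replace localhost with the server's IP if you set
# OLLAMA_HOST to 0.0.0.0 and are querying from another machine.
curl -s http://localhost:11434/api/tags
```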
Open WebUI
Official installation documentation: https://docs.openwebui.com/
Translation of Open WebUI official documentation:
Note:
When installing Open WebUI using Docker, be sure to include
-v open-webui:/app/backend/data
in the command. This step is crucial: it ensures your database is correctly persisted and prevents data loss.
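If you want to confirm where that data actually lives on the host, Docker can show you the named volume (this assumes the volume name `open-webui` used in the commands below):

```shell
# Show metadata for the named volume, including its
# mountpoint on the Docker host
docker volume inspect open-webui
```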
Installation with the default configuration
1. If you have Ollama installed on your computer, you can use the following command:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
2. If Ollama is on another server, please use the following command:
When connecting to Ollama on another server, change OLLAMA_BASE_URL to the server’s URL:
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
To run Open WebUI with Nvidia GPU support, please use the following command:
docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
3. Installation for OpenAI API usage only
If you are only using the OpenAI API, please use the following command:
docker run -d -p 3000:8080 -e OPENAI_API_KEY=your_secret_key -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
4. Open WebUI installation with bundled Ollama support
This installation method uses a separate container image that bundles Open WebUI with Ollama for simplified setup using a single command. Choose the appropriate command based on your hardware setup:
With GPU support: utilize GPU resources by running the following command:
docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
Only CPU: If you are not using a GPU, please use the following command:
docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
Both commands provide a simple, bundled installation of Open WebUI and Ollama, so you can get everything up and running quickly.
# Commands I used
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=http://192.168.1.100:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
Access: http://192.168.1.120:3000
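If the page does not load, a couple of quick checks on the Docker host help narrow things down (the container name and port match the command above):

```shell
# Is the container running?
docker ps --filter name=open-webui

# Inspect recent logs for startup errors
docker logs --tail 50 open-webui

# Is the port answering locally on the Docker host?
curl -sI http://localhost:3000
```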

About
https://www.oiox.cn/
https://www.oiox.cn/index.php/start-page.html
CSDN, GitHub, 51CTO, Zhihu, Open Source China, SiFou, JueJin, JianShu, Huawei Cloud, Alibaba Cloud, Tencent Cloud, Bilibili, Toutiao, Sina Weibo, Personal Blog
Search for “Xiao Chen Operations” across the web
Articles mainly published on WeChat Official Account