How to Implement Local Large Models with Ollama

I previously installed Tsinghua University’s model directly, but today I learned about a tool called Ollama, which is said to be an essential AI tool, so I wanted to try installing it. The actual installation had quite a few pitfalls for someone like me without a VPN, but fortunately I managed to complete the local deployment without one in the end. Let’s see how it was done.

I basically followed a tutorial from Bilibili; here’s the link:

[How to Implement Local Large Models with Ollama | Bilibili](https://www.bilibili.com/video/BV13e1jY9EmZ/?spm_id_from=333.337.search-card.all.click&vd_source=b5b37884a7c08a77d4a213d53cde7dc7)

However, the author seems to be based in Taiwan, and the video skips many installation details. He installed Docker in three minutes in the video; I spent two hours and still hadn’t figured it out.

# 1 Install Ollama

This step is quite simple: just download the installation package from the official website and run it.
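Once the installer finishes, you can sanity-check the setup from a terminal. These are standard Ollama CLI commands, nothing specific to my machine:

```
# Confirm the Ollama CLI and background service are reachable
ollama --version

# List locally installed models (empty right after a fresh install)
ollama list
```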


# 2 Install Open WebUI

Next, the trouble begins. Installing the Open WebUI interface requires Docker first, and installing Docker from within China is quite a hassle.


## 2.1 Install Docker

The video tutorial suggests downloading Docker Desktop directly from the official website. The connection to the site is unstable at times, so I grabbed the installation package while it was reachable.


However, after installing it, running the installation command from the Open WebUI official website didn’t work; there was no response after pressing Enter.

I realized that it was likely caused by Docker Engine being stopped. I learned that I needed to set up the Windows environment and install WSL.
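If you hit the same silent failure, it helps to confirm whether the Docker Engine is actually up before retrying the install command. These are standard Docker CLI checks:

```
# Fails with a connection error if the Docker Engine is stopped
docker info

# Shows client and server versions; a missing Server section also means the engine is down
docker version
```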

## 2.2 Enable Windows Features

This part has a comprehensive introduction on the Runoob tutorial site:

[Windows Docker Installation | Runoob](https://www.runoob.com/docker/windows-docker-install.html)

In simple terms, you need to enable Hyper-V under Windows’ “Turn Windows features on or off”. Besides Hyper-V, other tutorials also call for enabling Containers, Windows Subsystem for Linux, and Virtual Machine Platform. To be safe, I enabled all of them.
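If you prefer the command line to the Control Panel dialog, the same features can be enabled from an elevated PowerShell prompt. This is just a sketch using the standard DISM feature names; a reboot is usually required afterwards:

```
# Run in an elevated (Administrator) PowerShell window, then reboot
dism.exe /online /enable-feature /featurename:Microsoft-Hyper-V-All /all /norestart
dism.exe /online /enable-feature /featurename:Containers /all /norestart
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
```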

Then I needed to install WSL.

## 2.3 Install WSL

The official introduction to WSL is as follows:

[What is Windows Subsystem for Linux | Microsoft Learn](https://learn.microsoft.com/zh-cn/windows/wsl/about)

However, the default installation method did not work.

```
wsl --install
```

It kept responding that it could not resolve the name or address of the server.

I found some methods online, and the following two commands were effective, setting WSL2 as the default and completing the download and installation.

```
wsl --set-default-version 2
wsl --update --web-download
```
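To confirm the result, you can use these standard wsl.exe flags:

```
# Show installed distributions and which WSL version each one uses
wsl -l -v

# Show the overall WSL status, including the default version
wsl --status
```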

After the installation was completed, I restarted my computer, and Docker was running smoothly.


## 2.4 Install Open WebUI

Now it’s finally time to install Open WebUI itself. According to the project’s homepage, systems that already have Ollama installed can run the following command directly:

```
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```

This process took a long time; I don’t know whether that was because I wasn’t using a VPN.
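While waiting, you can keep an eye on the container with ordinary Docker commands; the container name here matches the --name flag used above:

```
# The open-webui container should appear once the image has been pulled and started
docker ps

# Follow the container's startup logs
docker logs -f open-webui
```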


After a very long wait, the download finally finished. Once it was done, an Open WebUI entry automatically appeared in the Docker Desktop interface, and I could log in to the Ollama web interface directly from the browser.


At this point, the basic environment is successfully configured. You can log into the web interface at localhost:3000; everything runs locally, so it works even without an internet connection.

# 3 Import Models

Next, it’s time to import local language models. This step seems relatively simple: you can search for the model you want in the Open WebUI interface, and if Ollama supports it, you can download it directly.


You can also check the models available in Ollama’s Library.
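Alternatively, any model from the Ollama library can be pulled from the command line, and it will then show up in Open WebUI’s model list as well. The deepseek-r1:7b tag below is simply the model I ended up using:

```
# Download the model into Ollama's local store
ollama pull deepseek-r1:7b

# Or chat with it directly in the terminal
ollama run deepseek-r1:7b
```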


After installation, you can start a conversation. Since this is a local deployment and my computer’s specs are quite average, even the deepseek-r1 7b model takes a while to respond, but at least it works. Let’s see what interesting application scenarios arise.
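If responses feel slow, recent Ollama versions can show whether the model is running on the GPU or falling back to the CPU (this assumes a reasonably current Ollama build):

```
# Shows loaded models along with the processor (CPU/GPU) they are using
ollama ps
```

If it reports CPU only, slow responses are to be expected on modest hardware.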

