The Easiest Guide to Ollama Local Large Models

First of all, Lao Wang wants to say that I am a layman too, so I understand the feeling of being at a loss with these new technologies. That is why today's article is one that anyone can easily follow.
The two main tools used here are: Ollama and Jan.AI.
Computer requirements: at least 16 GB of RAM; if you have no dedicated graphics card, download a smaller model. Ideally, have 32 GB or more of RAM and a dedicated graphics card with more than 8 GB of video memory.
Since Ollama needs to download large model files from the internet, and they are quite big, it is recommended to set two system environment variables in Windows before starting:
OLLAMA_HOST=http://127.0.0.1:11434/
OLLAMA_MODELS=D:\Ollama
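What these two variables do: OLLAMA_HOST sets the address the local Ollama service listens on, and OLLAMA_MODELS sets where model files are stored. A minimal Python sketch of how the effective settings are resolved (the fallback defaults shown are assumptions based on Ollama's documented behavior, not something from this guide):

```python
import os
from pathlib import Path

def effective_ollama_settings():
    """Return the (host, models_dir) the local Ollama service will use.

    The fallback defaults are assumptions: Ollama's documented defaults
    are 127.0.0.1:11434 for the service address and ~/.ollama/models
    for model storage when the variables are not set.
    """
    host = os.environ.get("OLLAMA_HOST", "http://127.0.0.1:11434/")
    models_dir = os.environ.get("OLLAMA_MODELS",
                                str(Path.home() / ".ollama" / "models"))
    return host, models_dir
```

With the two variables above set, this returns `http://127.0.0.1:11434/` and `D:\Ollama`; without them, models would land under your user profile on the C drive, which is exactly why we set them first.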
After setting the system variables, restart your computer and then start installing the Ollama software.
Search for Ollama on Microsoft Bing and you can easily find the Ollama website; download the Ollama software from there. Download link: https://ollama.com/download
The software is about 200 MB (206,423 KB); use the default installation.
After the installation is complete, start a PowerShell terminal as administrator, and you can pull a large model from the command line. The model is downloaded to the path set in the second environment variable; mine went to the D:\Ollama folder. If you do not set this variable in advance, the model files are downloaded to the C drive.
We start again from Ollama's webpage:

Click Models in the upper right corner to select the model we want to download. For example, we want Alibaba's Qwen (Tongyi Qianwen):

I chose the Qwen 7b model; the command for downloading it is underlined in red in the screenshot.

Now we can pull the large model in the PowerShell terminal as an administrator by entering the above command: ollama run qwen:7b

The download is quite fast. Once it completes, you can start chatting right here. For example, I asked directly: What universities are there in Zhengzhou? Below is Qwen's response:
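Besides chatting in the terminal, the Ollama service also answers HTTP requests on the port from OLLAMA_HOST, so other programs can talk to the model. Below is a minimal standard-library sketch, assuming Ollama is running with qwen:7b already pulled; the /api/generate endpoint and "stream" flag come from Ollama's public API, not from this guide:

```python
import json
import urllib.request

# Matches the OLLAMA_HOST address set earlier
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    # "stream": False asks Ollama to return one complete JSON response
    # instead of a stream of partial chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama service and return its reply."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# With Ollama running, something like:
#   ask("qwen:7b", "What universities are there in Zhengzhou?")
# returns the same kind of answer you see in the terminal.
```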

Everyone might feel that communicating in PowerShell is a bit plain, so let's find a better-looking local GUI tool to chat with. First, go to the website: https://jan.ai/
Then download the Jan software; it is about 123 MB (126,047 KB).

Next comes the installation. After installing, you need to connect Ollama and Jan; the Jan.AI website has a help guide. Enter Ollama in the search box in the upper right corner of the Jan page, and you will find the following page:

The main steps include:
1. Run a large model in the PowerShell administrator using the command, for example: ollama run qwen:7b
2. In the Jan folder, find ~/jan/engines/openai.json, then modify this json file by replacing the original URL with the following:
"full_url": "http://localhost:11434/v1/chat/completions"
3. Go to the ~/jan/models folder and create a folder for the Ollama model, making sure the folder name matches the name of the model downloaded to the D drive, for example, the Qwen model just now. Below is the path and file name of the model downloaded by Ollama.
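For step 2, the engines/openai.json file would end up containing the URL above. Only the full_url entry comes from this guide; any other keys already present in the file can be left as they are:

```json
{
  "full_url": "http://localhost:11434/v1/chat/completions"
}
```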

In the ~/jan/models folder, create a Qwen folder. We can copy a model.json file from one of the other model folders into it and modify it. For example, I opened the model.json for qwen-7b in Notepad and, following the instructions on the Jan.AI website, modified the following:
  • Set the id property to the Ollama model name.
  • Set the format property to api.
  • Set the engine property to openai.
  • Set the state property to ready.
After modifying the model.json file so that it contains the following entries, save it.
"id": "qwen",
"format": "api",
"engine": "openai"
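As complete, valid JSON, the modified entries would look like this (adding the "state": "ready" entry from the list above; any other fields copied over from the original model.json can stay unchanged):

```json
{
  "id": "qwen",
  "format": "api",
  "engine": "openai",
  "state": "ready"
}
```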
Then, start the Jan program, click the hub icon on the left side of the page to select the model:

We can scroll up and down the model page to find the Qwen model. If the steps above successfully linked the Ollama model with Jan, "Use" will appear on the right side of the model; click Use to load it.

However, for some reason my Qwen model cannot be called from Jan, while the Gemma and Mistral models can.
For example, enter "longwall automation" in PowerShell and in Jan respectively to compare the two interfaces:

This is a simple introduction. The rest can be done gradually by everyone!

Closing note: Lao Wang talks about coal machinery industry equipment development!
