Introduction and Testing of Ollama

1. Introduction to Ollama

Ollama is an open-source tool designed for the convenient deployment and execution of large language models (LLMs) on local machines.

It provides a simple and efficient interface that allows users to easily create, execute, and manage these complex models. Additionally, Ollama comes equipped with a rich library of pre-built models, enabling seamless integration of these models into various applications, significantly enhancing development efficiency and user experience.

2. Installation of Ollama

2.1 Official Website
Ollama official website: https://ollama.com/
The site opens without any access restrictions.
2.2 Downloading Ollama
Click [Download] to go to the download page.
Choose the download for your operating system; here we use Windows as an example.
Click [Download for Windows] and wait for the download to complete.
2.3 Installing Ollama
2.3.1 Double-click [OllamaSetup.exe] to start the installation.
2.3.2 Click [Install] to begin the installation.
Wait for the installation to finish; the installer shows the current Ollama version (0.3.6 at the time of writing).
2.3.3 Notes
The installer closes without any completion prompt; to confirm the installation, look for Ollama in the Start menu.
Also note that if the installer fails to add Ollama to the PATH environment variable, you must add it manually; otherwise you will not be able to start Ollama from the command line.
2.4 Quick Applications of Ollama
2.4.1 Open Command Line Window
Press Win+R, type cmd, and click OK to open the command line window.
2.4.2 Configure Large Models via Ollama
Type ollama list to view the models currently installed locally; on a fresh installation the list will be empty.
2.4.3 Download Pre-trained Models
(1) Click on Models to go to the page of pre-trained models supported by Ollama.
(2) For example, to download the [yi] model, search for [yi] in the search box and then click through to the [yi] model's detail page.
Introduction to the yi model:

yi is a series of bilingual (English and Chinese) large language models trained on a high-quality corpus of 3 trillion tokens. Pre-trained variants are available in 6b, 9b, and 34b sizes, where "b" stands for billion parameters, so 6b means 6 billion parameters.
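The size suffix can be read directly off a model tag. A small sketch, assuming tags follow the name:size-variant pattern seen in the examples here (e.g. yi:6b-chat):

```python
def param_count(tag):
    """Parse an Ollama-style model tag like 'yi:6b-chat' and
    return the parameter count implied by its size suffix."""
    # assumed tag format: name:<size><unit>[-variant]
    size = tag.split(":")[1].split("-")[0]   # e.g. '6b'
    units = {"b": 10**9, "m": 10**6}         # b = billion, m = million
    return int(float(size[:-1]) * units[size[-1]])

print(param_count("yi:6b-chat"))  # → 6000000000
```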

(3) In the command-line window, type: ollama run yi:6b-chat. This command runs the yi model, but it first checks whether the model is available locally; if it is not, it downloads the model and then runs it.

To download the model without starting a chat session, use: ollama pull yi:6b-chat

Screenshot of the yi model download:

For now, Ollama's downloads are not restricted, so models can be fetched directly at good speed (over 20 MB/s in this test), though it cannot be ruled out that this may change in the future.
(4) After the download completes, the yi model starts automatically, and you can type questions directly to interact with it.
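Besides the interactive prompt, a running Ollama instance also serves a local REST API (by default on port 11434). A minimal sketch using only the Python standard library, with the yi:6b-chat tag from above; the prompt text is just an example:

```python
import json
import urllib.request

# Ollama's default local API endpoint
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model, prompt):
    """Build the JSON body for a non-streaming /api/generate call."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")

def ask(model, prompt):
    """Send one prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# e.g. ask("yi:6b-chat", "Why is the sky blue?")  -- requires `ollama serve` running
```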
2.4.4 Dialog with the yi Model
As the dialog shows, the answer to the first question is logical and reasonable. On the second, logic-oriented question, however, the 6b model is still inaccurate and does not understand the question well.
3. Summary of Ollama Commands
To facilitate everyone in operating Ollama, here are some commonly used ollama commands:
ollama serve: Start the Ollama service; the basis for all subsequent operations.
ollama create: Create a model from a Modelfile; useful for custom models or existing local model files.
ollama show: Display model information (architecture, parameters, etc.) to assist in model analysis.
ollama run: Run a model, e.g. ollama run qwen2; if the model is not present locally it is downloaded automatically, useful for quickly testing a model.
ollama pull: Pull a model from the registry, e.g. ollama pull llama3; convenient for obtaining official or third-party models.
ollama push: Push a model to the registry for easy sharing.
ollama list: List the models installed locally for easy management and selection.
ollama cp: Copy a model; useful for backups or creating model copies.
ollama rm: Delete a model to free up storage space.
ollama help: Show help for any command, allowing users to quickly look up command usage.
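These subcommands are also easy to script. A minimal sketch that shells out to the ollama binary (assumes ollama is installed and on the PATH):

```python
import subprocess

def ollama_cmd(*args):
    """Build the argument list for an ollama subcommand."""
    return ["ollama", *args]

def run_ollama(*args):
    """Run an ollama subcommand and return its stdout (raises on a nonzero exit)."""
    result = subprocess.run(ollama_cmd(*args), capture_output=True, text=True, check=True)
    return result.stdout

# e.g. run_ollama("list") returns the same table as typing `ollama list`
```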
