Local Practice with Ollama Open Source Large Model

Introduction: This article will guide you through downloading and using Ollama, a powerful tool for interacting with open-source large language models (LLMs) on your local computer.

Unlike closed-source models such as ChatGPT, the open-source models that Ollama runs offer transparency and customization, making it a valuable tool for developers and AI enthusiasts.

In this article, we will explore how to download Ollama and interact with two exciting open-source LLMs: Llama 2 (a text-based model from Meta) and LLaVA (a multimodal model that can handle both text and images).

How to Download Ollama

To download Ollama, visit the official Ollama website at https://ollama.com/ and click the “Download” button.

Figure 1 – Ollama.com Homepage

Ollama supports three operating systems: macOS, Linux, and Windows. At the time of writing, the Windows version is still in preview.

Figure 2 – Ollama Download Page
Choose the installer that matches your operating system. Once the download completes, run the file to install Ollama.
Linux users do not download an installer; instead, run the install command shown on the download page.
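
At the time of writing, the Linux command shown on the site is a one-line install script along these lines (check the download page for the current version):

curl -fsSL https://ollama.com/install.sh | sh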

How to Run Ollama

To demonstrate what open-source models can do locally, I will walk through examples of different models and use cases, so that you can comfortably run any open-source LLM that Ollama supports in the future.

Now, let’s start with the first example!

How to Run the Llama 2 Model from Meta

Llama 2 is an open-source LLM from Meta. We will interact with it much as we would with ChatGPT (free version), starting with text-based conversation.

First, we use the following command to download the model:

ollama run llama2

While the model downloads, you should see output similar to the following. Please be patient, as it may take a while:

pulling manifest
pulling 8934d96d3f08... 100% ▕████████████████▏ 3.8 GB
pulling 8c17c2ebb0ea... 100% ▕████████████████▏ 7.0 KB
pulling 7c23fb36d801... 100% ▕████████████████▏ 4.8 KB
pulling 2e0493f67d0c... 100% ▕████████████████▏   59 B
pulling fa304d675061... 100% ▕████████████████▏   91 B
pulling 42ba7f8a01dd... 100% ▕████████████████▏  557 B
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> Send a message (/? for help)

Ollama should now prompt you to send a message or question.

For example, we can ask llama2: “What can you do for me?” (In fact, you can ask anything, even in Chinese).

If you ask the same question as above, you should receive a response like this:

>>> What can you do for me?
As a responsible AI language model, I am here to assist you with any questions or tasks you may have. Here are some examples of things I can help with:
1. Answering questions: I can provide information on a wide range of topics, from science and technology to history and culture.
2. Generating ideas: I can help you brainstorm ideas for creative projects, or provide suggestions for solving problems.
3. Writing assistance: I can help you with writing tasks such as proofreading, editing, and suggesting alternative words or phrases.
4. Translation: I can translate text from one language to another.
5. Summarizing content: I can summarize long pieces of text, such as articles or documents, into shorter, more digestible versions.
6. Creativity: I can help you generate creative ideas for stories, poems, or other forms of writing.
7. Language learning: I can assist you in learning a new language by providing grammar explanations, vocabulary lists, and practice exercises.
8. Chatting: I'm here to chat with you and provide a response to any question or topic you'd like to discuss.
Please let me know if there is anything specific you would like me to help you with.
>>> Send a message (/? for help)

The above is the response I received from Llama 2.
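
Incidentally, you don't have to use the interactive prompt at all: ollama run also accepts a prompt directly on the command line and prints the response to standard output, which is handy for scripting. For example:

ollama run llama2 "What can you do for me?"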

If you want to exit the session, type /bye (or press Ctrl+D).
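
Behind the scenes, Ollama also runs a local HTTP server (on port 11434 by default), so you can query a model programmatically instead of through the chat session. Here is a minimal sketch using curl, with "stream": false so the reply arrives as a single JSON object:

curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "What can you do for me?",
  "stream": false
}'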

Now, let’s run a multimodal model, to which you can send an image and ask questions about it.

Running the LLaVA Model

LLaVA is an open-source multimodal LLM. Multimodal models can accept multiple types of input, such as text and images, and generate responses accordingly.

Using this model, we will now pass an image and ask a question based on that image.

First, let’s download the model:

ollama run llava

After successfully downloading the model, you should see something similar in the terminal:

pulling manifest
pulling 170370233dd5... 100% ▕████████████████▏ 4.1 GB
pulling 72d6f08a42f6... 100% ▕████████████████▏ 624 MB
pulling 43070e2d4e53... 100% ▕████████████████▏  11 KB
pulling c43332387573... 100% ▕████████████████▏   67 B
pulling ed11eda7790d... 100% ▕████████████████▏   30 B
pulling 7c658f9561e5... 100% ▕████████████████▏  564 B
verifying sha256 digest
writing manifest
removing any unused layers
success
>>> Send a message (/? for help)

I found an image on pexels.com, downloaded it to my local Downloads directory, and then gave LLaVA the path and filename.

Here is the output I received from LLaVA:

>>> What's in this image? ./Downloads/test-image-for-llava.jpeg
Added image './Downloads/test-image-for-llava.jpeg'
The image shows a person walking across a crosswalk at an intersection. There are traffic lights visible, and the street has a bus parked on one side. The road is marked with lane markings and a pedestrian crossing signal. The area appears to be urban and there are no visible buildings or structures in the immediate vicinity of the person.
>>> Send a message (/? for help)

As you can see, it describes the image quite accurately. Feel free to try other images and prompts and have fun with it.
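
The HTTP API works for multimodal prompts as well: the generate endpoint accepts a list of base64-encoded images. A minimal sketch (note that base64 flags vary by platform; -w0 below is the GNU coreutils form, while macOS uses base64 -i <file>):

# Encode the image as a single-line base64 string (GNU coreutils syntax).
IMG=$(base64 -w0 ./Downloads/test-image-for-llava.jpeg)

# Send the prompt and the image to the local Ollama server.
curl http://localhost:11434/api/generate -d "{
  \"model\": \"llava\",
  \"prompt\": \"What's in this image?\",
  \"images\": [\"$IMG\"],
  \"stream\": false
}"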

Conclusion

That’s it! Isn’t it fun? With Ollama, we can run powerful LLMs like Llama 2 and LLaVA on our own computers.
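
One practical note: these models occupy several gigabytes of disk space, so it is worth knowing how to list and remove them when you are done experimenting:

ollama list        # show all locally downloaded models
ollama rm llava    # delete a model you no longer need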

Go download Ollama and explore the exciting world of open-source large language models!

Author: Lu Tixia

Reference:

https://www.freecodecamp.org/news/how-to-run-open-source-llms-locally-using-ollama
