
Ollama is a framework for running large language models (LLMs) locally, letting users download, run, and interact with models on their own machines. It is designed to simplify working with large models, so that non-expert users can use models that would otherwise demand high-end hardware and complex setup. Ollama currently supports Windows, Linux, and macOS.
Official website: https://ollama.com/
GitHub repository: https://github.com/ollama/ollama
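
Once installed, Ollama exposes a local REST API (by default on port 11434) in addition to its CLI. The sketch below, using only the Python standard library, builds a request against the documented `/api/generate` endpoint; the model name `llama3` is illustrative and assumes you have pulled it first (e.g. `ollama pull llama3`). The actual network call is commented out since it requires a running Ollama server.

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint.

    stream=False asks the server to return one complete JSON object
    instead of a stream of partial responses.
    """
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("llama3", "Why is the sky blue?")
# With an Ollama server running locally, the generated text is in the
# "response" field of the returned JSON:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

The same endpoint can be exercised from the command line with `curl`, and Ollama also ships official client libraries; this standard-library version just makes the request shape explicit.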



