
Ollama Local LLMs

Author: Ale
Date: 31.10.2024

This presentation shows how to install Ollama and download models to run local LLMs. The main advantages of using Ollama are flexibility, control over your data, and independence from external APIs.


Description

Ollama is a platform for running large language models (LLMs) locally. It provides tools and libraries to easily set up and manage LLMs on your local machine. Some of the features of Ollama include:

  • Local Execution: Run LLMs on your local machine.
  • Model Management: Easily download, install, and manage different models.
  • Custom Models: Train and use your own custom models.
  • Integration: Integrate with various tools and platforms.


Installation

Hardware Prerequisites

  • A machine with a modern CPU (Intel or AMD).
  • At least 8 GB of RAM.
  • Sufficient disk space to store the models you plan to use.
  • A GPU is recommended for better performance, but not required.

Steps

Install Ollama

First, you need to install Ollama. You can do this by going to Ollama’s download page.

Download
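
On Linux, the download page also offers a one-line install script; at the time of writing it looked like the command below, but check the download page for the current version. macOS and Windows use a graphical installer instead.

curl -fsSL https://ollama.com/install.sh | sh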

Verify that Ollama is running

Open the terminal and run the following command:

ollama --version

The expected output looks similar to: ollama version is 0.3.13

Search for a model

Go to Ollama’s model library and search for a model that suits your needs.

Library

Install a model

Next, you need to install a model. You can do this using the Ollama CLI:

ollama pull <name:version>

Models can take up several GB of disk space, depending on the number of parameters they have.
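
For example, to pull the Qwen2 model used in the verification step below (any other model from the library works the same way):

ollama pull qwen2:latest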

Verify Installation

To verify that the installation was successful, you can run the following command:

ollama list

This should list the recently installed model. For example:

NAME            ID              SIZE      MODIFIED
qwen2:latest    dd314f039b9d    4.4 GB    3 weeks ago

Running the Model

You can now run the model locally using the following command:

ollama run <name:version>

The terminal will switch to an interactive prompt where you can chat with the model.

Run Model
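
You can also pass a prompt directly on the command line to get a single answer and return to the shell instead of opening an interactive chat (the prompt text here is just an example):

ollama run qwen2:latest "Explain in one sentence what a local LLM is."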

Chatbot UI

By default, the terminal offers only plain, unformatted interaction, but you can also use the model through a chatbot UI.

Run Model
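
Chatbot UIs typically talk to the REST API that the Ollama server exposes locally, by default on port 11434. You can test that API yourself with a curl request like the following sketch (model name and prompt are placeholders):

curl http://localhost:11434/api/generate -d '{"model": "qwen2:latest", "prompt": "Hello!", "stream": false}'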

Next Steps

You can now integrate the local model with the Continue extension, which gives you AI-assisted coding, similar to GitHub Copilot, for free.

IDE Integration
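
Continue connects to the same local Ollama server as the CLI, so before configuring the extension it is worth checking that the server is reachable and that the model you want is installed. Assuming the default port, a minimal check is:

curl http://localhost:11434/api/tags

This returns a JSON list of the locally installed models. The extension is then pointed at the local model from its own configuration; the exact steps depend on the Continue version.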


Resources