PolarSPARC

Pi Day Special - Ollama on Raspberry Pi 5


Bhaskar S 03/14/2025


Overview

March 14th is celebrated around the world as Pi Day. In mathematics, Pi represents a constant: the ratio of a circle's circumference to its diameter, approximately equal to 3.14159!

Just a few weeks back, we had shared how one can set up a Raspberry Pi 5 SBC with 8 GB RAM and an M.2 HAT (NVMe SSD) for cost-effective homelab needs!

On this Pi Day, we will try to run LLM model(s) locally on the Raspberry Pi 5 SBC!


Installation and Setup

We will be using the Ollama platform, running in a Docker container, to run the recently released Gemma 3 LLM model.

To install Docker, we will execute a series of commands in the order shown below.

To ensure the Raspberry Pi OS package index is up-to-date, execute the following command in a terminal window:


$ sudo apt-get update


The following should be the typical output:


Output.1

Hit:1 http://deb.debian.org/debian bookworm InRelease
Hit:2 http://deb.debian.org/debian-security bookworm-security InRelease
Hit:3 http://deb.debian.org/debian bookworm-updates InRelease   
Hit:4 http://archive.raspberrypi.com/debian bookworm InRelease  
Reading package lists... Done

To ensure the prerequisite utilities are installed, execute the following command in a terminal window:


$ sudo apt-get install -y ca-certificates curl


The following should be the typical output:


Output.2

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
ca-certificates is already the newest version (20230311).
curl is already the newest version (7.88.1-10+deb12u8).
The following packages were automatically installed and are no longer required:
  libcamera0.3 libwlroots12 lxplug-network
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

To ensure the valid Docker cryptographic signing keys are set up, execute the following commands in a terminal window:


$ sudo install -m 0755 -d /etc/apt/keyrings

$ sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc

$ sudo chmod a+r /etc/apt/keyrings/docker.asc


There is no output generated.

To add the Docker package repository, execute the following command in a terminal window:


$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null


There is no output generated.
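
For reference, on the 64-bit Raspberry Pi OS (bookworm), the generated /etc/apt/sources.list.d/docker.list file should contain a single line similar to the following:


deb [arch=arm64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian bookworm stable
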

To refresh the package index files on the system, execute the following command in a terminal window:


$ sudo apt-get update


The following should be the typical output:


Output.3

Hit:1 http://deb.debian.org/debian bookworm InRelease
Hit:2 http://deb.debian.org/debian-security bookworm-security InRelease                                      
Hit:3 http://deb.debian.org/debian bookworm-updates InRelease                                                
Get:4 https://download.docker.com/linux/debian bookworm InRelease [43.3 kB]                                  
Hit:5 http://archive.raspberrypi.com/debian bookworm InRelease                                               
Get:6 https://download.docker.com/linux/debian bookworm/stable arm64 Packages [36.1 kB]
Fetched 79.4 kB in 1s (154 kB/s)   
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
All packages are up to date.

To install Docker on the system, execute the following command in a terminal window:


$ sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-compose


The following should be the typical trimmed output:


Output.4

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages were automatically installed and are no longer required:
  libcamera0.3 libwlroots12 lxplug-network
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  docker-ce-rootless-extras iptables libip6tc2 libslirp0 pigz python3-docker python3-dockerpty python3-docopt python3-texttable
  python3-websocket python3-yaml slirp4netns
Suggested packages:
  cgroupfs-mount | cgroup-lite firewalld
Recommended packages:
  docker.io
The following NEW packages will be installed:
  containerd.io docker-buildx-plugin docker-ce docker-ce-cli docker-ce-rootless-extras docker-compose docker-compose-plugin
  iptables libip6tc2 libslirp0 pigz python3-docker python3-dockerpty python3-docopt python3-texttable python3-websocket
  python3-yaml slirp4netns
0 upgraded, 18 newly installed, 0 to remove and 0 not upgraded.
Need to get 104 MB of archives.
...[TRIM]...
Setting up python3-texttable (1.6.7-1) ...
Setting up python3-docopt (0.6.2-4.1) ...
Setting up libip6tc2:arm64 (1.8.9-2) ...
Setting up python3-yaml (6.0-3+b2) ...
Setting up docker-buildx-plugin (0.21.1-1~debian.12~bookworm) ...
Setting up containerd.io (1.7.25-1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
Setting up docker-compose-plugin (2.33.1-1~debian.12~bookworm) ...
Setting up docker-ce-cli (5:28.0.1-1~debian.12~bookworm) ...
Setting up libslirp0:arm64 (4.7.0-1) ...
Setting up pigz (2.6-1) ...
Setting up docker-ce-rootless-extras (5:28.0.1-1~debian.12~bookworm) ...
Setting up python3-websocket (1.2.3-1) ...
Setting up python3-dockerpty (0.4.1-4) ...
Setting up slirp4netns (1.2.0-1) ...
Setting up iptables (1.8.9-2) ...
update-alternatives: using /usr/sbin/iptables-legacy to provide /usr/sbin/iptables (iptables) in auto mode
update-alternatives: using /usr/sbin/ip6tables-legacy to provide /usr/sbin/ip6tables (ip6tables) in auto mode
update-alternatives: using /usr/sbin/iptables-nft to provide /usr/sbin/iptables (iptables) in auto mode
update-alternatives: using /usr/sbin/ip6tables-nft to provide /usr/sbin/ip6tables (ip6tables) in auto mode
update-alternatives: using /usr/sbin/arptables-nft to provide /usr/sbin/arptables (arptables) in auto mode
update-alternatives: using /usr/sbin/ebtables-nft to provide /usr/sbin/ebtables (ebtables) in auto mode
Setting up python3-docker (5.0.3-1) ...
Setting up docker-ce (5:28.0.1-1~debian.12~bookworm) ...
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket.
Setting up docker-compose (1.29.2-3) ...
Processing triggers for man-db (2.11.2-2) ...
Processing triggers for libc-bin (2.36-9+rpt2+deb12u9) ...

To add the logged-in user to the docker system group, execute the following command in a terminal window:


$ sudo gpasswd -a ${USER} docker


The following should be the typical output:


Output.5

Adding user bswamina to group docker

To reboot the system (so that the docker group membership takes effect), execute the following command in a terminal window:


$ sudo shutdown -r now


Once the system is up (after the reboot), execute the following command in a terminal window:


$ docker info


The following should be the typical trimmed output:


Output.6

Client: Docker Engine - Community
Version:    28.0.1
Context:    default
Debug Mode: false
Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.21.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.33.1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose
Server:
Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
Images: 0
Server Version: 28.0.1
Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bcc810d6b9066471b0b6fa75f557a15a1cbf31bb
runc version: v1.2.4-0-g6c52b3f
init version: de40ad0
Security Options:
  seccomp
  Profile: builtin
  cgroupns
Kernel Version: 6.6.74+rpt-rpi-2712
Operating System: Debian GNU/Linux 12 (bookworm)
OSType: linux
Architecture: aarch64
CPUs: 4
Total Memory: 7.871GiB
Name: pi-5
ID: e33469fb-fad9-4e0f-adbb-117677bbfabd
Docker Root Dir: /var/lib/docker
Debug Mode: false
Experimental: false
Insecure Registries:
  ::1/128
  127.0.0.0/8
Live Restore Enabled: false
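
As an optional sanity check of the Docker installation, one could run the minimal hello-world container image, which simply prints a greeting message and exits:


$ docker run --rm hello-world


If Docker is working correctly, the output will include the message "Hello from Docker!".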

To download the docker image for the current version of Ollama (0.6.0 at the time of this article), execute the following command in a terminal window:


$ docker pull ollama/ollama:0.6.0


The following should be the typical output:


Output.7

0.6.0: Pulling from ollama/ollama
1b9f3c55f9d4: Pull complete 
b48598350c3a: Pull complete 
70a48d83870c: Pull complete 
753ca2cc2d7f: Pull complete 
Digest: sha256:5ff3909f83e01e5a13a0caa8269b484059e841abd9cfe9a86b079b5eae315866
Status: Downloaded newer image for ollama/ollama:0.6.0
docker.io/ollama/ollama:0.6.0

Assuming that the IP address of the Raspberry Pi 5 system is 192.168.1.90, start the Ollama platform by executing the following command in a terminal window (the volume mount persists the downloaded models in $HOME/.ollama on the host):


$ docker run --rm --name ollama -p 192.168.1.90:11434:11434 -v $HOME/.ollama:/root/.ollama ollama/ollama:0.6.0


The following should be the typical trimmed output:


Output.8

...[TRIM]...
2025/03/14 23:45:47 routes.go:1225: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_CONTEXT_LENGTH:2048 OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NEW_ENGINE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://* vscode-file://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
time=2025-03-14T23:45:38.707Z level=INFO source=images.go:432 msg="total blobs: 0"
time=2025-03-14T23:45:38.707Z level=INFO source=images.go:439 msg="total unused blobs removed: 0"
time=2025-03-14T23:45:38.708Z level=INFO source=routes.go:1292 msg="Listening on [::]:11434 (version 0.6.0)"
time=2025-03-14T23:45:38.709Z level=INFO source=gpu.go:217 msg="looking for compatible GPUs"
time=2025-03-14T23:45:38.711Z level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-03-14T23:45:38.711Z level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="7.9 GiB" available="7.1 GiB"
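
To quickly verify that the Ollama platform is up and reachable on the network, one could query its REST API version endpoint from another terminal window:


$ curl http://192.168.1.90:11434/api/version


This should return a small JSON response, such as: {"version":"0.6.0"}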

Open a new terminal window and execute the following docker command to download and run the Gemma 3 1B parameter LLM model (the smallest variant, which fits comfortably within the 8 GB of system RAM):


$ docker exec -it ollama ollama run gemma3:1b


The following should be the typical output on Linux:


Output.9

pulling manifest 
pulling dbe81da1e4ba... 100%  ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||  815 MB                         
pulling e0a42594d802... 100%  ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||   358 B                         
pulling dd084c7d92a3... 100%  ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||  8.4 KB                         
pulling 0a74a8735bf3... 100%  ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||    55 B                         
pulling cc0038d7c4c6... 100%  ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||   492 B                         
verifying sha256 digest 
writing manifest 
success

Once the Gemma 3 pre-trained LLM model is downloaded successfully, the command will wait for user input.
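
As an aside, to list the LLM model(s) downloaded locally, one could execute the following command in a separate terminal window:


$ docker exec -it ollama ollama list
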

To test the just-downloaded LLM model, execute the following user prompt:


>>> Describe the gpu specifications for NVidia RTX 4090 in the following json format: '{name=str, vram=int, cuda_cores: int, tensor_cores: int}'


The following should be the typical output on Linux:


Output.10

```json
{
  "name": "RTX 4090",
  "vram": 24GB,
  "cuda_cores": 16384,
  "tensor_cores": 8664
}
```

Wow! The Raspberry Pi 5 was very responsive. Very impressive! (Note, however, that the generated output is not strictly valid JSON, since 24GB is not a valid JSON value.)
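
Note that one could also interact with the model programmatically through the Ollama REST API. For example, the following command sends a one-shot prompt and returns the complete response as a single JSON object:


$ curl http://192.168.1.90:11434/api/generate -d '{"model": "gemma3:1b", "prompt": "Why is the sky blue?", "stream": false}'
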

To test if the Gemma 3 LLM model can generate code, execute the following user prompt:


>>> Generate a working code sample in Java for HMAC


The following should be the typical output on Linux:


Output.11

```java
import java.security.MessageDigest;
import java.security.SecureRandom;

public class HMACExample {

  // Function to generate a HMAC-SHA256 hash
  public static String generateHash(String input, String salt, String key) {
    try {
      MessageDigest digest = SecureRandom.getHash(input, salt, key);
      return digest.digest;
    } catch (Exception e) {
      System.err.println("Error generating HMAC: " + e.getMessage());
      return null; // Or handle the error in a more appropriate way
    }
  }

  public static void main(String[] args) {
    String input = "Hello, world!";
    String salt = "MySecretSalt";
    String key = "MySecretKey";

    String hash = generateHash(input, salt, key);

    if (hash != null) {
      System.out.println("HMAC-SHA256 Hash: " + hash);
    }

    // Example of using the hash for verification (simplified)
    String expectedHash = "MySecretKey";
    String verificationResult = verifyHash(hash, expectedHash);

    if (verificationResult != null) {
      System.out.println("Verification successful!");
      System.out.println("Verification Result: " + verificationResult);
    } else {
      System.out.println("Verification failed!");
    }
  }

  // Simple verification function (for demonstration purposes)
  public static String verifyHash(String hash, String expectedHash) {
    return hash == expectedHash;
  }
}
```

Hmm!!! Definitely not what we had expected: the generated code does not even compile, and it does not use the standard Java HMAC API at all. We tried a few more examples and the Gemma 3 model missed on all of them. It is definitely not a good code generator!
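
For reference, HMAC-SHA256 in Java is provided by the standard javax.crypto.Mac API and does not involve a separate salt parameter. The following is a minimal working sketch (the class name and the key/message values here are arbitrary; java.util.HexFormat requires Java 17 or later):


```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.HexFormat;

public class HmacSha256Example {

  // Compute the HMAC-SHA256 of the given message using the given secret key
  public static String hmacSha256(String key, String message) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
    byte[] digest = mac.doFinal(message.getBytes(StandardCharsets.UTF_8));
    // Encode the raw MAC bytes as a lowercase hex string
    return HexFormat.of().formatHex(digest);
  }

  public static void main(String[] args) throws Exception {
    String hash = hmacSha256("MySecretKey", "Hello, world!");
    System.out.println("HMAC-SHA256 Hash: " + hash);
  }
}
```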

Overall, the Raspberry Pi 5 SBC has been an impressive platform for running small LLM models locally!



© PolarSPARC