Chat with Llama3 with Streamlit (and Ollama)

June 17, 2024

Some time ago I was really happy to discover the Ollama project, which allows us to run open LLMs on our laptops.

If you are looking for a quick and easy Ollama UI, you should have a look at the Open Web UI project (formerly Ollama Web UI).

Because you already know about Ollama, right?

But if you want to go one step further and understand how this works in Python, keep reading and you will learn how to create your own Streamlit LLM chat integrated with Ollama, which you can SelfHost.

You can set up Ollama in 5 minutes as per this video, made in collaboration with the Data Zen Community.


The Streamlit Ollama Project

I have covered some very cool Gen AI projects with LLMs and Streamlit, but those LLMs were served by third-party providers with closed-source models.

Why don't we make it fully open?

You can try the project very quickly by following these steps:

  1. Install Python 🐍
  2. Install an IDE (optional)
  3. Clone the repository
  4. Get Ollama Ready
  5. Download Llama3 with Ollama - check other models at the Ollama official page. As simple as:
ollama pull llama3:8b
#ollama list

ollama run llama3:8b #this will make the llama3 model ready for our streamlit App
  6. And install the Python dependencies: for a quick spin, we can use a Python virtual environment to make sure everything works
git clone https://github.com/JAlcocerT/Streamlit-Ollama-Chatbot

python -m venv streamllama #create it

streamllama\Scripts\activate #activate venv (windows)
source streamllama/bin/activate #(linux)

#deactivate #when you are done
Activate the venv and install the packages (they will affect only that venv) 👇
pip install -r requirements.txt #all at once

#pip list
#pip show streamlit #check the installed version
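
Before launching the app, you can double-check that the Ollama server is answering and that the model is pulled. Here is a minimal sketch of such a check (assuming Ollama's default port 11434; this helper script is not part of the repo):

# check_ollama.py - hypothetical helper, not part of the repository.
# Verifies that the local Ollama server answers and that llama3 is pulled.
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=5)  # default Ollama endpoint
resp.raise_for_status()

models = [m["name"] for m in resp.json().get("models", [])]
print("Locally available models:", models)

if not any(name.startswith("llama3") for name in models):
    raise SystemExit("llama3 is not pulled yet - run: ollama pull llama3:8b")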

Now, just spin up the Python Streamlit App with:

streamlit run ollama_chatbot.py
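
Under the hood, the app is little more than Streamlit's chat widgets talking to Ollama's REST API. Here is a minimal sketch of the idea (the actual ollama_chatbot.py in the repository may differ; llama3:8b and the non-streaming /api/chat call are assumptions):

# minimal_chat.py - an illustrative sketch, not the repo's ollama_chatbot.py.
import requests
import streamlit as st

OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama chat endpoint

st.title("Chat with Llama3")

# Streamlit reruns the whole script on every interaction,
# so the conversation history has to live in session_state.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

if prompt := st.chat_input("Ask me anything"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    # One non-streaming request, sending the full history as context
    response = requests.post(
        OLLAMA_URL,
        json={
            "model": "llama3:8b",
            "messages": st.session_state.messages,
            "stream": False,
        },
        timeout=300,
    )
    answer = response.json()["message"]["content"]

    st.session_state.messages.append({"role": "assistant", "content": answer})
    with st.chat_message("assistant"):
        st.markdown(answer)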

If you want a more robust implementation for production, definitely have a look at the Streamlit+Ollama Docker Container Setup.

Deploying Streamlit Ollama Chat

You can build your own Streamlit Ollama Docker image, or use the project's image, built by GitHub Actions CI/CD.

There is just one prerequisite to deploy this Streamlit Chat Bot for free.

Really, Just Get Docker 🐋👇

You can install Docker on any PC, Mac, or Linux machine at home, or on any cloud provider you wish. It will just take a few moments. If you are on Linux, just:

sudo apt-get update && sudo apt-get upgrade -y && curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

And also install Docker Compose with:

sudo apt install docker-compose -y

When the process finishes, you can use Docker to self-host other services as well. You should see the versions with:

docker --version
docker-compose --version
#sudo systemctl status docker #and the status

Streamlit Ollama Chat Container

To Build your own Streamlit Ollama Chat image… ⏬

Feel free to use Docker or Podman as the containerization platform.

docker build --no-cache -t streamllama . > build_log.txt 2>&1
#podman build --no-cache -t streamllama . > build_log.txt 2>&1

#docker run -p 8501:8501 streamllama:latest
#docker exec -it streamlit_ollama /bin/bash

docker run -p 8501:8501 -v ai_streamlit_ollama:/app --name streamlit_ollama streamllama:latest /bin/sh -c "cd /app && streamlit run ollama_chatbot.py"
#podman run -p .....
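
Once the container is up, the chat UI should be reachable at http://localhost:8501 (the port mapped above), provided Ollama is running and reachable from the container.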

You can do this build manually, use GitHub Actions, or even combine Gitea and Jenkins to do it for you.

If you are not yet comfortable with Docker & containers, you can use UI tools to manage them.

SelfHosting the Streamlit Ollama Chat

Use the following Docker Compose stack to spin up the Streamlit chat UI.

You can use the same Docker Compose file to deploy the second required service (Ollama) as well, if you want to run it as a container:

version: '3'

services:
  streamlit-ollama-chat:
    image: streamllama
    container_name: streamlit_ollama
    volumes:
      - ai_streamlit_ollama:/app
    working_dir: /app
    command: /bin/sh -c "streamlit run ollama_chatbot.py"
    ports:
      - "8501:8501"

  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    # The ollama image's entrypoint already starts the Ollama server.
    # Pull the model once with: docker exec ollama ollama pull llama3:8b

volumes:
  ai_streamlit_ollama:
  ollama_data:
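
One gotcha when both services run in the same Compose stack: from inside the Streamlit container, Ollama is not on localhost but at the service hostname, http://ollama:11434. A hedged sketch of how the app could resolve the endpoint (the OLLAMA_BASE_URL variable name is an assumption, not necessarily what the repo uses):

import os

# OLLAMA_BASE_URL is a hypothetical env var: inside the Compose network it
# should point at the service name (http://ollama:11434), while locally it
# falls back to the default localhost endpoint.
OLLAMA_BASE_URL = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
CHAT_ENDPOINT = f"{OLLAMA_BASE_URL}/api/chat"

You would then set OLLAMA_BASE_URL=http://ollama:11434 under an environment: key of the streamlit-ollama-chat service.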

To expose it safely to the internet, you can use the configuration provided together with the Cloudflare Tunnels Docker container or NGINX.


Conclusions

FAQ

How to use GitHub Actions to build my Streamlit Docker image

You need to set up the following workflow configuration file: https://github.com/JAlcocerT/Streamlit-Ollama-Chatbot/blob/main/.github/workflows/streamlit_GH_Actions.yml

https://fossengineer.com/docker-github-actions-cicd/

Remember that you will need to link the package to the repository and make the package public: https://github.com/JAlcocerT/Streamlit-Ollama-Chatbot/pkgs/container/streamlit-ollama-chatbot
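
Once the package is public, you should be able to pull the image from GHCR instead of building locally (check the package page above for the exact image name) and reference it in the Compose stack.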

How to use Streamlit with HTTPS

Feel free to use Cloudflare Tunnels or a proxy like NGINX, Caddy, …

F/OSS IDEs for Python