Run Llama 3.2, DeepSeek and Interact with Open WebUI Locally with Ollama

Ronny Roethof

Explore the power of language models like Llama 3.2 and the coding prowess of DeepSeek, and interact with them through a user-friendly web interface using Open WebUI, all powered locally by Ollama. Running LLMs locally offers significant advantages, including enhanced privacy and the freedom to experiment without relying on external APIs. This guide will walk you through the straightforward process of setting this up with Ollama and Open WebUI.

Why Run LLMs Locally with a Web Interface?

Operating language models locally with a web interface like Open WebUI provides several benefits:

  • Privacy: Your interactions remain on your system.
  • Offline Access: Use models without an internet connection (once downloaded).
  • Simplified Interaction: A graphical interface makes prompting and managing conversations easier than the command line.
  • Multi-Model Support: Open WebUI can often connect to multiple Ollama-managed models.

Prerequisites

Before you begin, ensure you have:

  • Operating System: macOS or Linux (Ollama’s primary supported platforms).
  • RAM: Adequate RAM for your chosen model(s); as a rough guide, 7B-parameter models need about 8 GB and larger models need proportionally more.
  • Storage: Sufficient disk space for Ollama, models and Open WebUI.
  • Internet Connection: Initially required to download Ollama, models and Open WebUI.
  • Docker (Recommended for Open WebUI): While other installation methods might exist, Docker is a straightforward way to run Open WebUI. Ensure Docker is installed on your system. You can find installation instructions at https://docs.docker.com/get-docker/.
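
As a quick sanity check before you start (a sketch assuming a Linux shell; the macOS equivalents differ slightly), you can confirm these prerequisites from the terminal:

    # Confirm Docker is installed and the daemon is reachable
    docker --version
    docker info >/dev/null && echo "Docker daemon is running"

    # Check available RAM and free disk space (Linux)
    free -h
    df -h ~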

Installation with Ollama

Follow the steps from the previous guide to install Ollama and download your desired models:

  1. Install Ollama:

    curl -fsSL https://ollama.com/install.sh | sh
    
  2. Download the Llama 3.2 and DeepSeek models:

    ollama pull llama3.2
    ollama pull deepseek-coder-v2
    

    Verify the exact model tags for the versions you intend to use in the Ollama model library at https://ollama.com/library before pulling; a quick check that everything landed follows this list.
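
Once the pulls finish, a minimal verification that Ollama and the models are in place (assuming the default installation):

    # Confirm the Ollama CLI and background service are working
    ollama --version

    # List locally available models; llama3.2 and deepseek-coder-v2 should appear
    ollama list

    # Optional: a one-off prompt straight from the command line
    ollama run llama3.2 "Say hello in five words."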

Installing and Running Open WebUI

Open WebUI provides a web interface to interact with your Ollama-managed models. The recommended method is using Docker:

  1. Pull the Open WebUI Docker Image: Open your terminal and run:

    docker pull ghcr.io/open-webui/open-webui:latest
    
  2. Run Open WebUI using Docker:

    docker run -d -p 8080:8080 --add-host=host.docker.internal:host-gateway \
      --restart always -v open-webui:/app/backend/data \
      --name open-webui ghcr.io/open-webui/open-webui:latest
    

    Let’s break down this command:

    • -d: Runs the container in detached mode (in the background).
    • -p 8080:8080: Maps port 8080 on your host machine to port 8080 in the container (you can change the host port if needed).
    • --add-host=host.docker.internal:host-gateway: Lets the container reach Ollama, which runs on the host, at host.docker.internal:11434.
    • --restart always: Automatically restarts the container if it crashes or after a reboot.
    • -v open-webui:/app/backend/data: Creates a named volume open-webui to persist Open WebUI’s data (accounts, settings and chat history).
    • --name open-webui: Gives the container a fixed name so later docker commands can refer to it.
    • ghcr.io/open-webui/open-webui:latest: The Docker image for Open WebUI.
  3. Access Open WebUI: Once the Docker container is running, open your web browser and navigate to http://localhost:8080 (or the host port you specified in the docker run command). A few quick health checks follow below.
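
If the page doesn’t load, these checks (using the container name from the run command above) usually pinpoint the problem:

    # Confirm the container is up
    docker ps --filter name=open-webui

    # Review the logs for startup errors
    docker logs open-webui

    # Confirm Ollama itself is answering on its default port
    curl http://localhost:11434/api/tags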

Interacting with Llama 3.2 and DeepSeek via Open WebUI

  1. Initial Setup: On your first visit, you will be prompted to create an account; the first account created becomes the administrator.
  2. Select a Model: Open WebUI should automatically detect the models you have pulled with Ollama (such as llama3.2 and deepseek-coder-v2). You can select the desired model from the dropdown menu in the chat interface.
  3. Start Chatting: Once a model is selected, type your prompts in the chat input field and send them. Open WebUI forwards these prompts to Ollama, which uses the selected model to generate a response that is displayed in the web interface.

You can now enjoy a more visual, multi-turn conversational experience with both Llama 3.2 and DeepSeek through Open WebUI.
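
Under the hood, Open WebUI forwards your prompts to Ollama’s HTTP API. You can reproduce the same round trip from the terminal, which is handy for debugging (a minimal sketch against Ollama’s default endpoint):

    curl http://localhost:11434/api/generate -d '{
      "model": "llama3.2",
      "prompt": "Explain what a named Docker volume is in one sentence.",
      "stream": false
    }'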

Further Exploration

With Ollama and Open WebUI, your local LLM exploration becomes even more versatile:

  • Experiment with different models: Easily switch between Llama 3.2, DeepSeek and any other models you’ve pulled with Ollama within the Open WebUI interface.
  • Explore Open WebUI features: Open WebUI includes features such as conversation history, per-chat settings and multi-user support.
  • Check the Ollama and Open WebUI documentation: Dive deeper into the advanced configuration options available for both tools; a sketch of one routine task, updating the Open WebUI container, follows below.
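
Because the chat data lives in the open-webui named volume, you can update the interface without losing anything. A minimal sketch, assuming the container name and volume from the run command earlier:

    # Pull the newest image, replace the container, keep the data volume
    docker pull ghcr.io/open-webui/open-webui:latest
    docker stop open-webui && docker rm open-webui
    docker run -d -p 8080:8080 --add-host=host.docker.internal:host-gateway \
      --restart always -v open-webui:/app/backend/data \
      --name open-webui ghcr.io/open-webui/open-webui:latest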

Conclusion

By combining Ollama and Open WebUI, you can create a user-friendly local environment for running and interacting with powerful language models like Llama 3.2 and specialized coding models like DeepSeek. This setup is a significant step towards accessible, private AI experimentation. Install Ollama, deploy Open WebUI with Docker, pull your models and start exploring the world of local LLMs through a convenient web browser interface!
