OpenWebUI & Ollama

The OpenWebUI & Ollama template provides a pre-configured, self-hosted AI chat interface with direct integration of powerful language models like Llama or DeepSeek via Ollama. It includes an optimized setup for seamless operation without additional configuration.

OpenWebUI main user interface

This template uses OpenWebUI's API layer, which adds conversation management, context persistence, and an OpenAI-compatible interface on top of Ollama's simpler native API.

Key Features and Capabilities

Installation

Just add the "OpenWebUI & Ollama" template to your Trooper.AI GPU server and the installation runs completely automatically. If you like, it can also download your models directly from Ollama. You can configure them in the Template Configuration dialogue.

Pre-Configure your models

Of course, you can still download models via the OpenWebUI interface after installation.
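If you prefer working on the server directly, models can also be pulled with the Ollama CLI; a minimal sketch (llama3 is just an example model name):

bash
# Download a model from the Ollama library; it becomes
# selectable in OpenWebUI once the pull has finished
ollama pull llama3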

Additional Options:

Accessing OpenWebUI

After deploying your Trooper.AI server instance with the OpenWebUI & Ollama template, access it via your designated URL and port:

bash
http://your-hostname.trooper.ai:assigned-port

Or click on the blue port number next to the OpenWebUI Template:

Successfully installed OpenWebUI template

You will configure the initial login credentials upon your first connection. Ensure these credentials are stored securely, as they will be required for subsequent access.

Recommended Use Cases

The OpenWebUI & Ollama template is ideal for:

Technical Considerations

System Requirements

Ensure your model’s VRAM usage does not exceed 85% of the available GPU memory, as going beyond that can cause significant performance degradation.
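To keep an eye on this, you can check GPU memory usage directly on the server; a quick sketch using standard tools:

bash
# Show used vs. total GPU memory for each GPU
nvidia-smi --query-gpu=memory.used,memory.total --format=csv

# List the models Ollama currently holds in VRAM
ollama ps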

Pre-Configured Environment

Data Persistence

All chat interactions, model configurations, and user settings persist securely on your server.
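Because everything is stored on the server's file system, a plain file-level backup is sufficient. A minimal sketch, assuming the installation path used in the update section below and OpenWebUI's default data directory (backend/data); adjust the paths to your setup:

bash
# Archive chats, model configurations, and user settings
# (path is an assumption based on the git-based install below)
tar -czf openwebui-backup-$(date +%F).tar.gz \
  /home/trooperai/openwebui/backend/data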

Connecting via OpenAI-compatible API

OpenWebUI provides an OpenAI-compatible API interface, enabling seamless integration with tools and applications that support the OpenAI format. This allows developers to interact with self-hosted models like llama3 as if they were communicating with the official OpenAI API—ideal for embedding conversational AI into your services, scripts, or automation flows.

Below are two working examples: one using Node.js and the other using curl.

Node.js Example

javascript
const axios = require('axios');

// Wrap the request in an async function: top-level await
// is not available in CommonJS modules.
async function main() {
  const response = await axios.post(
    'http://your-hostname.trooper.ai:assigned-port/v1/chat/completions',
    {
      model: 'llama3',
      messages: [{ role: 'user', content: 'Hello, how are you?' }],
    },
    {
      headers: {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer YOUR_API_KEY',
      },
    }
  );

  console.log(response.data);
}

main().catch(console.error);

cURL Example

bash
curl http://your-hostname.trooper.ai:assigned-port/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "llama3",
    "messages": [
      { "role": "user", "content": "Hello, how are you?" }
    ]
  }'

Replace YOUR_API_KEY with the actual token generated in the OpenWebUI interface under User → Settings → Account → API Keys. Do not use the admin panel; API access is user-specific! See here:

User settings

From there, proceed to:

API keys

You can use this API with tools like LangChain, LlamaIndex, or any codebase supporting the OpenAI API specification.
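To quickly confirm that a key works, you can list the available models through the OpenAI-compatible endpoint; a short sketch, assuming the standard /v1/models route is exposed:

bash
# Should return a JSON list of models if the key is valid
curl http://your-hostname.trooper.ai:assigned-port/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"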

Manual Updates

If you do not want to update via the template system, you can run the following commands at any time to update both OpenWebUI and Ollama:

bash
# Update OpenWebUI:
# 1. Change to the OpenWebUI directory
cd /home/trooperai/openwebui

# 2. Update the repository
git pull

# 3. Install frontend dependencies and rebuild
npm install
npm run build

# 4. Backend: activate the Python venv
cd backend
source venv/bin/activate

# 5. Upgrade pip & reinstall the dependencies
pip install --upgrade pip
pip install -r requirements.txt -U

# 6. Restart the OpenWebUI systemd service
sudo systemctl restart openwebui.service

# (optional) Update Ollama:
curl -fsSL https://ollama.com/install.sh | sh
sudo systemctl restart ollama.service
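After the update, it is worth verifying that both services came back up; a quick check using the service names from the commands above:

bash
# Confirm both services are active after the restart
systemctl status openwebui.service ollama.service --no-pager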

Support and Further Documentation

For installation support, configuration assistance, or troubleshooting, please contact Trooper.AI support directly:

Additional resources and advanced configuration guides: