====== Ollama Open-Webui ======

Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
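As a quick sanity check, you can compare the size of a pulled model against the memory available on the host (plain shell; assumes the ollama container from the compose file further down):
<code>
free -h                            # available RAM on the host
docker exec -t ollama ollama list  # name and size of each pulled model
</code>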
  
Notes only for now:
  
<code - docker-ollama.yml>
name: ollama
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    # consider mounting a volume for /root/.ollama so pulled models persist
    ports:
      - 11434:11434
    #runtime: nvidia
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
          - driver: nvidia
            #device_ids: ['0']
            count: 1
            capabilities: [gpu]
</code>
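A minimal way to bring it up and check that the API answers (compose file name as above; the version endpoint is part of the Ollama API):
<code>
docker compose -f docker-ollama.yml up -d
curl http://localhost:11434/api/version
</code>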
  
<code - docker-openwebui.yml>
name: open-webui
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - 3000:8080 # Open WebUI listens on 8080 inside the container
    volumes:
      - /opt/open-webui:/app/backend/data
    restart: unless-stopped
    extra_hosts:
      host.docker.internal: host-gateway
    environment:
      - WEBUI_NAME=CustomGPTName
      - TZ=Europe/London
      - RAG_EMBEDDING_MODEL_TRUST_REMOTE_CODE=True # allow sentencetransformers to execute code like for alibaba-nlp/gte-large-en-v1.5
</code>
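Once started, the web interface should respond on the mapped host port (3000 in the file above):
<code>
docker compose -f docker-openwebui.yml up -d
curl -I http://localhost:3000
</code>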
  
<code - docker-openedai-speech.yml>
name: openedai-speech
services:
  openedai-speech:
</code>
  
<code - docker-pipelines.yml>
name: pipelines
services:
  pipelines:
    image: ghcr.io/open-webui/pipelines:main
    ports:
      - 9099:9099
    deploy:
      resources:
        reservations:
          devices:
          - driver: nvidia
            capabilities: [gpu]
</code>

https://zohaib.me/extending-openwebui-using-pipelines/
  
Under Settings -> Connections set:
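The exact values depend on your setup; as a rough sketch for the compose files above (assumed defaults - check the pipelines port and API key against your own pipelines configuration):
<code>
Ollama API base URL:              http://host.docker.internal:11434
OpenAI API base URL (pipelines):  http://host.docker.internal:9099
OpenAI API key (pipelines):       0p3n-w3bu!
</code>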
  
<code - docker-faster-whisper-server.yml>
name: faster-whisper-server
services:
  faster-whisper-server-cuda:
</code>
  
  
NOTE: speech to text requires an HTTPS connection to open-webui, as browsers do not allow microphone access over plain HTTP!
  
<code>
# one possible way to create a self-signed certificate for the proxy (paths assumed to match the nginx config below):
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
  -keyout /opt/docker-ssl-proxy/key.pem -out /opt/docker-ssl-proxy/cert.pem -subj "/CN=open-webui"
</code>
<code - /opt/docker-ssl-proxy/proxy_ssl.conf>
server {
  listen 80;
  server_name _;
  return 301 https://$host$request_uri;
}
server {
  listen 443 ssl;
  ssl_certificate /etc/nginx/conf.d/cert.pem;
  ssl_certificate_key /etc/nginx/conf.d/key.pem;
  location / {
     proxy_pass http://host.docker.internal:3000;
  }
}
</code>
  
<code - docker-ssl-proxy.yml>
name: nginx-proxy
services:
  nginx-proxy:
    image: nginx
    container_name: nginx-proxy
    ports:
      - 80:80
      - 443:443
    volumes:
      - /opt/docker-ssl-proxy:/etc/nginx/conf.d # assumes the config and cert/key files live in /opt/docker-ssl-proxy
    environment:
      - TZ=Europe/London
</code>
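To check the proxy once it is up (the certificate is self-signed, hence -k):
<code>
docker compose -f docker-ssl-proxy.yml up -d
curl -I http://localhost     # should return a 301 redirect to https
curl -kI https://localhost   # should reach Open WebUI through the proxy
</code>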

To pull an Ollama model, it is better to use ollama directly, as the web interface doesn't handle stalls well:
<code>
docker exec -ti ollama ollama pull modelname:tag
</code>

To update all previously pulled ollama models, use this bash script:
<code bash update-ollama-models.sh>
#!/bin/bash

# list the installed models (skip the header line) and pull each one again to update it
docker exec -ti ollama ollama list | tail -n +2 | awk '{print $1}' | while read -r model; do
  echo "Updating model: $model..."
  docker exec -t ollama ollama pull "$model"
  echo "--"
done
echo "All models updated."
</code>

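Save it next to the compose files, make it executable and run it:
<code>
chmod +x update-ollama-models.sh
./update-ollama-models.sh
</code>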
  
AMD GPU on Windows:
  
Create the respective docker volumes folder:
<code>
# p/Docker_Volumes = P:\Docker_Volumes
mkdir P:\Docker_Volumes
</code>

Install Docker Desktop and choose the WSL2 backend, then from the command line:
<code>
docker compose -f docker-openwebui.yml up -d
</code>
  
To update all ollama models on Windows, use this PowerShell command - adjust for the hostname/IP ollama is running on:
<code powershell>
(Invoke-RestMethod http://localhost:11434/api/tags).Models.Name.ForEach{ ollama pull $_ }

# or if ollama is running in docker
(Invoke-RestMethod http://localhost:11434/api/tags).Models.Name.ForEach{ docker exec -t ollama ollama pull $_ }
</code>

====== Curl OpenAI API test ======

<code>
curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "llama3",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "Hello!"
            }
        ]
    }'
{"id":"chatcmpl-957","object":"chat.completion","created":1722601457,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"message":{"role":"assistant","content":"Hi there! It's great to meet you! I'm here to help with any questions or tasks you might have. What brings you to this virtual space today? Are you looking for recommendations, seeking answers to a specific question, or maybe looking for some inspiration? Let me know, and I'll do my best to assist you."},"finish_reason":"stop"}],"usage":{"prompt_tokens":23,"completion_tokens":68,"total_tokens":91}}
</code>
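Recent Ollama versions also expose the other standard OpenAI-style routes on the same base URL, e.g. listing the available models:
<code>
curl http://localhost:11434/v1/models
</code>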