
DeepSeek 🐳 on ur local host right now!



Hello, folks! 🙋🏻‍♂️

Today, I want to show you how to run Blue Whale on your local computer and make its web UI accessible over the internet. I’ll guide you through the setup step by step, so stay tuned!

What we'll need:

  • MacBook Air 13 M3, 16GB RAM, 512GB disk (in my case)
  • Docker Desktop or Docker CE w/ Compose ability
  • Free Time
  • And as a bonus: a domain name registered on Cloudflare, for remote access to your blue whale 🐳

Docker Compose File #

Let’s open a terminal (I do prefer iTerm 2) and build an awesome compose.yml. I recommend creating a new folder for it first:

mkdir ollama-open-web-ui
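
Then step into the folder and create the (for now empty) compose file; the filename just has to match what docker compose looks for by default:

cd ollama-open-web-ui
touch compose.yml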

We need two services, so the compose.yml looks like this:

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "ollama", "list"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      ollama:
        condition: service_healthy
    volumes:
      - open_webui_data:/app/backend/data
    restart: unless-stopped

volumes:
  ollama_data:
  open_webui_data:
  

I added a health check for the ollama service, and open-webui depends on that check.
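
If you want to watch that health check yourself, plain docker inspect (standard Docker CLI, nothing specific to this setup) prints the current status:

docker inspect --format '{{.State.Health.Status}}' ollama

It reports starting during the start_period and switches to healthy once ollama list succeeds inside the container.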

So, we have the compose file. It’s time to run it:

docker compose up -d

Docker will see that the images aren’t downloaded yet and pull them (ollama weighs 7.29GB and open-webui weighs 5.27GB).

[screenshot: docker compose pulling the ollama and open-webui images]
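
By the way, if you prefer to download the images up front instead of on first start, compose can pull them explicitly:

docker compose pull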

Let’s check if they are healthy and running:

docker ps

[screenshot: docker ps showing both containers up and healthy]

Web UI Configuration #

OK, everything looks good. Now open the web UI at localhost:3000:

[screenshot: Open WebUI initial setup screen]

Enter your email and a password for the initial setup. This first user becomes the admin.

After that, we need to check the connection to ollama. Click on your avatar (upper-right corner) → Admin Panel → Settings → Connections.

If you don’t use the ClosedOpenAI API, I suggest disabling that feature. The Ollama API should point to http://ollama:11434, since that’s what we defined in the compose.yml file. Then click on the gear icon → click the refresh icon. We should see that the connection is verified, as shown below.

[screenshot: Ollama API connection verified in Admin Panel → Settings → Connections]

Also, I turned on web search using the DuckDuckGo search engine. To enable it, open Admin Panel → Settings → Web Search → Enable Web Search → select duckduckgo as the Web Search Engine → click the Save button at the bottom of the page.
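
As a side note, both of those settings can usually be pre-seeded through environment variables on the open-webui service instead of clicking through the UI. The variable names below are taken from Open WebUI’s docs but change between releases, so treat this as a sketch and double-check them against your version:

  open-webui:
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
      - ENABLE_OPENAI_API=false           # hide the OpenAI connection
      - ENABLE_RAG_WEB_SEARCH=true        # turn on web search
      - RAG_WEB_SEARCH_ENGINE=duckduckgo  # no API key required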

Ollama Model Pulling #

So, it’s time to pull the DeepSeek R1 model into ollama. In my case I use a distilled model with 1.5b parameters, but there are options all the way up to 671b. You can check them out here: https://ollama.com/library/deepseek-r1:1.5b.

To pull the selected model, run:

docker exec -it ollama ollama pull deepseek-r1:1.5b

It weighs about 1.1GB.

[screenshot: deepseek-r1:1.5b pull progress]
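
Once the pull finishes, you can confirm the model is registered inside the container:

docker exec -it ollama ollama list

deepseek-r1:1.5b should appear in the output.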

Return to the UI and refresh the model list so the new model shows up (a full page refresh may be needed).

Chatting With Model #

Open a new chat: if you see deepseek-r1:1.5b in the model selector, it works like a charm.

Type something or pick one of the examples to see an answer from the AI-ed blue whale.

Ta-da, your own self-hosted ChatGPT-like AI consultant answered you!

[screenshot: the model’s first answer in Open WebUI]

You can also use web search. Click on the plus icon and activate Web Search.

[screenshot: Web Search toggle under the plus icon]

In theory, your request goes out to the web and the answer includes links to its sources.

[screenshot: answer with source links from web search]

Cloudflared #

And as I promised at the beginning, we’ll expose the UI to the web.

I use the dockerized cloudflared service. We need to create a tunnel in the Cloudflare Zero Trust dashboard: https://one.dash.cloudflare.com.

Networks → Tunnels → + Create a tunnel → Select Cloudflared → Name your tunnel → Select the Docker option.

I recommend modifying the automatically provided command in the following manner:

docker run -d --network host --name cloudflared cloudflare/cloudflared:latest tunnel run --token eyJhIjoiZjZlN2*************************

I added --network host so cloudflared can see every application on the host. Without that option, cloudflared can’t reach the web UI, because open-webui and ollama live on their own compose network.

Alternatively, you can create a common network and attach every container to it, which is more secure than this example (there’s a compose sketch of that below).

And I added --name cloudflared for a more convenient name, because Docker assigns a randomized one by default.

And -d for detached mode.
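
For completeness, here’s a sketch of that more secure variant: cloudflared as a third service inside the same compose.yml, attached to the project’s default network so it can resolve open-webui by service name. The TUNNEL_TOKEN placeholder is hypothetical; put your real token into an .env file next to the compose file. In this setup the tunnel’s public hostname should point to http://open-webui:8080 instead of localhost:3000:

  cloudflared:
    image: cloudflare/cloudflared:latest
    container_name: cloudflared
    command: tunnel run --token ${TUNNEL_TOKEN}  # token read from .env
    restart: unless-stopped
    depends_on:
      - open-webui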

After running the container, check that the tunnel shows up as online in the dashboard.

[screenshot: tunnel status online in the Cloudflare dashboard]

And the final step is adding a Public Hostname. Click on the tunnel → Edit → Public Hostname → + Add a public hostname → select a Domain (required) → type a subdomain if your main domain is occupied (in my case it’s chat) → Service Type: HTTP → URL: localhost:3000.

After that, open chat.your.domain and you should see the same Open WebUI served from your local host.

[screenshot: Open WebUI opened via chat.your.domain]
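
You can also verify the tunnel end-to-end from the command line (replace chat.your.domain with your actual hostname):

curl -I https://chat.your.domain

An HTTP 200 (or a redirect) in the response headers means Cloudflare is reaching the Open WebUI on your machine.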

Thank you for reading and stay tuned 🫡
