AI Hosting Lab
OpenClaw AI
Host your own AI interface, connect any model provider via API, and keep full control of your data. One install, infinite intelligence.
// 01 — Overview
What Is OpenClaw AI?
OpenClaw is a self-hosted AI gateway that lets you connect to any large language model provider through a single unified interface. Run it on your own server, laptop, or cloud VM — your conversations never leave your infrastructure.
🌐
One Interface, Every Model
Switch between Claude, GPT-4o, Gemini, DeepSeek, Llama, Mistral, and more with a single dropdown. No need for separate tabs or subscriptions — just API keys.
🔒
Full Data Sovereignty
Your prompts, conversations, and files stay on your hardware. Nothing is logged by third-party dashboards. API calls go direct to the provider and back to you.
⚡
Docker-First Architecture
A single docker run command gets you running in under 60 seconds. Auto-updates, persistent storage, and reverse-proxy ready out of the box.
// 02 — Installation
Quick Start Guide
Get OpenClaw running locally or on a remote server in three steps.
1
Install Docker
Docker is the only prerequisite. Install it on any OS.
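On most Linux distributions, Docker's official convenience script is the fastest route; a minimal sketch, assuming a fresh machine with curl available (macOS and Windows users install Docker Desktop instead):

```shell
# Install Docker via the official convenience script (Linux only)
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

# Verify the installation
docker --version
```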
2
Run the Container
Pull and run the container. Data persists between restarts.
Terminal — Launch OpenClaw AI
$ docker run -d -p 3000:8080 \
    -v openclaw-data:/app/backend/data \
    --name openclaw --restart always \
    ghcr.io/open-webui/open-webui:main

# OpenClaw is now running at: http://localhost:3000
# Create your admin account on first visit
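Because the `:main` tag tracks upstream releases, updating is a pull-and-recreate; a sketch using the container and volume names from the command above (the named volume is what preserves your data across recreation):

```shell
# Pull the latest image
docker pull ghcr.io/open-webui/open-webui:main

# Recreate the container; data survives in the openclaw-data volume
docker stop openclaw && docker rm openclaw
docker run -d -p 3000:8080 \
    -v openclaw-data:/app/backend/data \
    --name openclaw --restart always \
    ghcr.io/open-webui/open-webui:main
```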
3
Connect Your First AI
Go to Settings → Connections and add any provider API key. Models will auto-populate.
OpenClaw UI — Add Your First Provider
# Navigate to: Settings → Connections
# Click "Add New Connection"

Name:     Anthropic Claude
Base URL: https://api.anthropic.com/v1
API Key:  sk-ant-your-key-here

# Models will auto-populate in the model dropdown
# Select a model and start chatting!
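If models fail to populate, it helps to verify the key outside OpenClaw first. A minimal check against Anthropic's Messages API (substitute your real key; the model name here is only an example):

```shell
# Send a one-message request; a JSON reply with a "content"
# field confirms the key and base URL are valid
curl -s https://api.anthropic.com/v1/messages \
  -H "x-api-key: sk-ant-your-key-here" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 32,
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```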
+
With Local AI (Ollama)
Run completely offline by connecting to a local Ollama instance. Zero cloud dependency.
Terminal — OpenClaw + Ollama (Fully Local)
# Step 1: Install and start Ollama
$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama pull llama3.2

# Step 2: Launch OpenClaw with an Ollama connection
$ docker run -d -p 3000:8080 \
    --add-host=host.docker.internal:host-gateway \
    -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
    -v openclaw-data:/app/backend/data \
    --name openclaw --restart always \
    ghcr.io/open-webui/open-webui:main

# All conversations stay 100% on your machine
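If OpenClaw can't see your local models, it is worth confirming that Ollama is actually serving before debugging the container; a quick sketch of both checks (the second assumes curl is available inside the image, which may not hold for every build):

```shell
# From the host: list the models Ollama is serving
# (the response should mention llama3.2)
curl -s http://localhost:11434/api/tags

# From inside the container: confirm the host gateway is reachable
docker exec openclaw curl -s http://host.docker.internal:11434/api/tags
```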
// 03 — Connect Any AI Provider
Supported Providers
OpenClaw supports the OpenAI-compatible API format. Any provider that uses this format works out of the box. Here's how to connect each one.
Add all your provider keys at once. Use a fast local model (e.g., Llama 3.2 3B) for quick questions, Claude Sonnet for complex reasoning, and GPT-4o for image analysis. OpenClaw lets you switch with a single dropdown, and every conversation stays in one place.
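Concretely, "OpenAI-compatible" means the provider exposes a /v1/chat/completions endpoint that accepts the same request shape, which is why a Base URL plus an API key is all OpenClaw needs. A generic sketch, where the base URL, key, and model name are placeholders for whichever provider you connect:

```shell
# The request shape every OpenAI-compatible provider accepts;
# swap in your provider's base URL, key, and model name
curl -s https://api.example-provider.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "provider-model-name",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```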