Ollama runs AI models locally on your hardware. Nothing leaves your machine.
```bash
# Linux / macOS / WSL
curl -fsSL https://ollama.ai/install.sh | sh
```
Choose based on your RAM. All support tool calling.
| RAM | Model | Command |
|---|---|---|
| 8 GB | Qwen3 4B | `ollama pull qwen3:4b` |
| 16 GB ★ | Qwen3 8B | `ollama pull qwen3:8b` |
| 32 GB | Qwen3 14B | `ollama pull qwen3:14b` |
| GPU with 16 GB+ VRAM | Mistral Small 3.2 | `ollama pull mistral-small3.2` |
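The table's tiers are easy to script. A minimal sketch that suggests a pull command from installed RAM — it assumes Linux's `/proc/meminfo` (on macOS, use `sysctl hw.memsize` instead), and the thresholds simply mirror the table above:

```bash
#!/bin/sh
# Sketch: map total RAM to a model tier from the table above.
suggest_model() {
  gb="$1"
  if [ "$gb" -ge 32 ]; then echo "qwen3:14b"
  elif [ "$gb" -ge 16 ]; then echo "qwen3:8b"
  else echo "qwen3:4b"
  fi
}

# Detect RAM (Linux) and print the matching pull command.
ram_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
echo "ollama pull $(suggest_model $((ram_kb / 1024 / 1024)))"
```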
Test it:

```bash
ollama run qwen3:8b "What is 2+2?"
```
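Besides the CLI, Ollama also serves a REST API on `localhost:11434` by default. A sketch that lists your pulled models via the `/api/tags` endpoint, falling back to an empty list when the server isn't running:

```bash
#!/bin/sh
# Sketch: query Ollama's REST API; /api/tags returns the pulled models as JSON.
list_models() {
  curl -fsS "http://${1:-localhost:11434}/api/tags" 2>/dev/null \
    || echo '{"models": []}'   # fallback when the server is unreachable
}
list_models
```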
OpenClaw gives your AI persistent memory, tools, web browsing, messaging, and automation.
```bash
# Install (Node.js 22+ required)
curl -fsSL https://openclaw.ai/install.sh | bash
```

Alternative: `npm install -g openclaw@latest`
The wizard walks you through everything — model selection, auth, channels (Telegram, WhatsApp, Discord), and auto-start.
```bash
openclaw onboard --install-daemon
```
✦ Model & auth configuration
✦ Telegram / WhatsApp / Discord setup
✦ Workspace & memory bootstrap
✦ Background service install
✦ Health check
Choose QuickStart for defaults or Advanced for full control.
Don't want to run Ollama? Skip steps 1–2 and use Claude as your model. The wizard supports it — just pick Anthropic during setup.
You'll need either a Claude Pro/Max subscription or an Anthropic API key from console.anthropic.com. Claude is smarter than most local models and works instantly — you just pay per use.
The wizard also supports OpenAI, OpenRouter, and other cloud providers.
No channel setup needed for a quick test — just open the web dashboard:
```bash
openclaw dashboard
```
Or visit http://localhost:18789 in your browser. Start chatting immediately.
Agent responds? You now have a fully private AI agent on your own hardware. No data leaves your machine.
If you didn't set up a channel during the wizard, run:
```bash
# Reconfigure channels anytime
openclaw configure

# Or link WhatsApp directly
openclaw channels login
```
Message your bot. Wait 10–60 seconds. That's it.
Pair with Claude Code for coding tasks alongside your agent.
```bash
curl -fsSL https://claude.ai/install.sh | bash
cd your-project && claude
```
```bash
openclaw status            # full system status
openclaw doctor            # diagnose issues
openclaw health            # health check
openclaw configure         # reconfigure anything
openclaw gateway restart   # restart agent
openclaw dashboard         # open web chat
```
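These commands compose naturally into a cron-friendly watchdog. A sketch — the actual `openclaw gateway restart` line is commented out so nothing restarts until you opt in, and the demo call uses `true` in place of `openclaw health`:

```bash
#!/bin/sh
# Sketch: run a health check; if it fails, restart the gateway.
watchdog() {
  if "$@" >/dev/null 2>&1; then
    echo "healthy"
  else
    echo "unhealthy"
    # openclaw gateway restart   # uncomment on a real install
  fi
}

watchdog true   # demo; in cron, use: watchdog openclaw health
```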
Ollama has no authentication. By default it binds to localhost (safe), but if you expose it to your network or the internet, anyone can use your AI, steal your models, or execute code (CVE-2024-37032). Never set `OLLAMA_HOST` to `0.0.0.0` unless you're behind a firewall.
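Before exposing anything, it's cheap to sanity-check the bind address. A sketch that treats an unset `OLLAMA_HOST` as Ollama's default loopback-only bind:

```bash
#!/bin/sh
# Sketch: flag a risky OLLAMA_HOST value before starting Ollama.
check_bind() {
  case "${1:-127.0.0.1}" in
    127.0.0.1*|localhost*) echo "OK: loopback only" ;;
    0.0.0.0*) echo "WARNING: bound to all interfaces - firewall required" ;;
    *) echo "WARNING: non-loopback bind ($1)" ;;
  esac
}

check_bind "$OLLAMA_HOST"
```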
```bash
# Run the OpenClaw security audit
openclaw security audit --deep

# Auto-fix common issues
openclaw security audit --fix
```
✗ Exposing Ollama to the internet (no auth!)
✗ Setting OpenClaw DM policy to "open" (anyone can control your agent)
✗ Skipping the gateway auth token
✗ Giving the agent shell access without sandboxing
✓ The wizard sets safe defaults — don't weaken them.
Stuck? Need help?
Join the community — get help with setup, share what you've built, and stay updated on new models and features.
Join on Telegram →

This quick start gets you running. If you want to go deeper — GPU optimization, model comparisons, building custom tools, multi-agent setups, troubleshooting — I wrote a full 12-chapter guide. The PDF is sent to your email instantly after purchase.