
You can read the official documentation, or you can learn from someone who already hit every single error message during the process. I spent the last few weeks figuring out the best way to set up OpenClaw so you don't have to troubleshoot Node.js version mismatches or wonder why your Docker container won't mount volumes.
I’m Hanks, and in this post, I’m stripping away the theory to focus on the practical: how to choose between the CLI and Docker routes, how to keep your session alive on WhatsApp or Telegram, and how to safely connect your API keys. Let’s get your agent running in the background where it belongs.
I'm the kind of person who likes to gather everything before starting, so I don't hit a wall halfway through and lose momentum. For OpenClaw, the prep is light—but there are a few pieces you'll want ready.

OpenClaw runs on macOS, Linux, or Windows (via WSL2—don't try native Windows, the docs are pretty clear about that). I'm on macOS, which seemed to be the smoothest path based on what I'd read.
You need Node.js installed—version 22 or higher. I already had Node 20 from some other project, so I had to upgrade. If you don't have it at all, the one-liner install script handles it for you, which is nice.

Docker is optional but recommended, especially if you want things running in the background reliably. I didn't use Docker for the initial test—I went with the CLI install first just to see how it felt. Later, I switched to Docker for the "always-on" version. More on that in a bit.
This is where it gets slightly messier, depending on what you want to connect.

I didn't set up every channel at once. Pick one to start; you can add more later without redoing everything.
There are two main ways to install OpenClaw: the fast CLI route, or the Docker route. I tried both, and here's what I noticed.

If you just want to see if this thing works, the CLI install is the way to go. One command:
curl -fsSL https://openclaw.ai/install.sh | bash
It pulls down Node.js (if needed), installs the CLI globally, and drops you into an onboarding wizard. The whole process took me maybe 10 minutes, including the time I spent reading prompts.
This is fine for testing. It's not ideal if you want OpenClaw running 24/7 in the background—because if your terminal session ends, so does the agent. But for "does this actually do what I think it does?" purposes, it's perfect.
Once I decided I wanted this running persistently, I switched to Docker. The setup is a bit more involved—you're working with a docker-compose.yml file, making sure volumes are mounted correctly, checking that ports aren't conflicting with anything else on your machine.
But the payoff is simple: it just runs. You start the container, and OpenClaw stays up. You can restart your computer, close your terminal, walk away—it keeps going.
I'm using this now. It feels more reliable, and honestly, once it's configured, I don't think about it much. Which is kind of the point. For detailed Docker configuration tips, check out this OpenClaw Docker setup guide.
If you went the CLI route, after the install script finishes, run:
openclaw --version
You should see a version number. If you don't, something went wrong with the PATH setup—more on that in the troubleshooting section.
There's also a built-in diagnostic tool:
openclaw doctor
This checks for common misconfigurations, missing dependencies, and health issues. I ran it out of habit, even though everything seemed fine. It flagged that I didn't have logs enabled yet, which was helpful to know early.
For Docker, you'll use the provided docker-compose.yml file from the OpenClaw GitHub repository. Clone the repo, navigate to the directory, and run:

docker compose up -d
The -d flag runs it in detached mode (background). Once it's up, check the logs:
docker compose logs -f
You're looking for a line that says the gateway started successfully. The dashboard should be accessible at http://localhost:18789/.
If the dashboard loads, you're good. If not, you're probably hitting a port conflict or a volume mount issue. Check the troubleshooting section—it's one of the most common problems.
The onboarding wizard walks you through the initial setup. It's conversational, which I appreciated—no dense config files to edit manually (at least not yet).
It asks where you want to run it, which model provider you're using, which chat channel to connect, and whether to enable the background service. I chose local, Anthropic/Claude, Telegram, and yes to the background service. The whole thing took maybe five minutes.
All your settings end up in ~/.openclaw/. This directory holds your config, your credentials, and the chat-channel session files.
Back this up. Seriously. If you update OpenClaw or something breaks, having a snapshot of this directory means you don't have to redo everything.
I use a simple cron job to zip it weekly and throw it into a folder I already back up to the cloud. Nothing fancy.
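If you want a starting point, here's a minimal sketch of that backup (the paths and schedule are my assumptions, and I use tar here—zip works just as well; the `mkdir -p` is only there to keep the sketch runnable as-is):

```shell
#!/bin/sh
# Archive ~/.openclaw into a dated tarball in a backup folder.
SRC="$HOME/.openclaw"
DEST="$HOME/backups"
mkdir -p "$SRC" "$DEST"            # no-ops if they already exist
STAMP=$(date +%Y-%m-%d)
tar -czf "$DEST/openclaw-$STAMP.tar.gz" -C "$HOME" .openclaw
echo "wrote $DEST/openclaw-$STAMP.tar.gz"
```

A crontab line like `0 3 * * 0 /path/to/backup-openclaw.sh` runs it every Sunday at 3 a.m.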
Logs aren't on by default, which makes sense for minimalism—but you'll want them if anything goes sideways.
During onboarding, there's a prompt asking if you want verbose logging. Say yes. Or, if you already finished onboarding, you can enable it in the config file later.
Logs go to /tmp/openclaw/ by default on macOS/Linux. I checked them once early on when a command wasn't working, and they told me exactly what was failing. Saved me a lot of guessing.
This is where you hook OpenClaw up to an actual AI.

You can paste your API key directly during onboarding, but I didn't love that. It goes into a config file in plaintext, which felt... okay for local testing, but not great long-term.
Better option: use environment variables. Set them in your shell profile or in a .env file that Docker can read. The docs cover this, and it's not complicated—just one extra step.
If you're using the Docker setup, you can pass environment variables in the docker-compose.yml file under the environment: section.
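As a sketch, that section might look like the following—note the service name and the variable name are assumptions on my part, so check the repo's compose file for the real ones:

```yaml
services:
  openclaw:
    environment:
      # Compose reads ${...} values from your shell or from a .env file
      # sitting next to docker-compose.yml, so the key never lives in the
      # compose file itself.
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
```

Docker Compose picks up a `.env` file in the project directory automatically, which keeps the key out of version control as long as `.env` is gitignored.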
OpenClaw lets you set different models for different purposes. A "fast chat" model is cheaper and quicker—good for simple back-and-forth. A "strong action" model is better at reasoning, tool use, and complex tasks.
I use Claude Sonnet 4.5 for both right now, because I didn't want to manage multiple keys yet. But if you're cost-conscious, you could route simple chats to something like GPT-4o-mini and save the heavier model for automations.
It's configurable in the routing section of the config. I haven't tweaked it much yet, but it's there if you need it.

You need at least one way to talk to OpenClaw. I went with Telegram first because it felt the least intrusive. No phone dependency, no session weirdness—just a bot token and you're done.
I tried WhatsApp later, out of curiosity. The setup is straightforward: you run the onboarding wizard, it shows you a QR code, you scan it with your phone (like linking WhatsApp Web), and that's it.
The tricky part: session persistence. If OpenClaw restarts or loses connection, WhatsApp might log you out. The session files are stored in ~/.openclaw/, so as long as those don't get deleted, you should be fine.
I did have one session drop randomly after a few days. I had to rescan the QR code. Not a huge deal, but mildly annoying. I later realized my firewall was interfering—network stability matters here. For a detailed walkthrough, see this WhatsApp setup tutorial.
This was the smoothest path for me. You create a bot with Telegram's @BotFather, paste the token into the onboarding wizard, and that's it. The bot appears in Telegram, you can send it a message, and it responds. No QR codes, no phone, no fuss.
By default, OpenClaw uses polling (it checks for new messages periodically). You can switch to webhooks if you're running this on a VPS with a public URL, but for local testing, polling is fine.
I haven't fully set this up yet, but I skimmed the process. You need to create a Slack app, give it the right bot scopes, install it to your workspace, and hand the bot token to OpenClaw.
Slack supports threaded replies, which is nice if you're using this for team stuff. I'm not, so I stuck with Telegram for now.
This is where it started feeling useful instead of theoretical.
I wanted OpenClaw to send me a morning summary: weather, calendar events, and a nudge about anything I'd marked as high-priority the day before.
I didn't write this from scratch. I found a similar skill in the ClawHub registry (a community repo of pre-built automations), copied it, and tweaked it slightly to match my preferences.

The skill runs on a cron schedule—every morning at 7 a.m., it pulls data from my calendar API and a weather API, formats it, and sends it to me via Telegram.
It worked on the first try, which surprised me. I expected to debug something. To learn more about how skills work, read this guide to OpenClaw skills.
This one took a bit more trial and error. The idea: I send OpenClaw a message like "remind me to call the dentist next Tuesday at 2 p.m.," and it creates a calendar event automatically.
I used a skill template that parses natural language, extracts the time/date, and hits the Google Calendar API. It's not perfect—sometimes it misinterprets vague phrasing—but it works well enough that I've stopped opening my calendar app manually for small additions.
The workflow is: chat message → OpenClaw processes it → event appears in my calendar → it confirms via Telegram. Simple, but it removed a step I used to forget to do.
I skipped this initially. Mistake. Once I realized OpenClaw had access to my files, shell commands, and APIs, I went back and tightened things up.
By default, OpenClaw can execute shell commands if you enable that tool. That's powerful, but also risky. I set up an allowlist: only specific, pre-approved commands can run without confirmation.
The config file has a section for this. You list commands like ls, curl, etc., and anything not on the list triggers a confirmation prompt before it executes.
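I won't reproduce my exact config, but the shape is roughly this—the key names here are hypothetical, so mirror whatever your ~/.openclaw config actually uses:

```yaml
# Hypothetical allowlist shape; real key names may differ.
tools:
  shell:
    allowlist:
      - ls
      - curl
    confirm_unlisted: true   # anything not listed prompts before running
```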
I also restricted which chat channels can trigger actions. Only my Telegram account is allowlisted—no one else can message the bot and make it do things.
Given that AI agents with system access can pose security risks, it's worth reviewing the OWASP Top 10 for Large Language Model Applications to understand potential vulnerabilities.
I'm running this locally, so I didn't set up a reverse proxy. But if you're deploying on a VPS or want to access it remotely, you'll need HTTPS.
The docs recommend using Tailscale (which auto-configures secure tunnels) or setting up Caddy/Nginx as a reverse proxy. Don't expose the raw gateway port to the internet—bad things will happen.
Logs can contain sensitive info—API keys, personal data from messages, etc. OpenClaw has a setting to redact sensitive fields before writing to logs.
I turned that on. I also set logs to rotate weekly and auto-delete after 30 days, just to keep things tidy and reduce exposure.
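If you're on Linux and would rather let the OS handle rotation, a standard logrotate rule gets the same effect (the log path matches the default mentioned above; drop this into /etc/logrotate.d/openclaw):

```
/tmp/openclaw/*.log {
    weekly
    rotate 4
    maxage 30
    compress
    missingok
    notifempty
}
```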

I hit a few bumps along the way. Here's what tripped me up and how I fixed it.
After the CLI install, I tried running openclaw and got "command not found." The script installed it to a global npm directory, but that wasn't in my PATH.
Fix: Add ~/.npm-global/bin (or wherever npm puts global packages) to your PATH. Or re-run the install script—it's supposed to handle this, but sometimes it doesn't.
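For example, assuming the common `~/.npm-global` prefix (run `npm config get prefix` to see yours):

```shell
# Put npm's global bin directory on PATH for the current session.
export PATH="$HOME/.npm-global/bin:$PATH"
# Make it permanent by adding the same line to your shell profile, e.g.:
#   echo 'export PATH="$HOME/.npm-global/bin:$PATH"' >> ~/.zshrc
# Show the front of PATH to confirm it took effect.
echo "$PATH" | tr ':' '\n' | head -n 3
```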
I had Node 20 installed. OpenClaw needs 22+. The error message was clear about this, at least.
Fix: Upgrade Node via nvm or the official installer. If you're using Docker, this doesn't matter—the container has the right version baked in.
I tried to start the Docker container and got an error about port 18789 already being in use. Turns out I had another service running on that port.
Fix: Change the port in docker-compose.yml, or kill the conflicting process. To find what's using a port:
sudo lsof -i :18789
Also, make sure volumes are mounting correctly. If the workspace directory isn't persisting, your config will reset every time the container restarts.
The gateway was running, but the dashboard wouldn't load. Logs showed WebSocket connection failures.
Fix: The gateway needs to bind to localhost (or 0.0.0.0 if you're accessing it from another machine). Check the bind setting in the config. Also, make sure no firewall is blocking it.
QR code wouldn't scan, or it would scan and then immediately disconnect.
Fix: Network stability is key here. If you're on a flaky WiFi connection, WhatsApp will drop the session. Also, make sure the session files in ~/.openclaw/ aren't getting deleted—those need to persist.
Yes. If you use a local model (via Ollama or similar), you don't need any cloud API at all. Chat channels like Telegram still require internet for the bot to communicate, but the AI processing can stay entirely on your machine.
I'm not doing this—I'm using Claude's API—but I like knowing the option exists.
For basic use, pretty modest. An old laptop with 4GB RAM can handle it. If you're running Docker plus a local LLM, you'll want more—ideally at least 16GB of RAM.
I'm running this on a 2019 MacBook Pro (16GB RAM) and it barely registers resource-wise. The gateway uses maybe 200MB of memory when idle.
Run:
openclaw update
It pulls the latest version and tries to migrate your config automatically. I did this once, and it worked fine. But—back up ~/.openclaw/ first. Just in case.
If something does break, you can restore from backup and re-run the onboarding wizard to rebuild the config.
I've been using OpenClaw for a few weeks now. It hasn't replaced every other tool I use, but it's carved out a small, useful niche: handling the gap between "I should do this" and "this thing happened."
The setup took longer than I expected—not because it's complicated, but because I kept stopping to tweak things, test edge cases, and second-guess my security settings. Once it was running, though, it mostly faded into the background. Which is what I wanted.
Your mileage will vary depending on what you're trying to automate and how comfortable you are with config files and terminal commands. But if you've been curious about having an AI that can actually do things instead of just suggesting them, this is worth a weekend afternoon to try.
For more background on the project, you can read the official OpenClaw introduction or check out the Wikipedia article on OpenClaw for an overview of its development and use cases.

Still playing sysadmin for OpenClaw? Stop wrestling with Docker and random disconnects. Macaron is deployment-free and no-code—ready in just 3 minutes. Stop babysitting your AI and let it truly serve you. Reclaim your weekends → macaron