OpenClaw Security Checklist: Hardening Before You Connect Accounts

Hey fellow AI tinkerers — if you're spinning up OpenClaw (or still calling it Moltbot) on your local machine, we need to talk security before you wire it into WhatsApp and Gmail.

I spent the last two weeks stress-testing OpenClaw inside my daily workflows, and what I found wasn't the demo problem everyone's talking about. It's the three hours between "cool, it works" and "wait, why does my bot have root access to everything?"

Here's the thing that surprised me: OpenClaw's own docs straight-up say "there is no perfectly secure setup." That's not a cop-out — it's a design reality. When you give an AI agent shell access and API keys in one package, you're not securing a web app. You're locking down something that can act.

Let me walk you through the hardening checklist I actually use — not the enterprise playbook, just the minimum viable security that lets you sleep at night.


Threat Model (What Can Go Wrong)

Before you start locking things down, understand what you're protecting against.

OpenClaw sits at this weird intersection: it's powerful enough to book flights and manage calendars, but it runs with the same OS permissions you use to rm -rf files. That combo creates three failure modes I've seen in testing:

Prompt injection via messages. An attacker crafts a WhatsApp message that tricks the AI into executing commands. This isn't theoretical — Cisco's security team demonstrated it in January 2026 by getting OpenClaw to silently exfiltrate data through a malicious "skill" (skills are OpenClaw's plugin system).

API key leakage. If you're storing secrets in plaintext config files (which the quickstart guides often do), any process with filesystem access can read them. I tested this: spin up OpenClaw with default settings, and your Anthropic API key lives in a .env file that bash scripts can cat.

Unrestricted tool execution. By default, OpenClaw can run any command your user account can run. That means curl, ssh, docker — the whole toolkit. If someone gets control of the conversation flow, they're not just reading your calendar. They're in your shell.

The blast radius depends on where OpenClaw runs. Local Mac? Limited to your user account. Cloud VPS? Potentially your entire deployment stack. According to NIST's January 2026 RFI on AI agent security, the industry doesn't even have standard frameworks for this yet.

So our threat model is: assume the AI can be manipulated, and design so manipulation has a small blast radius.


Secrets & Environment Variable Hygiene

Let's start with the most breakable piece: API keys.

Don't Put Secrets in Code or Plaintext Config

I know the docs show this pattern:

# ❌ Don't do this
ANTHROPIC_API_KEY=sk-ant-api03-xxx
OPENAI_API_KEY=sk-proj-xxx

That .env file is just sitting there. Any script, any compromised dependency, any badly-scoped tool can read it. I've tested this with detect-secrets (the same tool OpenClaw uses in their CI/CD), and it flags plaintext keys in .env files as high-severity.

Here's what I do instead:

Use a secrets manager. Even locally, you want programmatic access to secrets that doesn't involve files.

# Example: Using macOS Keychain
security add-generic-password \
  -s "openclaw-anthropic-key" \
  -a "$(whoami)" \
  -w "sk-ant-api03-xxx"

Then in your OpenClaw startup script:

export ANTHROPIC_API_KEY=$(security find-generic-password \
  -s "openclaw-anthropic-key" \
  -a "$(whoami)" \
  -w)

If you're on Linux, pass or secret-tool work similarly. On Docker, mount secrets as tmpfs:

services:
  openclaw:
    environment:
      ANTHROPIC_API_KEY_FILE: /run/secrets/anthropic_key
    secrets:
      - anthropic_key
secrets:
  anthropic_key:
    file: ./secrets/anthropic_key.txt

This approach aligns with what CyberArk recommends for AI agent secrets — separate the secret storage from the application runtime.

File Permissions Matter

If you must use a .env file (I get it, sometimes you just need to ship), at minimum:

chmod 600 .env
chown $(whoami):$(whoami) .env

That restricts read/write to your user only. But seriously, this is the bare minimum. The 2026 trend in AI agent security is moving toward just-in-time secrets that expire after minutes.
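If you want that guard in script form, here's a small helper I use — my own sketch, not an OpenClaw feature — that refuses to proceed when a secrets file is readable by anyone but you:

```shell
# check_env_perms: warn and fail if a secrets file isn't mode 600.
# Uses GNU stat's -c flag, falling back to BSD/macOS stat's -f.
check_env_perms() {
  f="$1"
  perms=$(stat -c '%a' "$f" 2>/dev/null || stat -f '%Lp' "$f")
  if [ "$perms" != "600" ]; then
    echo "WARNING: $f is mode $perms, expected 600" >&2
    return 1
  fi
  echo "OK: $f is 600"
}
```

Drop it at the top of your OpenClaw startup script so a sloppy chmod never makes it into a running instance.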

Skills Are Executable Code

Here's something I didn't expect: OpenClaw's "skills" (those SKILL.md folders) can contain executable scripts. I tested a malicious skill from a GitHub repo — the kind that looks helpful but includes a hidden curl command to an external server.

OpenClaw ran it. No warning, no sandbox.

Solution: Treat skill folders like you'd treat dependencies. Only install skills from sources you trust. Run openclaw security audit --deep (available since their December 2025 security update) to scan for suspicious patterns.
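Until an audit tool earns your trust, a crude grep tripwire catches the most common tells — network fetches piped into a shell, base64 decoding, sudo. This is my own stopgap, not part of OpenClaw, and a determined attacker will evade it:

```shell
# scan_skills: flag obviously suspicious patterns in a skills folder.
# A tripwire, not a real audit.
scan_skills() {
  dir="$1"
  if grep -rEn 'curl[^|]*\|[[:space:]]*(ba)?sh|wget[^|]*\|[[:space:]]*(ba)?sh|base64 -d|sudo ' "$dir"; then
    echo "Suspicious patterns found -- review before loading" >&2
    return 1
  fi
  echo "No obvious red flags in $dir"
}
```

Run it against any skill folder before you load it; a nonzero exit means go read the flagged lines.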

I keep my skills in a separate repo with commit signing:

cd /path/to/skills
git config commit.gpgsign true
git config user.signingkey YOUR_GPG_KEY

That way I can verify who wrote each skill before I load it.


Network Access + Reverse Proxy Basics

OpenClaw's web UI is not built for the public internet. The official stance is clear: local use only.

But here's the problem: if you're running OpenClaw on a home server or VPS, you need some way to access it remotely. That's where a reverse proxy comes in.

Why Caddy Over Nginx

I tested both. Caddy won for three reasons:

  1. Automatic HTTPS with Let's Encrypt — no cert management scripts
  2. Simpler config syntax — I can read it three months later
  3. Built-in security headers (X-Frame-Options, CSP) by default

Here's my actual Caddyfile:

openclaw.yourdomain.com {
    reverse_proxy localhost:3000
    # Only allow your IP (adjust as needed)
    @blocked not remote_ip 203.0.113.0/24
    respond @blocked "Forbidden" 403
    # Security headers
    header {
        Strict-Transport-Security "max-age=31536000"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "DENY"
        Referrer-Policy "no-referrer-when-downgrade"
    }
}

Start Caddy: caddy run --config Caddyfile

That config does four things:

  • Terminates TLS at the proxy
  • Blocks all IPs except your network
  • Adds HSTS and anti-clickjacking headers
  • Forwards to OpenClaw on localhost:3000

The TLS cert? Caddy provisions it automatically from Let's Encrypt when it starts up.

I've also tested DigitalOcean's 1-click hardened OpenClaw image, which ships with Caddy pre-configured. It adds IP-based cert issuance (no domain needed) and gateway key pairing. Solid option if you don't want to DIY.

Firewall the Direct Port

Even with Caddy in front, lock down the OpenClaw port:

# Ubuntu/Debian example
sudo ufw allow 443/tcp    # Caddy HTTPS
sudo ufw deny 3000/tcp    # Block direct OpenClaw access
sudo ufw enable

Now the only way in is through Caddy and its IP allowlist.

Node.js Version Matters

OpenClaw requires Node.js 22.12.0 or later to patch CVE-2025-59466 (async_hooks DoS) and CVE-2026-21636 (permission model bypass). Check yours:

node --version  # Should show v22.12.0+

If you're on an older version, upgrade. These aren't theoretical — async_hooks bugs have been exploited in production AI agents.
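If you script your setup, you can gate on the version instead of eyeballing it. A sketch using sort -V (GNU coreutils); the minimum below is the requirement stated above:

```shell
# node_ok: succeed only if the installed Node.js meets the patched minimum.
MIN_NODE="22.12.0"
node_ok() {
  have="${1#v}"  # strip the leading "v" from `node --version` output
  # Sort both versions; if the minimum sorts first (or ties), we're current.
  [ "$(printf '%s\n%s\n' "$MIN_NODE" "$have" | sort -V | head -n1)" = "$MIN_NODE" ]
}

# usage: node_ok "$(node --version)" || { echo "upgrade Node.js" >&2; exit 1; }
```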


Least Privilege for Service Connectors

This is where most people give OpenClaw too much rope.

Google Calendar Example

When you connect OpenClaw to Google Calendar, you get a consent screen asking for permissions. The default scope is usually https://www.googleapis.com/auth/calendar — full read/write access to all calendars.

I scaled that back:

  1. Create a Google Cloud project
  2. Enable Calendar API
  3. Create OAuth credentials with read-only scope first: https://www.googleapis.com/auth/calendar.readonly
  4. Test that OpenClaw can read events
  5. Only then add write scope if needed

Same logic for Gmail, Slack, any API. Start with the minimum scope that unblocks your workflow, then expand only when you hit a real need.

Service Account Isolation

If OpenClaw needs to access cloud resources (S3, GCP buckets, etc.), don't use your personal IAM credentials. Create a dedicated service account:

# Example: GCP service account
gcloud iam service-accounts create openclaw-agent \
    --display-name="OpenClaw Agent"
# Grant only bucket read
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:openclaw-agent@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"

This follows the Zero Trust principles for AI agents — every identity gets just enough access to do its job.

I learned this the hard way when I gave OpenClaw full S3 access and it accidentally deleted a test bucket during a "clean up old files" task. Lesson learned.

Docker Capabilities (If You're Containerized)

Running OpenClaw in Docker? Drop unnecessary capabilities:

docker run \
  --read-only \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  -v openclaw-data:/app/data \
  openclaw/openclaw:latest

That --cap-drop=ALL prevents container breakout via kernel exploits. Only add back what you need (in this case, binding to port 443).
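If you run compose instead of raw docker run, the same flags translate roughly as below. The image name and volume come from the command above; no-new-privileges is an extra hardening flag I'd add, not something OpenClaw requires:

```yaml
services:
  openclaw:
    image: openclaw/openclaw:latest
    read_only: true
    cap_drop: [ALL]
    cap_add: [NET_BIND_SERVICE]
    security_opt:
      - no-new-privileges:true   # block setuid privilege escalation
    tmpfs:
      - /tmp                     # writable scratch space despite read_only
    volumes:
      - openclaw-data:/app/data
volumes:
  openclaw-data:
```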


Logging, Retention, and Audit Trails

You can't secure what you can't see.

Enable Structured Logging

OpenClaw logs to stdout by default. That's fine for dev, but in prod you want:

  1. Structured JSON logs
  2. Persistent storage
  3. Searchable interface

I pipe logs to a local file and rotate them:

# Start OpenClaw with tee for dual output
openclaw start 2>&1 | tee -a /var/log/openclaw/openclaw.log
# Prune logs older than a week (add to cron)
find /var/log/openclaw -name "*.log" -mtime +7 -delete
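If cron-driven deletion feels blunt, a logrotate drop-in compresses before it prunes. A sketch — the path /etc/logrotate.d/openclaw follows logrotate convention, not an OpenClaw default:

```conf
/var/log/openclaw/*.log {
    weekly
    rotate 4          # keep four weeks of compressed history
    compress
    missingok
    notifempty
    copytruncate      # rotate without restarting OpenClaw
}
```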

Better yet, use journald if you're on systemd:

# Run as systemd service
sudo systemctl enable openclaw
sudo journalctl -u openclaw -f  # Follow logs

What to Log

These events matter for security:

  • Authentication attempts (who paired devices, when)
  • Tool execution (which commands ran, success/failure)
  • API calls (to Anthropic, Google, etc.)
  • Skill loads (which SKILL.md files were activated)

I grep logs weekly for red flags:

# Check for failed auth
journalctl -u openclaw | grep "pairing failed"
# Look for unusual commands
journalctl -u openclaw | grep "exec:" | grep -E "(rm -rf|sudo|curl.*sh)"
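I eventually wrapped those greps in a tiny audit function so the weekly check is one command. My own sketch — it reads an exported log file, so dump journalctl to a file first:

```shell
# audit_log: count the red flags above in an exported OpenClaw log.
# Succeeds (exit 0) only when the log is clean.
audit_log() {
  log="$1"
  fails=$(grep -c "pairing failed" "$log" || true)
  risky=$(grep "exec:" "$log" | grep -cE 'rm -rf|sudo|curl.*sh' || true)
  echo "failed pairings: $fails"
  echo "risky commands: $risky"
  [ "$fails" -eq 0 ] && [ "$risky" -eq 0 ]
}

# usage: journalctl -u openclaw > /tmp/openclaw.log && audit_log /tmp/openclaw.log
```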

Retention Policy

GDPR and similar regulations push toward minimal data retention. I keep:

  • 30 days of operational logs
  • 90 days of security events (auth failures, unusual commands)
  • Indefinite audit trail of pairing/depairing events

After that, I scrub personally identifiable info and archive compressed logs to S3 Glacier.

External Monitoring

For production setups, I send critical events to an external service. Options I've tested:

| Service | Best For | Cost | Notes |
|---|---|---|---|
| Datadog | Centralized dashboards | $$$ | Great APM integration |
| Grafana Loki | Self-hosted | $ | Pairs with Prometheus |
| Papertrail | Simple log search | $$ | 7-day free tier |

I landed on Loki because I already run Grafana for other services. Setup took 20 minutes — way easier than I expected.


Putting It All Together: My 5-Minute Hardening Checklist

Walk through this before you connect real accounts:

✓ Secrets

  • [ ] Move API keys to keychain/secrets manager
  • [ ] .env file is chmod 600 or doesn't exist
  • [ ] Skills repo is commit-signed

✓ Network

  • [ ] Reverse proxy (Caddy) is running with IP allowlist
  • [ ] Direct OpenClaw port is firewalled
  • [ ] Node.js is v22.12.0+

✓ Permissions

  • [ ] OAuth scopes are read-only first
  • [ ] Service accounts exist (no personal creds)
  • [ ] Docker runs with --cap-drop=ALL (if containerized)

✓ Monitoring

  • [ ] Logs are structured and rotating
  • [ ] Weekly grep for suspicious commands
  • [ ] Audit trail of device pairings

✓ Fail-Safe

  • [ ] Backup of config files (encrypted)
  • [ ] Documented rollback procedure
  • [ ] Emergency kill switch (e.g., systemctl stop openclaw)

I run this checklist every time I spin up a new OpenClaw instance. Takes five minutes. Saves hours of cleanup.
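Part of that checklist automates cleanly. This preflight is my own helper (the variable names come from the secrets section earlier); it fails fast when a required secret never made it into the environment, without ever echoing values:

```shell
# preflight: verify required secrets are present without printing them.
preflight() {
  missing=0
  for var in ANTHROPIC_API_KEY OPENAI_API_KEY; do
    val=$(eval "printf '%s' \"\${$var}\"")   # indirect lookup, POSIX-sh safe
    if [ -z "$val" ]; then
      echo "MISSING: $var" >&2
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "preflight OK"
}
```

Wire it into your startup script so OpenClaw refuses to boot half-configured.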


When Security Still Isn't Enough

Look, I'll be honest: even after all this hardening, OpenClaw is a fundamentally high-trust system. You're giving an AI shell access and API keys. That's a big ask.

If your threat model includes:

  • State-level adversaries
  • Compliance requirements (HIPAA, SOC 2)
  • Mission-critical infrastructure

...you probably shouldn't run OpenClaw in its current form. The Cloud Security Alliance's 2026 predictions call this out: most AI agent frameworks aren't ready for enterprise zero-trust environments yet.

For everyone else — hobbyists, personal automation, internal tools — this checklist gets you to "reasonable paranoia" territory. Not perfect, but good enough that you won't be the embarrassing case study in next year's security conference.

If you'd rather not manage all of this by hand, there's a shortcut: create a free Macaron account and run workflows with built-in secrets management, access controls, and audit logs — secure by default.

Hey, I’m Hanks — a workflow tinkerer and AI tool obsessive with over a decade of hands-on experience in automation, SaaS, and content creation. I spend my days testing tools so you don’t have to, breaking down complex processes into simple, actionable steps, and digging into the numbers behind “what actually works.”
