
Hey there, automation tinkerers. If you've been running OpenClaw locally and wondering how to add capabilities beyond the 100+ bundled skills—here's where I started.
I'm not here to tell you OpenClaw is perfect. I'm here because I've spent the past three weeks building, breaking, and rebuilding custom skills, and I wanted to see if this whole "teach your agent new tricks" thing actually holds up when you're not following a demo script.
The core question I kept asking: Can I build a skill that survives real use? Not a hello-world that works once. A skill that handles edge cases, respects permissions, and doesn't explode when I feed it weird input.
Here's what I found.

OpenClaw skills follow the AgentSkills specification—an open standard that Anthropic launched in December 2025 and that Microsoft, GitHub, Cursor, and others have adopted. This isn't OpenClaw-specific magic; it's a cross-platform format.
At minimum, a skill is a folder containing a SKILL.md file with a short YAML frontmatter block (name, description, and optional metadata) followed by plain Markdown instructions for the agent.
That's it. No complex config files. No build step. Just one Markdown file.
When OpenClaw loads skills at startup, it:
- Scans `~/.openclaw/skills`, `<workspace>/skills`, and the bundled skills
- Reads each folder's SKILL.md

The official OpenClaw docs break down the precedence:
- `<workspace>/skills` (highest priority)
- `~/.openclaw/skills` (managed/local)
- Bundled skills (lowest priority)

If the same skill name exists in multiple locations, workspace wins. This matters when you're testing: put your dev version in the workspace to override the bundled one.
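In shell terms, that override is just a copy. A sketch, with placeholder names and paths:

```bash
# The workspace copy of "my-skill" shadows ~/.openclaw/skills/my-skill on the next load.
WORKSPACE=~/my-agent-workspace          # placeholder; use your actual workspace
mkdir -p "$WORKSPACE/skills/my-skill"
cp ~/.openclaw/skills/my-skill/SKILL.md "$WORKSPACE/skills/my-skill/SKILL.md"
```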
Here's something the docs bury: every skill you enable costs tokens.
The OpenClaw skill documentation gives a formula for this overhead; it works out to a rough estimate of ~24 tokens per skill just for the listing, before any actual instructions.
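To see why that adds up, here's the back-of-envelope math using that ~24-token figure:

```bash
# Listing overhead alone, before any skill bodies are injected
echo "$((10 * 24)) tokens for 10 enabled skills"    # 240
echo "$((100 * 24)) tokens for 100 enabled skills"  # 2400
```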
This is why I don't enable every skill I build. I keep the workspace lean and only load what I'm actively using.

Let me show you the simplest skill I built that actually did something useful: a timestamp logger.
```markdown
---
name: timestamp-log
description: Append timestamped entries to a daily log file in ~/logs/
metadata: {"openclaw":{"requires":{"env":["HOME"]}}}
---
# Timestamp Logger

Use this skill when the user wants to log an event, note, or reminder with a timestamp.

## How it works
1. Check if `~/logs/` exists. If not, create it.
2. Generate current date in YYYY-MM-DD format.
3. Create or append to `~/logs/<date>.md`.
4. Format: `HH:MM - <user message>`

## Example usage
User: "Log: finished the API integration"
Output: Appended to `~/logs/2026-01-31.md`:
14:23 - finished the API integration

User: "Log: need to follow up with client tomorrow"
Output: Appended to the same file.

## Notes
- One file per day, plain text Markdown
- No database, no complex parsing
- Just date-stamped entries
```
That's the entire skill. ~30 lines of Markdown.
When I tested this, the agent created the `~/logs/` directory automatically.
The first time I tried it, I assumed it would need more structure, maybe a JSON config or a shell script. Nope. The instructions in the Markdown were enough for the agent to execute via OpenClaw's exec tool.
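Under the hood, that just means the agent translated the instructions into ordinary shell commands. Roughly something like this (a sketch; the exact commands vary from run to run):

```bash
# What a single "Log: ..." turn boils down to for the timestamp-log skill
mkdir -p ~/logs
echo "$(date +%H:%M) - finished the API integration" >> ~/logs/"$(date +%Y-%m-%d)".md
```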

Here's where it got real: OpenClaw runs with the permissions you give it.
If you're running in sandbox mode, skills execute in a Docker container with limited filesystem access. If you're running unsandboxed, the agent can read/write anywhere your user account can.
The OpenClaw security documentation is explicit about this: "There is no 'perfectly secure' setup." You're trading convenience for control.
You can require specific conditions before a skill loads. The `metadata.openclaw.requires` field supports checks like `bins` (binaries that must be on the PATH) and `env` (environment variables that must be set).
Example from a real skill:
```markdown
---
name: github-cli
description: Interact with GitHub repos via gh CLI
metadata: {"openclaw":{"requires":{"bins":["gh"],"env":["GITHUB_TOKEN"]}}}
---
```
If gh isn't installed or GITHUB_TOKEN isn't set, OpenClaw won't load this skill.
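That gate is roughly equivalent to checks you can run yourself before enabling the skill (my own approximation, not OpenClaw's actual implementation):

```bash
# Approximate the requires.bins / requires.env check for the github-cli skill
command -v gh >/dev/null 2>&1 || echo "missing binary: gh"
[ -n "$GITHUB_TOKEN" ] || echo "missing env var: GITHUB_TOKEN"
```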
When I first tested skills, I kept hitting "command not found" errors. Turns out `requires.bins` checks the host system at load time. The official docs explain this:
> If an agent is sandboxed, the binary must also exist inside the container. Install it via `agents.defaults.sandbox.docker.setupCommand`.
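A config sketch of what that might look like for `gh`, assuming a Debian-based container image where the package is available (adjust the install command to your image):

```json
{
  "agents": {
    "defaults": {
      "sandbox": {
        "docker": {
          "setupCommand": "apt-get update && apt-get install -y gh"
        }
      }
    }
  }
}
```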
I ended up disabling sandbox for local testing and only enabling it for untrusted skills from ClawHub.

Testing skills locally without breaking your main workflow:
Create a test workspace:
```bash
mkdir ~/openclaw-test
cd ~/openclaw-test
mkdir skills
```
Put your SKILL.md in ~/openclaw-test/skills/<skill-name>/SKILL.md.
Configure OpenClaw to use this workspace in ~/.openclaw/openclaw.json:
```json
{
  "agents": {
    "defaults": {
      "workspace": "~/openclaw-test"
    }
  }
}
```
OpenClaw can auto-reload skills when you edit SKILL.md:
```json
{
  "skills": {
    "load": {
      "watch": true,
      "watchDebounceMs": 250
    }
  }
}
```
This saved me hours. Edit the skill, wait 250ms, and the next agent turn picks up the changes.
OpenClaw logs everything to ~/.openclaw/logs/.
When testing, I keep a terminal open running:
```bash
tail -f ~/.openclaw/logs/gateway.log
```
Look for skill load messages and errors, especially failed `requires` checks (missing binaries or unset environment variables).
I don't build the entire skill upfront. I start with the smallest version that proves the skill triggers at all.
For the timestamp-log skill, my first version just echoed the user message. No file I/O. Once that worked, I added the date logic. Then the file writing.
Small steps. Less debugging.
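For reference, that first pass can be as small as this. A sketch of the "just echo it back" stage, not the exact file I kept:

```markdown
---
name: timestamp-log
description: Echo back anything the user asks to log, with the current time (v0, no file I/O)
---
# Timestamp Logger (v0)

When the user says "Log: <message>", reply with the current time and the message.
Do not write any files yet.
```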

Once a skill works locally, you have three options:
Option one, keep it local: leave it in ~/.openclaw/skills/ or your workspace. No sharing, no versioning, just personal use.
Option two, share it on GitHub: push the skill folder to a repo, and others can clone it and drop it into their workspace. No formal packaging required, just a folder with SKILL.md and any supporting scripts.
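On the receiving end, installing from a repo is a single clone into the skills directory (URL and folder name here are hypothetical):

```bash
# Drop a shared skill into the test workspace from earlier; the repo URL is made up
git clone https://github.com/example/timestamp-log-skill ~/openclaw-test/skills/timestamp-log
```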
Option three, publish to ClawHub: ClawHub is OpenClaw's public skill registry. It's like npm for agent skills.
To publish:
```bash
clawhub sync --all
```
This syncs all of your local skills to the registry in one step.
Anyone can then install via:
```bash
clawhub install <your-skill-slug>
```
The Cisco AI Threat Research team analyzed 31,000 agent skills in January 2026 and found that 26% contained at least one vulnerability.
Their report details the main concerns.
Before installing third-party skills, I read the SKILL.md fully. ClawHub's moderation catches some issues, but you're the final gatekeeper.
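One quick, low-tech first pass before the full read, just a sketch and not a substitute for a real review, is to grep the skill for anything that reaches out to the network or touches credentials:

```bash
# Flag lines that shell out to the network or reference credential-ish names.
# The pattern list is illustrative, not exhaustive.
grep -nE 'curl|wget|nc |ssh |TOKEN|SECRET|API_KEY' path/to/skill/SKILL.md
```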
After three weeks of testing, the clearest pattern was in what held up versus what broke.
The best skills I built were the boring ones. Log a message. Format a date. Parse a file. Simple, testable, reliable.
At Macaron, we help teams test and run automations against real inputs and repeated ops tasks, without writing skills or managing manifests. Start with an actual workflow and see how your automation holds up when you run it again and again. Try it free and judge the results yourself.