
Hey fellow AI tinkerers — Most tutorials show you the "happy path" of installation, but they rarely tell you whether the tool survives past the first impressive weekend. That's the gap this guide is meant to fill.
I’m going to show you how to connect Moltbot to Telegram, but more importantly, I’ll share the specific configuration tweaks—from Docker setups to dual LLM providers—that turned a fragile experiment into a stable, daily driver for my research workflows.
Before diving into setup, let me be clear about what I was actually testing:
Can Moltbot on Telegram handle daily research workflows without constant babysitting?
Not "does it work?" — but "does it stay working when I'm not watching it?"
Because here's the thing: I've built enough automation to know that Week 1 performance means nothing. The real test is Week 3, when you've forgotten how it works and just need it to respond.
That's the filter I ran everything through.
The first step is purely administrative — you're just creating the Telegram entity that Moltbot will control. This part is stable. I've done it four times across different projects and it's never failed.
Open Telegram. Search for @BotFather. Start a chat.
Send /newbot.
It'll ask for two things: a display name (what shows up in the chat) and a username, which has to be unique and end in "bot".
That's it. Takes 90 seconds.
One thing I learned the hard way: pick a username you won't want to change. You can't rename bots later without creating a new one and losing the token. I initially called mine "test_moltbot" and regretted it within a week when I realized this was going into production.
For detailed information on bot capabilities and authentication, refer to the Telegram Bot API documentation.
The moment you confirm the username, BotFather sends you an API token. It looks like:
1234567890:ABCDEF1234567890abcdef
Copy it immediately. Don't close the chat. Don't assume you'll remember where it is.
I use 1Password for this, but any password manager works. The key thing: treat this like a password. Anyone with this token controls your bot.
If you lose it, you can retrieve it later (/mybots → select your bot → "API Token"), but why create unnecessary friction?
Store it. Move on.
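Before you do move on, a ten-second sanity check never hurts. The token works directly against the Telegram Bot API; this is plain curl against the documented getMe endpoint, with your own token substituted in:
bash
curl -s "https://api.telegram.org/botYOUR_BOT_TOKEN/getMe"
# a valid token returns {"ok":true,"result":{...}} including your bot's username; anything else means the token is wrong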

This is where things get real. You're connecting the Telegram bot to Moltbot's actual AI engine.
I'm assuming you already have Moltbot running somewhere — a VPS, a local machine, Docker on a home server, whatever. If not, you'll need to handle the base installation first (Node.js 22+, API keys for your LLM provider, etc.).
Here's what worked for me:
I ran this on a DigitalOcean droplet (Ubuntu 24.04, 2GB RAM). For those new to deploying Moltbot, DigitalOcean provides an official Moltbot Marketplace catalog with pre-configured deployment options.
If you're using Docker:
bash
git clone https://github.com/openclaw/openclaw.git
cd openclaw
./docker-setup.sh
This launches the onboarding wizard. You'll pick your AI provider (I used Anthropic), set your mode, and configure basics. To integrate Claude AI capabilities into your bot, review the Claude API documentation for best practices on model selection and parameter configuration. Credentials go into ~/.clawdbot/.env.
For comprehensive container orchestration, familiarize yourself with Docker Compose documentation to understand multi-container deployment patterns.

This is the critical step:
bash
docker compose run --rm openclaw-cli providers add --provider telegram --token YOUR_BOT_TOKEN
Replace YOUR_BOT_TOKEN with what you got from BotFather.
One thing that confused me initially: this command doesn't give you much feedback. It just... completes. No "Success!" message. No confirmation screen. The first time I ran it, I thought it failed and ran it again (which created a duplicate entry I had to manually remove from the config later).
If the command finishes without errors, it worked.
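Since there's no success message, I double-check the result instead of trusting it. Both commands below are sketches, not guaranteed interfaces: the list subcommand is an assumption about the CLI, and the grep just leans on the same ~/.clawdbot/ directory the onboarding wizard writes to.
bash
# if your CLI build exposes a list subcommand, the telegram provider should appear exactly once
docker compose run --rm openclaw-cli providers list
# failing that, grep the config directory and eyeball it for duplicate telegram entries
grep -rn "telegram" ~/.clawdbot/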


If you want the bot to read messages in group chats, go back to BotFather:
Send /mybots → select your bot → Bot Settings → Group Privacy → disable it.
I skipped this initially because I only wanted private DMs. Added it later when I realized I wanted the bot in a team channel. If you're unsure, enable it now — you can always restrict access via Moltbot's config later.
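You can confirm the change took effect from the Bot API itself: a bot's getMe response includes a can_read_all_group_messages flag, which should read true once Group Privacy is disabled (jq just trims the output; skip it if you don't have it installed):
bash
curl -s "https://api.telegram.org/botYOUR_BOT_TOKEN/getMe" | jq '.result.can_read_all_group_messages'
Once that looks right, restart the gateway so it picks up the Telegram provider you added: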
bash
docker compose restart openclaw-gateway
Check status:
bash
docker compose ps
The openclaw-gateway container should say "Up" or "Running."
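If you'd rather not eyeball the table, newer Docker Compose v2 releases accept a status filter; this prints the gateway's row when it's running and nothing at all when it isn't:
bash
docker compose ps --status running openclaw-gateway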
At this point, the connection exists. But it's not tested yet.
Here's where I hit the first real friction point.
I opened Telegram, searched for my bot, and sent "Hello."
Nothing happened.
Not an error. Not a "processing" indicator. Just silence.
I stared at it for about 30 seconds before realizing: pairing mode was enabled by default.
This is actually a good security feature — it prevents random people from accessing your bot — but the docs don't emphasize that you need to manually approve new users, even yourself.
Here's what I did:
bash
docker compose run --rm openclaw-cli pairing approve telegram JF4MSY23
This time, it responded.
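A note on the pairing code itself (JF4MSY23 above): my understanding is that it gets issued when an unapproved user first messages the bot, but that's an assumption about how pairing mode works here. If you don't catch it in the chat, the gateway logs are worth a look:
bash
docker compose logs openclaw-gateway | grep -i pairing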

One thing that surprised me: the bot doesn't show "typing" indicators. So if you send a complex query, there's just dead silence for 10 seconds, then a full response appears. This feels broken at first, but you get used to it.
Logs saved me here:
bash
docker compose logs openclaw-gateway
Whenever something felt off, I checked the logs. Most "failures" were actually me misconfiguring providers or hitting rate limits.
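One filter I lean on constantly, since provider misconfiguration and rate limits caused most of my "failures" (the grep pattern is a habit, not gospel; adjust it to whatever your logs actually print):
bash
# last hour only, so old noise doesn't bury the current problem
docker compose logs --since 1h openclaw-gateway | grep -iE "error|rate"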
This is the section I wish existed when I started. The basic setup works, but it's fragile. Here's what I changed to make it stable for daily use.
Moltbot runs a web UI on port 18789. By default, it's open to the internet. I locked it down:
bash
# allow the web UI only from one trusted address; everything else stays blocked by UFW's default deny policy
sudo ufw allow from YOUR_IP to any port 18789 proto tcp
Replace YOUR_IP with your own address, and make sure UFW is actually enabled with a default-deny incoming policy, or the rule accomplishes nothing. If you're on a VPS without a static IP, use Tailscale or a VPN instead of a raw IP allowlist.
I also considered leaving pairing mode off for convenience. Bad idea. The one time I disabled it (to test open access), I got a spam message from a random number within six hours. Re-enabled immediately.
One more caution: regenerating your credentials (by re-running onboarding) breaks all existing sessions. I learned this by accident while testing something else.
I started with Anthropic's Claude Sonnet. It worked, but I wanted to test fallback options.
Edit ~/.clawdbot/.env to add a secondary provider (OpenAI). Then restart:
bash
docker compose restart openclaw-gateway
If Claude's API goes down (happened once during my testing), Moltbot automatically switches to the backup. This single change made the system feel 10x more reliable.
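For reference, here is a minimal sketch of what the provider section of ~/.clawdbot/.env can look like with a primary and a fallback. The key names are my assumption about the convention; keep whatever names the onboarding wizard actually wrote and just add the second provider's key alongside them:
bash
# ~/.clawdbot/.env (illustrative key names, not guaranteed to match your build)
ANTHROPIC_API_KEY=sk-ant-...   # primary: Claude Sonnet
OPENAI_API_KEY=sk-...          # fallback: used if the primary provider errors out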

This is where Moltbot gets interesting — or overwhelming, depending on how you approach it.
I installed skills via ClawdHub:
bash
clawdhub install web-browser
clawdhub install file-handler
The web browser skill broke twice during my testing. The first time, it was a dependency issue (needed Playwright installed). The second time, it just... stopped working after an update. I had to reinstall it.
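For the dependency failure specifically: the missing piece was Playwright. Assuming the skill drives a Playwright-managed browser, something along these lines should restore it (run it wherever the skill's Node environment lives):
bash
npm install playwright
npx playwright install --with-deps chromium   # fetches the Chromium build Playwright expects, plus its OS dependencies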
My current approach: Only install skills you'll actually use. Every additional integration is a potential failure point.
Real-time logs:
bash
clawdbot logs --follow
I keep this open in a tmux session. Any time the bot feels slow, I check here first.
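If you want the same always-on tail, a detached tmux session does the job; the session name is arbitrary:
bash
tmux new-session -d -s moltbot-logs 'clawdbot logs --follow'
# reattach later with: tmux attach -t moltbot-logs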
Updates:
bash
git pull
./docker-setup.sh
I run this weekly. The OpenClaw GitHub repository is moving fast right now, and some updates include critical stability fixes.

Backup conversation memory:
Moltbot stores context in ~/clawd/. I back this up daily via a cron job. Losing this means losing all conversation history, which breaks context-heavy workflows.
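Here is a minimal sketch of that cron entry, assuming a ~/backups directory already exists and that ~/clawd/ is the only thing worth archiving. Add it via crontab -e; the % signs are escaped because cron treats them specially:
bash
15 3 * * * tar czf "$HOME/backups/clawd-$(date +\%F).tar.gz" -C "$HOME" clawd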
After two weeks of daily use, I stopped and asked myself: does this actually make my work faster, or is it just interesting?
For me, the answer is yes — but with conditions.
The biggest insight: Moltbot isn't a replacement for focused tools. It's a layer that sits between you and your work.
If your workflow is already stable, adding Moltbot might just create friction. But if you're constantly switching contexts — research, writing, file handling, quick queries — having one persistent AI agent that remembers everything starts to feel essential.
I currently run this setup inside Macaron, which is where I've consolidated most of my long-term automation. The advantage: I don't have to manage the VPS separately, and conversation memory persists across different tools.
If you're testing this for the first time, I'd recommend starting simple: one bot, one provider, no extra skills. Run it for a week inside a real task. See if you actually use it, or if it just sits there.
You can spin up a $6/month DigitalOcean droplet and kill it if it doesn't stick. For beginners, the DigitalOcean Moltbot Quickstart Guide provides step-by-step deployment instructions with minimal configuration overhead. That's what I did. Turned out I kept it.
If you're already in this problem space — juggling context across tools, wanting persistent AI memory, working primarily through messaging apps — you can test the same setup I'm using. We've built this workflow into Macaron specifically to remove the manual VPS management layer.
But whether you build it yourself or use our version, the core question stays the same: Does this tool survive past the demo phase?
For me, with Telegram + Moltbot, the answer was yes. Your real tasks will tell you if it's the same for you.
For those looking to dive deeper into specific implementation aspects, the Telegram Bot API documentation, the Docker Compose documentation, and the OpenClaw GitHub repository (all referenced earlier) cover deployment strategies and common troubleshooting in more depth than this guide does.
Macaron won’t replace every tool in your stack, but it can centralize persistent AI workflows like the one you’ve seen here. Explore Macaron, run it against your edge cases, and tell us where it holds and where it breaks. That feedback shapes the roadmap.