
Hey fellow AI tinkerers — I'm Hanks, and I've spent the past three years testing workflow tools inside real projects, not demos.
I've been running OpenAI's Codex through actual coding sessions since they launched it last year. My first question wasn't "what features does it have?" It was:
Can I actually access this without switching plans? And when I hit limits, what happens to my work?
That question matters because Codex doesn't live in one place. It's in the web interface, the CLI, IDE extensions, and now a standalone macOS app (launched February 2, 2026). Access rules differ depending on where you're working and which ChatGPT plan you're on.
Here's what I found after running Codex across all these surfaces for the past few weeks.


Codex shows up in four different places:

- The ChatGPT web interface
- The Codex CLI (terminal)
- IDE extensions (VS Code and compatible forks)
- The standalone macOS app

All four connect to your ChatGPT account. But here's the part that confused me at first: your usage limit is shared across local and cloud tasks within the same 5-hour window.
When I was testing a refactor in the CLI and then switched to the web interface for a different project, I noticed my remaining quota dropped faster than expected. Turns out local messages (CLI/IDE) and cloud tasks (web/app) pull from the same bucket.
This 5-hour rolling window resets continuously. If you use 20 messages at 2pm, those slots free up again at 7pm. But there's also a weekly cap that limits total usage across the entire week.
I hit the weekly limit on a Sunday evening after six full coding sessions. One "session" for me is roughly two hours of active work, which in practice was enough to exhaust most of a 5-hour window's allowance.
Reality check: if you're planning to use Codex across multiple surfaces simultaneously, track your quota with `/status` in the CLI or check the Codex usage dashboard.

As of February 2026, here's how Codex access breaks down by plan:

| Plan | Price | Messages per 5-hour window* |
| --- | --- | --- |
| Free / Go | $0 (temporary promo access) | ~10-30 local messages |
| Plus | $20/month | 30-150 |
| Pro | $200/month | 300-1,500 |

* Ranges reflect task complexity. A simple function edit = 1 message. A multi-file refactor with test runs = significantly more.
When I first saw "30-150 messages," I thought it was vague. Then I ran a real test: asking Codex to migrate a feature across 8 files with type checking. That single task consumed what felt like 15-20 message equivalents because it had to iterate, run tests, fix failures, and re-run.
The actual number you get depends on:

- How complex each task is (files touched, iterations needed)
- Whether the agent has to run tests, fix failures, and re-run
- Which model you pick (the full Codex model vs. GPT-5.1-Codex-Mini)
To stretch limits further, OpenAI suggests using GPT-5.1-Codex-Mini for simpler tasks. According to their pricing documentation, switching to Mini can extend your usage roughly 4x.
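If that ~4x figure holds, the budget math on a Plus-tier window looks like this (my own back-of-the-envelope, using the ranges above; the 50/50 split is an arbitrary example):

```python
# Rough message-budget arithmetic for a Plus-tier 5-hour window.
plus_low, plus_high = 30, 150  # full-model messages per window
mini_multiplier = 4            # OpenAI's rough claim for GPT-5.1-Codex-Mini

# All-Mini: the same window stretches roughly 4x further.
print(plus_low * mini_multiplier, plus_high * mini_multiplier)  # 120 600

# Mixed: spend half the budget on the full model, the rest on Mini.
# Each Mini message costs roughly 1/4 of a full-model slot.
full_used = plus_high / 2                               # 75 full-model messages
mini_budget = (plus_high - full_used) * mini_multiplier
print(int(full_used), int(mini_budget))                 # 75 300
```

The practical takeaway: route boilerplate edits and small fixes to Mini, and save full-model messages for multi-file work.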
Here's where this gets interesting if you're using the new macOS app.
The Codex app lets you run multiple agents in parallel across different projects. Each agent thread operates independently. But here's what I discovered: every active agent draws from your shared 5-hour limit pool.
I tried supervising three agents simultaneously, each on a different project.
All three burned through my Plus tier limit in about 90 minutes of real work. Not 90 minutes of waiting — 90 minutes of active agent execution.
The app's worktree isolation is brilliant for keeping contexts separate. But the limit structure doesn't scale proportionally with the number of agents you're running.
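A back-of-the-envelope model makes the burn rate concrete. Assuming N agents drain one shared pool at a similar per-agent rate (my numbers, not OpenAI's), my 90-minute experience is consistent with three busy agents each consuming around 33 message-equivalents per hour:

```python
def minutes_until_empty(pool: int, agents: int, msgs_per_agent_per_hour: float) -> float:
    """Toy burn-rate model: N parallel agents draining one shared message pool."""
    burn_per_minute = agents * msgs_per_agent_per_hour / 60
    return pool / burn_per_minute

# One agent at a steady 60 msgs/hour empties a 100-message pool in 100 minutes.
print(minutes_until_empty(100, 1, 60))  # 100.0

# Plus-tier ceiling of 150 messages, three agents iterating and re-running
# tests at ~33 message-equivalents per hour each.
print(round(minutes_until_empty(150, 3, 33.3)))  # ~90 minutes
```

The key point the model exposes: doubling your agents roughly halves your runway, because the pool doesn't grow with agent count.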
If you're planning to use the multi-agent workflow seriously, Pro tier ($200/month) becomes non-optional. The 300-1,500 message range gives you room to actually supervise multiple long-running tasks without constant interruptions.
For reference, Anthropic's Claude Code (their competing product) offers 200-800 prompts on their $200 tier. OpenAI's higher ceiling here is real.
I've talked to several developers who signed up for Plus and couldn't find Codex. Here's the troubleshooting path I walked through with them:
Codex is available globally wherever ChatGPT operates. But there are temporary regional exceptions. Cambodia, Laos, and Nepal had iOS subscription issues for ChatGPT Go as of January 2026. If you're in a restricted region, web access might still work while mobile doesn't.
"Included with Plus" means your subscription must be active and paid. If you're in a trial period or payment failed, access gets restricted. Check your account status.
The initial Codex launch (April 2025) went to Pro/Enterprise first. Plus users got access in June 2025. The macOS app launched February 2026. If you signed up right after a major announcement, you might be in a staged rollout queue.
I had one case where the CLI was still configured with an old API key from when Codex required separate API authentication. Running `codex logout` and then `codex` again switched it to subscription-based auth, and suddenly everything worked.
For the VS Code extension, uninstall and reinstall from the marketplace. The extension should auto-detect your ChatGPT login.
If the CLI seems missing entirely, confirm the `codex` command is on your PATH after installing via npm or Homebrew. And if none of these surfaces show Codex options after you've verified your plan is active, contact OpenAI support.
At Macaron, we've been watching how developers actually work with AI agents: not in demos, but in daily workflows where context switching and limit management become real friction points.

We built our platform to handle exactly this kind of handoff: turning scattered conversations and file uploads into structured, executable workflows without hitting arbitrary quota walls or losing context between tools. There's a free tier to test with real tasks, so you can judge the results yourself.
Q: Does Codex work on Windows? The standalone app is macOS-only as of February 2026. Windows version is planned but no release date. CLI and IDE extensions work on Windows today.
Q: Can I use Codex and GitHub Copilot together? Yes. They operate independently. Copilot handles inline suggestions. Codex manages task-based workflows in separate threads. I use both — Copilot for auto-complete during active typing, Codex for delegating full features when I step away.
Q: What happens when I hit my limit mid-task? The agent stops accepting new commands until your 5-hour window resets. Any task already running completes. You can either wait (the timer shows remaining time) or purchase additional credits to continue immediately.
Q: Are API calls separate from subscription limits? Yes. If you configure the CLI with an API key instead of ChatGPT login, usage charges at standard API rates ($1.50/1M input tokens, $6/1M output for codex-mini-latest). This can be an escape valve if you hit subscription limits but need to keep working.
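To estimate what that escape valve costs, here's the arithmetic at the codex-mini-latest rates quoted above (the 200k-in / 50k-out task size is my own illustrative assumption):

```python
# Per-token rates derived from the quoted prices per million tokens.
INPUT_RATE = 1.50 / 1_000_000   # $ per input token
OUTPUT_RATE = 6.00 / 1_000_000  # $ per output token

def task_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API-billed task at codex-mini-latest rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a mid-sized refactor pulling ~200k tokens of context in
# and generating ~50k tokens of edits and test output.
print(f"${task_cost(200_000, 50_000):.2f}")  # $0.60
```

At well under a dollar per sizable task, paying API rates for an evening is often cheaper than upgrading a tier you'd only need occasionally.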
Q: Does the free tier promo mean unlimited access? No. Free and Go users get "limited" access — significantly lower quotas than Plus. Think 10-30 local messages per 5 hours vs. 30-150 for Plus. The promo is temporary (as of Feb 2026) and will eventually revert to paid-only access.
Q: Can I see my current usage in real-time?
Yes. In the CLI, type `/status`. On the web or in the app, check the Codex usage dashboard. Both show remaining quota and the next reset time.