Codex App Pricing & Access: ChatGPT Plans, Limits & Costs 2026

Hey fellow AI tinkerers — I'm Hanks, and I've spent the past three years testing workflow tools inside real projects, not demos.

I've been running OpenAI's Codex through actual coding sessions since they launched it last year. My first question wasn't "what features does it have?" It was:

Can I actually access this without switching plans? And when I hit limits, what happens to my work?

That question matters because Codex doesn't live in one place. It's in the web interface, the CLI, IDE extensions, and now a standalone macOS app (launched February 2, 2026). Access rules differ depending on where you're working and which ChatGPT plan you're on.

Here's what I found after running Codex across all these surfaces for the past few weeks.


Where Codex Exists (App vs Cloud vs IDE/CLI)

Codex shows up in four different places:

  1. ChatGPT Web — Cloud-based agent running in isolated sandboxes
  2. Codex CLI — Command-line tool for local terminal workflows
  3. IDE Extensions — VS Code, Cursor, Windsurf integrations
  4. Codex App (macOS only, Windows coming) — Multi-agent command center

All four connect to your ChatGPT account. But here's the part that confused me at first: your usage limit is shared across local and cloud tasks within the same 5-hour window.

When I was testing a refactor in the CLI and then switched to the web interface for a different project, I noticed my remaining quota dropped faster than expected. Turns out local messages (CLI/IDE) and cloud tasks (web/app) pull from the same bucket.

This 5-hour rolling window resets continuously. If you use 20 messages at 2pm, those slots free up again at 7pm. But there's also a weekly cap that limits total usage across the entire week.
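
If it helps, here's the toy model I keep in my head for how that bucket behaves. This is a sketch of the behavior as I understand it, not OpenAI's implementation; the window ceiling and weekly cap numbers are placeholders I made up for illustration.

```python
from collections import deque
from datetime import datetime, timedelta

# Toy model of the limit behavior described above -- my own mental model,
# not OpenAI's actual implementation. Local (CLI/IDE) and cloud (web/app)
# messages share one rolling 5-hour bucket, with a separate weekly cap.
WINDOW = timedelta(hours=5)
WINDOW_LIMIT = 150      # assumed Plus-tier ceiling per 5-hour window
WEEKLY_LIMIT = 3000     # hypothetical weekly cap, purely for illustration

class QuotaModel:
    def __init__(self):
        self.sent = deque()     # timestamps of messages inside the window
        self.weekly_used = 0

    def send(self, now, count=1):
        # Expire anything older than 5 hours -- the "slots free up at 7pm" part.
        while self.sent and now - self.sent[0] > WINDOW:
            self.sent.popleft()
        if len(self.sent) + count > WINDOW_LIMIT:
            return False        # window exhausted; wait for old slots to expire
        if self.weekly_used + count > WEEKLY_LIMIT:
            return False        # weekly cap hit; resets on its own schedule
        self.sent.extend([now] * count)
        self.weekly_used += count
        return True

q = QuotaModel()
two_pm = datetime(2026, 2, 10, 14, 0)
print(q.send(two_pm, 20))                                   # True: 20 messages at 2pm
print(q.send(two_pm + timedelta(hours=5, minutes=1), 150))  # True: those slots freed after 7pm
```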

I hit the weekly limit on a Sunday evening after six full coding sessions. One "session" for me is roughly 2 hours of active work, which in practice was enough to exhaust a 5-hour window's message allowance well before the window itself reset.

| Surface | Execution | Context Handling | Use Case |
|---|---|---|---|
| ChatGPT Web | Cloud sandbox | Strong, isolated per task | Delegating full features |
| CLI | Local machine | Fast, direct repo access | Quick edits, debugging |
| IDE Extension | Local editor | Inline, continuous | Pairing during active coding |
| macOS App | Cloud + local hybrid | Multi-project, parallel agents | Managing multiple long-running tasks |

Reality check: If you're planning to use Codex across multiple surfaces simultaneously, track your quota using /status in the CLI or check the Codex usage dashboard.


Plans That Include Codex (What "Included" Means)

As of February 2026, here's how Codex access breaks down by plan:

| Plan | Monthly Cost | Codex Access | Local Messages (per 5 hr) | Cloud Tasks (per 5 hr) | Additional Purchase |
|---|---|---|---|---|---|
| Free | $0 | Limited-time promo* | ~10-30 | Limited | No |
| Go | $8 | Limited-time promo* | ~15-50 | Limited | No |
| Plus | $20 | ✅ Included | 30-150** | Generous | Yes (credits) |
| Pro | $200 | ✅ Included | 300-1,500** | Extensive | Yes (credits) |
| Business | Custom | ✅ Included | Custom limits | Custom | Yes (workspace credits) |
| Enterprise | Custom | ✅ Included | Custom limits | Custom | Yes (workspace credits) |
* OpenAI is temporarily offering Codex to Free and Go users (announced Feb 2, 2026) while also doubling rate limits for paid plans. This is explicitly a limited-time promotion.

** Ranges reflect task complexity. A simple function edit = 1 message. Multi-file refactor with test runs = significantly more.

When I first saw "30-150 messages," I thought it was vague. Then I ran a real test: asking Codex to migrate a feature across 8 files with type checking. That single task consumed what felt like 15-20 message equivalents because it had to iterate, run tests, fix failures, and re-run.

The actual number you get depends on:

  • Prompt size (more context = more tokens)
  • Repository size loaded into memory
  • Number of files touched
  • Test execution rounds
  • MCP server connections (if using)

To stretch limits further, OpenAI suggests using GPT-5.1-Codex-Mini for simpler tasks. According to their pricing documentation, switching to Mini can extend your usage roughly 4x.
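
Here's the back-of-the-envelope math I use when planning a session. The per-task cost is my own rough observation from that migration test, and the Mini multiplier is OpenAI's "roughly 4x" figure, so treat the outputs as ballpark numbers.

```python
# Back-of-envelope math on how far a Plus window stretches. The per-task
# cost below is my rough observation from the 8-file migration, not an
# official figure, and the budget uses the top of the "30-150" range.
plus_window_budget = 150
complex_task_cost = 18          # ~15-20 message equivalents per heavy task
mini_multiplier = 4             # OpenAI's "roughly 4x" guidance for Codex-Mini

print(plus_window_budget // complex_task_cost)                    # 8 heavy tasks per window
print(plus_window_budget * mini_multiplier // complex_task_cost)  # 33 with Mini on the same tasks
```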

Why Limits Matter for Multi-Agent Workflows

Here's where this gets interesting if you're using the new macOS app.

The Codex app lets you run multiple agents in parallel across different projects. Each agent thread operates independently. But here's what I discovered: every active agent draws from your shared 5-hour limit pool.

I tried supervising three agents simultaneously:

  • Agent 1: Backend API refactor
  • Agent 2: Frontend component migration
  • Agent 3: Database schema update

All three burned through my Plus tier limit in about 90 minutes of real work. Not 90 minutes of waiting — 90 minutes of active agent execution.

The app's worktree isolation is brilliant for keeping contexts separate. But the limit structure doesn't scale proportionally with the number of agents you're running.

If you're planning to use the multi-agent workflow seriously, Pro tier ($200/month) becomes non-optional. The 300-1,500 message range gives you room to actually supervise multiple long-running tasks without constant interruptions.
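
To show why, here's the rough arithmetic behind that 90-minute burn. The per-agent message rate is a guess based on my own session, not a measured spec, so the exact hours will vary.

```python
# Why parallel agents drain the shared pool so quickly: burn rate scales
# with agent count, but the per-window budget does not. The per-agent
# rate here is a guess from my 90-minute session, not a measured number.
agents = 3
msgs_per_agent_per_hour = 35          # assumed: an agent iterating on code + tests
plus_budget, pro_budget = 150, 1500   # top of each tier's 5-hour range

burn_rate = agents * msgs_per_agent_per_hour   # 105 messages/hour combined
print(round(plus_budget / burn_rate, 1))       # ~1.4 hours before Plus runs dry
print(round(pro_budget / burn_rate, 1))        # ~14.3 hours of headroom on Pro
```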

For reference, Anthropic's Claude Code (their competing product) offers 200-800 prompts on their $200 tier. OpenAI's higher ceiling here is real.


Access Issues Checklist (Why You Don't See It)

I've talked to several developers who signed up for Plus and couldn't find Codex. Here's the troubleshooting path I walked through with them:

  1. Check Your Region

Codex is available globally wherever ChatGPT operates. But there are temporary regional exceptions. Cambodia, Laos, and Nepal had iOS subscription issues for ChatGPT Go as of January 2026. If you're in a restricted region, web access might still work while mobile doesn't.

  2. Verify Your Plan Status

"Included with Plus" means your subscription must be active and paid. If you're in a trial period or payment failed, access gets restricted. Check your account status.

  3. Wait for Rollout

The initial Codex launch (April 2025) went to Pro/Enterprise first. Plus users got access in June 2025. The macOS app launched February 2026. If you signed up right after a major announcement, you might be in a staged rollout queue.

  4. Update Your CLI/IDE Extension

I had one case where the CLI was still configured with an old API key from when Codex required separate API authentication. Running codex logout and then launching codex again switched it over to subscription-based auth, and suddenly everything worked.

For the VS Code extension, uninstall and reinstall from the marketplace. The extension should auto-detect your ChatGPT login.

  5. Look for the Right Interface Elements

  • Web: Sidebar should show a "Code" button when starting tasks
  • CLI: Run the codex command after installing via npm or Homebrew
  • App: Download from openai.com/codex-app (macOS only currently)
  • IDE: Extension icon appears in sidebar after installation

If none of these surfaces show Codex options after verifying your plan is active, contact OpenAI support.

At Macaron, we've been watching how developers actually work with AI agents — not in demos, but in daily workflows where context switching and limit management become real friction points. We built our platform to handle exactly this kind of handoff: turning scattered conversations and file uploads into structured, executable workflows without hitting arbitrary quota walls or losing context between tools. There's a free tier, so you can test it with real tasks and judge the results yourself.


FAQ

Q: Does Codex work on Windows? The standalone app is macOS-only as of February 2026. Windows version is planned but no release date. CLI and IDE extensions work on Windows today.

Q: Can I use Codex and GitHub Copilot together? Yes. They operate independently. Copilot handles inline suggestions. Codex manages task-based workflows in separate threads. I use both — Copilot for auto-complete during active typing, Codex for delegating full features when I step away.

Q: What happens when I hit my limit mid-task? The agent stops accepting new commands until your 5-hour window resets. Any task already running completes. You can either wait (the timer shows remaining time) or purchase additional credits to continue immediately.

Q: Are API calls separate from subscription limits? Yes. If you configure the CLI with an API key instead of ChatGPT login, usage is charged at standard API rates ($1.50/1M input tokens, $6/1M output for codex-mini-latest). This can be an escape valve if you hit subscription limits but need to keep working.
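
For a sense of scale, here's the arithmetic on a hypothetical medium-size task at those rates; the token counts are made up for illustration.

```python
# Rough cost check at the codex-mini-latest rates quoted above.
# The token counts are a made-up medium-size task, just for scale.
input_tokens, output_tokens = 200_000, 50_000
cost = input_tokens / 1e6 * 1.50 + output_tokens / 1e6 * 6.00
print(f"${cost:.2f}")   # $0.60 for this hypothetical task
```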

Q: Does the free tier promo mean unlimited access? No. Free and Go users get "limited" access — significantly lower quotas than Plus. Think 10-30 local messages per 5 hours vs. 30-150 for Plus. The promo is temporary (as of Feb 2026) and will eventually revert to paid-only access.

Q: Can I see my current usage in real-time? Yes. CLI: type /status. Web/App: check the Codex dashboard. This shows remaining quota and next reset time.

Hey, I’m Hanks — a workflow tinkerer and AI tool obsessive with over a decade of hands-on experience in automation, SaaS, and content creation. I spend my days testing tools so you don’t have to, breaking down complex processes into simple, actionable steps, and digging into the numbers behind “what actually works.”

Apply to become Macaron's first friends