Terminal AI Agent

OpenAI Codex CLI

The complete reference guide for OpenAI's open-source terminal coding agent. Interactive sessions, sandbox security, local models, MCP integration, and more.

npm i -g @openai/codex · v2026

Quick Reference

The commands and flags you will reach for most often

Core Commands
codex · interactive TUI session
codex exec "task" · non-interactive run
codex e "task" · exec alias
codex review · code review mode
codex resume --last · resume last session
codex fork · branch from session
codex apply diff · apply a patch
Essential Flags
-m MODEL · choose model
-s MODE · sandbox mode
-a POLICY · approval policy
-p PROFILE · config profile
-i IMAGE · attach image
--full-auto · safe automation
--oss · local model mode
Slash Commands
/model · switch model
/review · review changes
/mcp · MCP servers
/resume · resume session
/fork · fork session
/status · session info
/exit · quit session
Keyboard Shortcuts
@ · fuzzy file search
Esc Esc · edit previous message
Alt+M · cycle models
Alt+E · cycle reasoning effort
Enter · inject instruction
Tab · queue follow-up
Ctrl+C · close session
Sandbox Modes
read-only · no writes at all
workspace-write · write in repo only
danger-full-access · write anywhere
Approval Policies
untrusted · ask for everything
on-request · model decides
on-failure · ask only on error
never · fully automated
Quick Recipes
codex --full-auto · safe auto mode
codex -a on-failure · fast iteration
codex --oss -m ... · private local AI
codex exec --json · scripted output
codex review --base main · pre-PR review
codex -s read-only · analysis only
codex --add-dir .. · multi-project

Getting Started

Codex CLI is OpenAI's open-source coding agent built in Rust. It runs locally from your terminal, reads your codebase, makes edits, and executes commands while you review in real time.

Installation

npm (recommended)

npm i -g @openai/codex

Homebrew (macOS)

brew install --cask codex

You can also download platform-specific binaries directly from github.com/openai/codex/releases.

Platform Support

macOS · Full
Native support via npm or Homebrew cask.
Linux · Full
Full support, including WSL2 environments.
Windows · Experimental
Use a WSL2 workspace for the best experience.

Authentication

The first time you run Codex, it will prompt you to sign in. You have two options:

Auth
# Option 1: ChatGPT account (Plus, Pro, Business, Edu, Enterprise)
codex login

# Option 2: OpenAI API key
codex login --api-key

# Check status / log out
codex login --status
codex logout

Interactive Mode

Codex launches into a full-screen terminal UI where you can converse with the agent, review its actions in real time, and iterate on tasks together.

Launch
# Start in the current directory
codex

# Start with a specific model and profile
codex --model gpt-5-codex --profile dev

# Start with an image attachment
codex -i screenshot.png

Slash Commands

Type / in the composer to open the slash command popup. These are available during interactive sessions:

Command · Description
/model · Switch between models (gpt-5-codex, gpt-4.1, gpt-4.1-mini, etc.)
/review · Launch the code reviewer on current changes or against a branch
/mcp · View and manage active MCP servers
/resume · Open a picker to reload a saved session transcript
/fork · Branch a new session from a previous one, leaving the original intact
/status · Show current session info (ID, model, sandbox, etc.)
/help · List all supported slash commands and shortcuts
/exit · Exit the interactive session (also /quit)

Keyboard Shortcuts

Navigation

Key · Action
@ · Fuzzy file search over the workspace root
Up/Down · Navigate draft history in the composer
Esc Esc · Edit the previous user message; keep pressing to walk further back
Enter · Fork from the selected point after an Esc walkback
Tab/Enter · Select a file from the @ search results

During Active Tasks

Key · Action
Enter · Inject new instructions into the current turn
Tab · Queue a follow-up prompt for the next turn
Ctrl+C · Stop the current task / close the session
Alt+M · Cycle through available models
Alt+E · Cycle through reasoning effort levels

Non-Interactive Mode

Use codex exec for scripted or CI-style runs that finish without human interaction. Pipe results to stdout for downstream processing.

The exec Command

exec
# Basic non-interactive execution
codex exec "Add error handling to src/api.js"

# Short alias
codex e "Run tests and fix any failures"

# With JSON Lines output for scripting
codex exec --json "Generate changelog from git log"

# With a structured output schema
codex exec --output-schema schema.json "Analyze test coverage"

# Read-only analysis (default for exec)
codex exec --sandbox read-only "Review code quality"
The default sandbox for exec is read-only. In automation, grant the fewest permissions the task needs: use --sandbox workspace-write only when the task must change files.
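For downstream scripting, it helps to see what consuming the JSON Lines stream might look like. A minimal sketch under stated assumptions: the event field names "type" and "text" below are illustrative stand-ins, not the documented schema, so inspect real `codex exec --json` output from your version before scripting against it.

```shell
# Hypothetical sample standing in for `codex exec --json` output;
# the "type"/"text" field names are assumptions, not the real schema.
jsonl='{"type":"item.completed","text":"All tests pass"}
{"type":"turn.completed","tokens":1200}'

# Keep only completed-item text with grep/sed (no jq dependency)
printf '%s\n' "$jsonl" \
  | grep '"type":"item.completed"' \
  | sed 's/.*"text":"\([^"]*\)".*/\1/'
```

The same pipeline works unchanged when fed from a real invocation instead of the sample variable.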

The review Command

A dedicated code review mode that analyzes diffs and reports prioritized, actionable findings without modifying code.

review
# Review uncommitted changes
codex review

# Review against a base branch (pre-PR)
codex review --base main

# Review specific files
codex review src/api.js src/utils.js

CI/CD Integration

GitHub Action
# In your workflow YAML:
- uses: openai/codex-action@v1
  with:
    task: "Review PR for security issues"
    sandbox: read-only

Model Selection

Switch between OpenAI's flagship models, lightweight variants, and open-source models running locally through Ollama or LM Studio.

Available Models

Model · Type · Best For
gpt-5-codex · Flagship · Complex reasoning, large refactors, architecture decisions
gpt-5.3-codex · Enhanced · Improved code generation with the latest capabilities
gpt-4.1 · General · Balanced performance for everyday coding tasks
gpt-4.1-mini · Fast · Quick edits, simple tasks, low latency
gpt-oss:20b · Local · Local inference, privacy-sensitive codebases
gpt-oss:120b · Local · Higher-quality local inference, larger codebases

Using Local Models

Pass --oss to route inference through a local provider such as Ollama or LM Studio. All processing happens on your machine -- no data is sent to the cloud.

Ollama

# Install and start Ollama
curl -fsSL https://ollama.ai/install.sh | sh
ollama serve

# Pull a model
ollama pull gpt-oss:120b

# Use with Codex
codex --oss -m gpt-oss:120b

LM Studio

# 1. Load a model in LM Studio
# 2. Start the local server (port 1234)

# Config:
[providers.lmstudio]
type = "openai_compatible"
api_base = "http://localhost:1234/v1"

# Use with Codex
codex --oss -m local-model

Sandbox System

The OS-enforced sandbox limits what Codex can touch on your filesystem and network. Choose the right level of access for each task.

🔒
read-only
Can read files but cannot write anywhere, not even /tmp. Network blocked. Ideal for reviews and analysis.
📝
workspace-write
Write inside current repo and temp dirs. Network blocked by default. The standard development mode.
⚠️
danger-full-access
Write anywhere on system. Network allowed. Use only when absolutely necessary.
Sandbox
# Set the sandbox via flag
codex --sandbox read-only
codex -s workspace-write

# Add extra writable dirs (prefer this over danger-full-access)
codex --add-dir ../backend --add-dir ../shared

# Multi-project coordination
codex --cd apps/frontend --add-dir ../backend
Prefer --add-dir over danger-full-access. When you need write access to additional directories, grant them individually rather than disabling the sandbox entirely.

Default Sandbox by Command

Command · Default Sandbox · Reasoning
codex · workspace-write · Interactive development needs file edits
codex exec · read-only · Scripted runs should be minimal-permission
codex review · read-only · Reviews should never modify code
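The mapping in this table is easy to encode when writing wrapper scripts. A small sketch; `default_sandbox` is a hypothetical local helper mirroring the defaults listed above, not part of the CLI:

```shell
# Map a codex subcommand to the sandbox mode this guide lists as its default.
# (Hypothetical helper function, not a codex feature.)
default_sandbox() {
  case "$1" in
    exec|review) echo "read-only" ;;        # scripted runs and reviews
    *)           echo "workspace-write" ;;  # interactive sessions
  esac
}

default_sandbox exec    # prints: read-only
```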

Approval Policies

Control when Codex must stop and ask for permission before executing commands. Match the policy to your trust level and workflow.

untrusted · only safe commands auto-run
on-request · model decides when to ask
on-failure · auto-run, ask on error
never · fully automated
Policy · Behavior · Best For
untrusted · Only known-safe read-only commands (ls, cat, sed) auto-run; everything else prompts · Unfamiliar codebases, high-security work
on-request · Model decides when to ask for approval (default) · Balanced interactive work
on-failure · Auto-run all commands in the sandbox; prompt only when a command fails · Fast iteration with a safety net
never · Never prompt for approval (risky) · CI/CD pipelines, trusted automation

Full-Auto Mode

--full-auto is a convenience flag that combines the safest automatic settings:

# Equivalent to: -a on-request -s workspace-write
codex --full-auto

# Network stays blocked, the agent stays within the workspace:
# the safest way to let Codex run autonomously
Dangerous bypass: --dangerously-bypass-approvals-and-sandbox disables ALL safety checks. Only use when you fully understand the risks and trust the task completely.

MCP Integration

Model Context Protocol connects Codex to third-party tools and context -- documentation sites, browsers, databases, design tools, and more.

MCP CLI Commands

MCP
# Manage MCP servers
codex mcp list                 # List all servers
codex mcp get <server-name>    # Get server details
codex mcp add <server-name>    # Add a new server
codex mcp remove <server-name> # Remove a server

# MCP authentication
codex mcp login <server-name>
codex mcp logout <server-name>

# In-session: use /mcp to view active servers

Server Configuration

STDIO Server

# In ~/.codex/config.toml
[mcp.servers.browser]
type = "stdio"
command = "npx"
args = ["@mcp/server-browser"]

HTTP Streaming Server

# In ~/.codex/config.toml
[mcp.servers.remote-api]
type = "http"
url = "https://api.example.com/mcp"

Running Codex as an MCP Server

Codex can itself act as an MCP server, exposing the entire agent as a tool for external applications and multi-agent pipelines.

# Start Codex as an MCP server
codex mcp-server

# Orchestrate with the Agents SDK for multi-agent workflows

Popular Integrations

Browser Automation
Control browsers for web scraping, testing, or data extraction.
Figma
Access design specs and assets directly from design files.
Linear / GitHub
Interact with project management and issue tracking tools.
Database Tools
Query and analyze PostgreSQL, MySQL, and other databases.

Session Management

Codex stores session transcripts locally so you can resume, fork, or apply changes across multiple CLI invocations without losing context.

Resume

Resume
codex resume                 # Interactive picker of recent sessions
codex resume --last          # Jump to the most recent session (current dir)
codex resume --all           # Show sessions from all directories
codex resume <SESSION_ID>    # Target a specific session

Fork

Create a new session branched from a previous one. The original transcript stays untouched while you explore an alternative approach.

Fork
codex fork                 # Open the session picker to fork from
codex fork --last          # Fork the most recent session
codex fork <SESSION_ID>    # Fork a specific session

Apply

Apply generated patches and diffs from Codex output:

Apply
codex apply changes.diff     # Apply a diff from a file
git diff | codex apply       # Apply from stdin
codex a changes.patch        # Short alias

Cloud Tasks

Run tasks in the cloud for more compute, longer execution times, and access to larger models:

codex cloud exec "Analyze entire codebase for security issues"

Session Storage

Sessions are stored as JSONL files at ~/.codex/sessions/. You can inspect, back up, or clean up old sessions manually.
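Because transcripts are ordinary JSONL files, cleanup can be scripted. A sketch of a hypothetical `prune_sessions` helper (not a codex subcommand), assuming the `~/.codex/sessions` layout described above:

```shell
# Hypothetical cleanup helper: delete session transcripts older than N days.
# Assumes sessions are stored as *.jsonl files under the given directory.
prune_sessions() {
  dir=$1; days=${2:-30}
  [ -d "$dir" ] || return 0   # nothing to do if the directory is absent
  find "$dir" -name '*.jsonl' -mtime "+$days" -print -delete
}

prune_sessions "$HOME/.codex/sessions" 30
```

Dropping `-delete` turns the same call into a dry run that only lists what would be removed.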

Configuration

Codex reads its configuration from ~/.codex/config.toml. Use profiles to maintain different setups for different workflows.

Sample config.toml

config.toml
[defaults]
model = "gpt-5-codex"
sandbox = "workspace-write"
approval_policy = "on-request"
oss_provider = "ollama"

[providers.openai]
type = "openai"

[providers.ollama]
type = "ollama"
api_base = "http://localhost:11434"

[sandbox]
allow_network = false

[tui]
alternate_screen = true

[features]
unified_exec = true
web_search = true
auto_context = true

[mcp.servers.browser]
type = "stdio"
command = "npx"
args = ["@mcp/server-browser"]

Profiles

Profiles let you define named configuration presets. Load them with --profile or -p.

Profiles
[profiles.secure]
sandbox = "read-only"
approval_policy = "untrusted"
model = "gpt-4.1-mini"

[profiles.fast]
model = "gpt-4.1-mini"
approval_policy = "on-failure"
sandbox = "workspace-write"

[profiles.local]
oss_provider = "ollama"
model = "codellama:34b"
sandbox = "workspace-write"
# Use a profile
codex --profile secure
codex -p fast

Feature Flags

Feature flags control optional and experimental capabilities. Manage them from the CLI or in config.toml:

Features
codex features list                    # List all features
codex features enable unified_exec     # Enable a feature
codex features disable shell_snapshot  # Disable a feature

# Inline flags (session-only)
codex --enable web_search --enable auto_context
codex --disable feature_name

Shell Completion

Completion
codex completion bash > /etc/bash_completion.d/codex
codex completion zsh > "${fpath[1]}/_codex"
codex completion fish > ~/.config/fish/completions/codex.fish

CLI Flags Reference

Complete reference of every command-line flag organized by category.

Global Flags

Flag · Short · Description
--help · -h · Show help information
--version · -V · Display version
--model <MODEL> · -m · Override the model set in configuration
--profile <NAME> · -p · Load a configuration profile from config.toml
--config <KEY=VALUE> · -c · Override a configuration value inline
--cd <PATH> · Change the working directory before starting

Sandbox & Security

Flag · Short · Description
--sandbox <MODE> · -s · Set sandbox mode: read-only, workspace-write, danger-full-access
--ask-for-approval <POLICY> · -a · Set approval policy: untrusted, on-request, on-failure, never
--full-auto · Convenience for -a on-request -s workspace-write
--dangerously-bypass-approvals-and-sandbox · Disable all safety checks (dangerous)
--add-dir <PATH> · Add an extra writable directory to the sandbox

Model & Provider

Flag · Description
--oss · Use a local open-source provider (Ollama, LM Studio)
--provider <NAME> · Specify the model provider
--local-provider · Use a local model provider

Input & Output

Flag · Short · Description
--image <PATH> · -i · Attach image(s) to the prompt
--json · Output JSON Lines (JSONL) for scripting
--output-schema <PATH> · Request a response conforming to a JSON Schema
--search · Enable web search during execution
--no-alt-screen · Disable alternate screen mode for the TUI

Features & Session

Flag · Description
--enable <FEATURE> · Enable a feature flag for this session
--disable <FEATURE> · Disable a feature flag for this session
--last · Resume or fork the most recent session
--all · Show sessions from all directories (with resume)

All Commands

Command · Alias · Description
codex · Start an interactive TUI session
codex exec · codex e · Non-interactive execution
codex review · Code review mode
codex resume · Resume a previous session
codex fork · Fork a previous session
codex apply · codex a · Apply diffs and patches
codex login · Authenticate with OpenAI
codex logout · Clear authentication
codex mcp · Manage MCP servers (list, get, add, remove)
codex mcp-server · Run Codex as an MCP server
codex features · Manage feature flags (list, enable, disable)
codex completion · Generate shell completion scripts
codex cloud · Cloud task management
codex sandbox · codex debug · Sandbox debugging
codex app-server · Run app server mode

Pro Tips

Patterns, aliases, and workflows used by power users to get the most out of Codex CLI.

Workflow Patterns

Safe Exploration
Start with -s read-only for analysis, then switch to --full-auto when ready to implement.
Multi-Project
Use --cd apps/frontend --add-dir ../backend to coordinate changes across repos.
Fast Iteration
Set -a on-failure to auto-run everything and only stop when something breaks.
Private Local AI
Use --oss -m codellama:34b for sensitive codebases. All inference stays on-device.

Shell Aliases

~/.bashrc
# Quick codex with common settings
alias cx='codex --full-auto'
alias cxr='codex --sandbox read-only'
alias cxl='codex --oss -m codellama:34b'

# Auto-commit after Codex runs
cxdo() {
  codex exec --full-auto "$@" &&
    git add -A &&
    git commit -m "codex: $*"
}

CI/CD Recipes

CI/CD
# Pre-commit: check for issues
codex exec -s read-only "Review staged changes for bugs"

# Generate a commit message from the diff
msg=$(codex exec --json "Generate commit message from git diff --staged" | jq -r .message)
git commit -m "$msg"

# PR review automation
codex review --base main --json > review-results.json

# Release preparation
codex exec -a never -s workspace-write "Update changelog for v2.0"

Security Best Practices

Security Checklist
• Start reviews with --sandbox read-only
• Use --add-dir instead of danger-full-access
• Set approval policy appropriate for trust level
• Audit MCP servers before adding them
• Use profiles to enforce security settings per project
• Keep sensitive data out of prompts
• Clean up old sessions in ~/.codex/sessions/
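The "use profiles to enforce security settings" item can be made concrete. A hedged config sketch reusing the profile keys shown earlier in this guide; the `audit` profile name is arbitrary:

```toml
# ~/.codex/config.toml -- a locked-down profile for unfamiliar repos
[profiles.audit]
model = "gpt-4.1-mini"
sandbox = "read-only"
approval_policy = "untrusted"
```

Launching with codex -p audit then starts every session in read-only, prompt-heavy mode regardless of your defaults.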

Performance Tips

Speed
# Use a faster model for simple tasks
codex -m gpt-4.1-mini "Fix typo in README"

# Local model to avoid API latency
codex --oss -m codellama:13b "Quick formatting"

# Resume sessions instead of rebuilding context
codex resume --last