Deep technical map of OpenCode, the open-source terminal AI coding agent by the SST team. Client-server architecture, multi-provider LLM integration via Vercel AI SDK, MCP tool protocol, subagent orchestration, and a rich terminal UI.
OpenCode is an open-source terminal AI coding agent built by the SST team (the creators of SST, Ion, and Terminal). It runs as a client-server application where the backend handles LLM inference, tool execution, session persistence, and MCP servers, while multiple frontend clients (terminal TUI, desktop app, web, mobile) connect over HTTP + SSE.
The project is a TypeScript monorepo managed with Turbo and powered by Bun as the runtime. It uses the Vercel AI SDK for unified LLM provider access, pulling model metadata from models.dev to support 75+ providers without hardcoded integrations. The agent system features built-in agents (build, plan, explore, general) plus user-defined custom agents via configuration.
Built by terminal enthusiasts (neovim community roots). The TUI uses Ink (React for CLI) with 20+ bundled color themes from popular editors like Catppuccin, Dracula, Gruvbox, and Tokyo Night.
Not coupled to any single LLM vendor. Supports Anthropic, OpenAI, Google, local models, and 75+ providers via the models.dev registry. Switch models with a single keypress.
The Hono HTTP server exposes REST + SSE endpoints. This enables the terminal TUI, Electron desktop app, web client, and VS Code extension to all drive the same backend.
First-class Model Context Protocol support for extending the agent with external tool servers. Supports local (stdio) and remote (HTTP/SSE) transports with OAuth authentication.
Built-in Language Server Protocol client gives the agent real-time diagnostics, type information, and symbol resolution without relying on the LLM provider.
Seven built-in agents (build, plan, explore, general, compaction, title, summary) plus custom agents definable via opencode.json with per-agent model, prompt, tools, and permissions.
OpenCode follows a client-server architecture where the server process manages all state, LLM communication, and tool execution. Clients connect via HTTP REST endpoints and receive real-time updates through Server-Sent Events (SSE). The server is built on Hono running on Bun, with SQLite (via Drizzle ORM) for session persistence.
OpenCode is organized as a Turbo monorepo with 19 packages under packages/. The core logic lives in packages/opencode, while the other packages handle the desktop app, web frontend, SDK, documentation, plugins, and enterprise features.
| Package | Purpose | Key Tech |
|---|---|---|
| opencode | Core engine: agents, sessions, tools, providers, LLM, MCP, LSP, storage | Bun, Hono, Drizzle, Vercel AI SDK |
| sdk | TypeScript SDK for building custom clients and integrations | TypeScript |
| app | Web application frontend | Astro / React |
| console | CLI entry point and terminal TUI | Ink (React for CLI) |
| desktop-electron | Electron wrapper for desktop distribution | Electron |
| ui | Shared UI component library | React |
| plugin | Plugin system (Codex, Copilot integrations) | TypeScript |
| enterprise | Enterprise features (SSO, audit, team management) | TypeScript |
| identity | Authentication and identity management | OAuth |
| extensions/zed | Zed editor extension | Zed API |
The core engine (packages/opencode/src) contains 38 modules organized by domain. Each module is a self-contained subsystem with its own types, business logic, and SQL schemas where applicable. The engine is designed around an event bus pattern for loose coupling between modules.
| Module | Responsibility |
|---|---|
| agent/ | Agent definitions, built-in agents, prompt templates, agent generation |
| session/ | Session lifecycle, message history, LLM streaming, compaction, system prompts |
| provider/ | LLM provider abstraction, models.dev integration, auth, SDK transforms |
| tool/ | 20+ built-in tools (read, write, edit, bash, grep, glob, etc.) and tool registry |
| mcp/ | MCP client: stdio/HTTP transports, tool discovery, OAuth authentication |
| lsp/ | LSP client: multi-language server management, diagnostics, symbols |
| skill/ | Skill discovery and execution from local folders and URLs |
| permission/ | Declarative permission system (allow/deny/ask per tool per agent) |
| server/ | Hono HTTP server, REST routes, SSE streaming, CORS, auth middleware |
| bus/ | Event bus for decoupled inter-module communication |
| config/ | Configuration loading from opencode.json, schema validation |
| storage/ | SQLite via Drizzle ORM, schema migrations, session persistence |
| cli/ | CLI commands, TUI app (Ink/React), debug tools |
| plugin/ | Plugin system with Codex and GitHub Copilot integrations |
| snapshot/ | Git-based session snapshots for undo/revert |
| pty/ | Pseudo-terminal management for shell command execution |
| worktree/ | Git worktree management for parallel operations |
| file/ | File operations with safety checks |
| shell/ | Shell integration and command execution |
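The event bus that ties these modules together can be pictured as a small typed publish/subscribe layer. The sketch below is illustrative only: the event names and payload shapes are hypothetical, not OpenCode's actual bus/ definitions.

```typescript
// Minimal typed event-bus sketch. Event names and payload shapes here are
// hypothetical stand-ins for whatever OpenCode's bus/ module actually defines.
type Events = {
  "session.updated": { sessionID: string };
  "message.part": { sessionID: string; text: string };
};

type Handler<E> = (payload: E) => void;

class Bus {
  private handlers = new Map<keyof Events, Set<Handler<any>>>();

  subscribe<K extends keyof Events>(event: K, fn: Handler<Events[K]>): () => void {
    if (!this.handlers.has(event)) this.handlers.set(event, new Set());
    this.handlers.get(event)!.add(fn);
    // Return an unsubscribe function so modules can clean up after themselves.
    return () => this.handlers.get(event)!.delete(fn);
  }

  publish<K extends keyof Events>(event: K, payload: Events[K]): void {
    for (const fn of this.handlers.get(event) ?? []) fn(payload);
  }
}

// Modules stay decoupled: session/ publishes, server/ forwards over SSE.
const bus = new Bus();
const seen: string[] = [];
bus.subscribe("session.updated", (p) => seen.push(p.sessionID));
bus.publish("session.updated", { sessionID: "ses_123" });
```

The publisher never imports the subscriber, which is what lets the SSE endpoint forward any module's events without the modules knowing a server exists.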
The agent system is the orchestration layer that determines what the AI can do. Agents are defined with a name, mode (primary/subagent), model preference, system prompt, tool permissions, and generation parameters. Users switch between agents with Tab or invoke subagents via the task tool.
| Agent | Mode | Tools | Purpose |
|---|---|---|---|
| build | Primary | All tools (read, write, edit, bash, multiedit, task, plan, etc.) | Default agent with full development access. Can spawn subagents and enter plan mode. |
| plan | Primary | Read-only (read, grep, glob, ls, codesearch, websearch) | Read-only exploration with safety guardrails. Used for analysis before making changes. |
| explore | Subagent | grep, glob, bash, codesearch, read, ls | Fast agent for codebase exploration. Spawned by primary agents for search tasks. |
| general | Subagent | Most tools except todo operations | Multi-step complex task executor. Handles tasks requiring multiple tool calls. |
| compaction | Hidden | None (system) | Condenses long conversation context to stay within token limits. |
| title | Hidden | None (system) | Generates session titles from conversation content. |
| summary | Hidden | None (system) | Creates session summaries for the session list. |
Custom agents are defined in opencode.json under the agent key. Each custom agent can specify its own model, system prompt, temperature, max steps, tool permissions, and description. You can also override or disable any built-in agent.
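A custom agent entry in opencode.json might look like the following. The keys shown follow the fields described above but are illustrative; consult the published config schema for exact names.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "agent": {
    "reviewer": {
      "description": "Read-only code review agent",
      "mode": "subagent",
      "model": "anthropic/claude-sonnet-4",
      "prompt": "Review code for correctness and style. Do not modify files.",
      "temperature": 0.2,
      "tools": { "write": false, "edit": false, "bash": false }
    }
  }
}
```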
OpenCode uses the Vercel AI SDK (ai package) as its unified LLM interface. Rather than hardcoding provider integrations, it pulls model metadata from models.dev, an open registry of AI model providers and their capabilities. This gives it access to 75+ providers without maintaining per-provider code.
The provider system resolves models in this priority order: (1) Flag.OPENCODE_MODELS_PATH file override, (2) bundled models snapshot (built at compile time), (3) live fetch from models.dev API.
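That three-step fallback chain can be sketched as follows. The function shape, parameter names, and the models.dev endpoint comment are assumptions for illustration, not OpenCode's actual API.

```typescript
// Sketch of the three-step model metadata resolution described above.
// Names and signatures are illustrative, not OpenCode's real internals.
type ModelRegistry = Record<string, unknown>;

async function resolveRegistry(opts: {
  overridePath?: string;                        // Flag.OPENCODE_MODELS_PATH
  readFile: (path: string) => Promise<ModelRegistry | null>;
  bundledSnapshot: ModelRegistry | null;        // baked in at compile time
  fetchLive: () => Promise<ModelRegistry>;      // live fetch from models.dev
}): Promise<ModelRegistry> {
  if (opts.overridePath) {
    const local = await opts.readFile(opts.overridePath);
    if (local) return local;                    // (1) explicit file override wins
  }
  if (opts.bundledSnapshot) return opts.bundledSnapshot; // (2) offline-safe snapshot
  return opts.fetchLive();                      // (3) last resort: network fetch
}
```

Putting the bundled snapshot ahead of the live fetch means the CLI works offline and never blocks startup on a network call.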
The session module maintains provider-specific prompt templates to optimize for each model family:
The plugin/ module provides hooks for providers that require special integration. It currently ships with codex.ts (OpenAI Codex OAuth flow) and copilot.ts (GitHub Copilot authentication and parameter adjustments).
OpenCode ships with 20+ built-in tools that agents use to interact with the filesystem, execute code, search the web, and manage tasks. Each tool is defined as a TypeScript file paired with a .txt description file that serves as the tool's system prompt for the LLM. Tools are registered via the tool registry and filtered per-agent based on permissions.
| Tool | Description | Permission Key |
|---|---|---|
| read | Read file contents with optional line range | read |
| write | Create or overwrite entire files | edit |
| edit | Surgical line-level edits to existing files | edit |
| multiedit | Edit multiple files in a single tool call | edit |
| apply_patch | Apply unified diff patches to files | edit |
| bash | Execute shell commands with timeout and output capture | bash |
| grep | Regex pattern search across files (uses ripgrep) | grep |
| glob | File pattern matching for discovery | glob |
| ls | List directory contents | list |
| lsp | Query language servers for diagnostics, symbols, definitions | lsp |
| codesearch | Semantic code search across the project | codesearch |
| websearch | Search the web for information | websearch |
| webfetch | Fetch and extract content from URLs | webfetch |
| task | Spawn a subagent to handle a complex subtask | task |
| skill | Execute registered skills | skill |
| batch | Execute multiple operations in batch | bash |
| plan | Enter or exit plan mode (read-only thinking) | plan_enter / plan_exit |
| todo | Read and write todo items for task tracking | todoread / todowrite |
| question | Ask the user a clarifying question | question |
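To make the tool-definition pattern concrete, here is a hedged sketch of what a tool like read might look like: a TypeScript object whose description would be loaded from a sibling .txt file. The registration shape is hypothetical (OpenCode's real definitions use their own API, with schema validation for parameters), and the in-memory file map stands in for the real filesystem.

```typescript
// Illustrative tool-definition sketch. The object shape is hypothetical;
// the description string would come from a sibling read.txt file.
const files = new Map<string, string>([["demo.txt", "line1\nline2\nline3"]]); // stand-in filesystem

const readTool = {
  id: "read",
  description: "Read file contents with optional line range.", // from read.txt
  async execute(args: { filePath: string; offset?: number; limit?: number }): Promise<string> {
    const text = files.get(args.filePath);
    if (text === undefined) throw new Error(`file not found: ${args.filePath}`);
    const lines = text.split("\n");
    const start = args.offset ?? 0;
    const end = args.limit !== undefined ? start + args.limit : lines.length;
    // Prefix each line with its 1-based number, cat -n style.
    return lines.slice(start, end).map((l, i) => `${start + i + 1}: ${l}`).join("\n");
  },
};

const out = await readTool.execute({ filePath: "demo.txt", offset: 1, limit: 1 });
```

Pairing the prompt text with the implementation file keeps the LLM-facing description versioned alongside the code it describes.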
The Model Context Protocol (MCP) allows OpenCode to connect to external tool servers that expose additional capabilities. MCP servers are configured in opencode.json and can run as local processes (stdio transport) or remote services (HTTP with SSE). OpenCode includes OAuth support for authenticated remote MCP servers.
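A hedged example of declaring one local (stdio) and one remote MCP server in opencode.json follows. The key names and the specific server packages/URLs are illustrative; check the config schema and your server's docs for exact values.

```json
{
  "mcp": {
    "filesystem": {
      "type": "local",
      "command": ["bunx", "@modelcontextprotocol/server-filesystem", "."]
    },
    "issue-tracker": {
      "type": "remote",
      "url": "https://mcp.example.com/sse"
    }
  }
}
```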
Sessions are the core abstraction for a conversation between the user and the agent. Each session persists messages, tool calls, and metadata in SQLite via Drizzle ORM. The session module orchestrates the LLM inference loop, tool execution, context compaction, and git snapshots for undo/revert.
When conversation context grows too long, the hidden compaction agent automatically summarizes earlier messages. This keeps the conversation within token limits while preserving key context.
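The trigger logic for compaction can be sketched roughly as below. The token heuristic, threshold handling, and names are illustrative assumptions, not OpenCode's implementation; the summarize callback stands in for a call to the hidden compaction agent.

```typescript
// Rough compaction sketch: when estimated tokens exceed a limit, older
// messages are replaced by a summary. All names/thresholds are illustrative.
interface Message { role: "user" | "assistant" | "summary"; text: string }

const estimateTokens = (msgs: Message[]) =>
  msgs.reduce((n, m) => n + Math.ceil(m.text.length / 4), 0); // ~4 chars/token heuristic

function compact(
  history: Message[],
  limit: number,
  summarize: (msgs: Message[]) => string, // would invoke the compaction agent
  keepRecent = 2,                         // always preserve the newest turns verbatim
): Message[] {
  if (estimateTokens(history) <= limit || history.length <= keepRecent) return history;
  const old = history.slice(0, -keepRecent);
  const recent = history.slice(-keepRecent);
  return [{ role: "summary", text: summarize(old) }, ...recent];
}
```

Keeping the most recent turns verbatim matters: the model needs the exact text of the task in flight, while older exploration can safely be lossy.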
After each agent turn that modifies files, OpenCode creates a git snapshot. Users can revert to any previous snapshot, providing a safety net for multi-file edits.
The session/retry.ts and session/revert.ts modules enable retrying failed LLM calls and reverting file changes to a previous snapshot state.
System prompts are assembled from agent prompt + provider-specific template + user instructions + plugin transformations. Provider templates optimize for each model family.
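The assembly order described above can be sketched as a simple pipeline. The part names come from the text; the join logic and function shape are illustrative assumptions.

```typescript
// Sketch of system-prompt assembly: provider template + agent prompt +
// user instructions, then plugin transformations. Shape is illustrative.
interface PromptParts {
  agentPrompt: string;                     // from the agent definition
  providerTemplate?: string;               // per model family
  userInstructions?: string;               // e.g. project instruction files
  pluginTransforms?: ((p: string) => string)[];
}

function buildSystemPrompt(p: PromptParts): string {
  let prompt = [p.providerTemplate, p.agentPrompt, p.userInstructions]
    .filter((s): s is string => Boolean(s))
    .join("\n\n");
  // Plugins get the last word: each transform sees the fully assembled prompt.
  for (const transform of p.pluginTransforms ?? []) prompt = transform(prompt);
  return prompt;
}
```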
The terminal UI is built with Ink (React for CLI) and lives in cli/cmd/tui/. The TUI connects to the server as just another client.

Key TUI features:

- Component-based architecture with dialogs
- Prompt system with autocomplete and frecency-based history
- 20+ bundled color themes
- Keybinding management
OpenCode is configured via opencode.json (or .opencode/config.json) at the project root. The configuration schema covers model selection, agent customization, MCP servers, permissions, LSP servers, formatters, commands, skills, plugins, and experimental features.
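A small configuration example covering model selection and permissions follows. Key names mirror the areas listed above but are illustrative; the published schema referenced by $schema is authoritative.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-sonnet-4",
  "permission": {
    "edit": "ask",
    "webfetch": "allow",
    "bash": { "git push *": "ask", "*": "allow" }
  }
}
```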
This diagram traces a complete user interaction from typing a message through to receiving the final response, showing how all subsystems collaborate.
The REST API is organized into route groups: /session, /agent, /provider, /mcp, /config, /permission, /question, /pty, /file, /lsp, /skill, /tui, /global, and /experimental. Real-time updates flow via SSE at GET /event with 10-second heartbeats.
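A client can consume the GET /event stream with a plain SSE reader. In this sketch the payload shape ({ type, properties }) follows the event description in the text, while the parsing and transport wiring are illustrative; only `data:` lines are handled, which is enough for a stream of JSON events.

```typescript
// Minimal SSE consumption sketch for the GET /event stream. The payload
// shape is taken from the text; the wiring is an illustrative assumption.
type BusEvent = { type: string; properties: unknown };

// Pure incremental parser: feed raw chunks, get completed events back.
function makeSseParser() {
  let buffer = "";
  return (chunk: string): BusEvent[] => {
    buffer += chunk;
    const frames = buffer.split("\n\n"); // SSE frames end with a blank line
    buffer = frames.pop()!;              // keep the trailing partial frame
    const events: BusEvent[] = [];
    for (const frame of frames)
      for (const line of frame.split("\n"))
        if (line.startsWith("data: ")) events.push(JSON.parse(line.slice(6)));
    return events;
  };
}

// Wiring sketch: stream the HTTP body through the parser.
async function subscribe(baseUrl: string, onEvent: (e: BusEvent) => void) {
  const res = await fetch(`${baseUrl}/event`);
  const reader = res.body!.pipeThrough(new TextDecoderStream()).getReader();
  const parse = makeSseParser();
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    for (const e of parse(value)) onEvent(e);
  }
}
```

Because the parser is incremental, heartbeat frames and events split across network chunks are handled the same way: nothing is emitted until a full frame arrives.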
All state changes emit typed events on the global bus. The SSE endpoint subscribes to the bus and forwards events to connected clients with type + properties payloads.
SQLite database managed by Drizzle ORM with typed schemas. Stores sessions, messages, tool calls, shares, and metadata. Supports migrations via drizzle-kit.
The acp/ module provides agent-level and session-level access control policies. Works with the permission system to enforce tool access boundaries per agent context.
Git worktree support enables parallel operations on different branches without conflicts, useful for multi-session workflows on the same repository.