Architecture Maps

> OpenCode Architecture

A deep technical map of OpenCode, the open-source terminal AI coding agent from the SST team: client-server architecture, multi-provider LLM integration via the Vercel AI SDK, the MCP tool protocol, subagent orchestration, and a rich terminal UI.

TypeScript / Bun · 112k+ Stars · 75+ Model Providers · MCP Integration · Client/Server · Monorepo (Turbo)
01

Project Overview

OpenCode is an open-source terminal AI coding agent built by the SST team (the creators of SST, Ion, and Terminal). It runs as a client-server application where the backend handles LLM inference, tool execution, session persistence, and MCP servers, while multiple frontend clients (terminal TUI, desktop app, web, mobile) connect over HTTP + SSE.

The project is a TypeScript monorepo managed with Turbo and powered by Bun as the runtime. It uses the Vercel AI SDK for unified LLM provider access, pulling model metadata from models.dev to support 75+ providers without hardcoded integrations. The agent system features built-in agents (build, plan, explore, general) plus user-defined custom agents via configuration.

Terminal-Native

Built by terminal enthusiasts (neovim community roots). The TUI uses Ink (React for CLI) with 20+ bundled color themes from popular editors like Catppuccin, Dracula, Gruvbox, and Tokyo Night.

Provider-Agnostic

Not coupled to any single LLM vendor. Supports Anthropic, OpenAI, Google, local models, and 75+ providers via the models.dev registry. Switch models with a single keypress.

Client/Server Split

The Hono HTTP server exposes REST + SSE endpoints. This enables the terminal TUI, Electron desktop app, web client, and VS Code extension to all drive the same backend.

MCP Protocol

First-class Model Context Protocol support for extending the agent with external tool servers. Supports local (stdio) and remote (HTTP/SSE) transports with OAuth authentication.

LSP Integration

Built-in Language Server Protocol client gives the agent real-time diagnostics, type information, and symbol resolution without relying on the LLM provider.

Extensible Agents

Seven built-in agents (build, plan, explore, general, compaction, title, summary) plus custom agents definable via opencode.json with per-agent model, prompt, tools, and permissions.

02

High-Level Architecture

OpenCode follows a client-server architecture where the server process manages all state, LLM communication, and tool execution. Clients connect via HTTP REST endpoints and receive real-time updates through Server-Sent Events (SSE). The server is built on Hono running on Bun, with SQLite (via Drizzle ORM) for session persistence.
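Because SSE is plain text over HTTP, a client does not need the SDK to consume the event stream. A minimal frame parser gives the idea; this is a generic sketch of the SSE wire format, not OpenCode's actual client code, and the example event name is illustrative:

```typescript
// Minimal SSE frame parser (illustrative, not OpenCode's client internals).
// The server's event stream delivers frames of the form:
//   event: <name>\n
//   data: <json>\n
//   \n
type SseEvent = { event: string; data: string };

function parseSse(chunk: string): SseEvent[] {
  const events: SseEvent[] = [];
  // Frames are separated by a blank line.
  for (const frame of chunk.split("\n\n")) {
    let event = "message"; // SSE's default event name
    const data: string[] = [];
    for (const line of frame.split("\n")) {
      if (line.startsWith("event:")) event = line.slice(6).trim();
      else if (line.startsWith("data:")) data.push(line.slice(5).trim());
    }
    if (data.length > 0) events.push({ event, data: data.join("\n") });
  }
  return events;
}
```

A real client would feed chunks from a `fetch` response body reader through this parser and dispatch on the event name.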

Fig 2.1 — High-Level System Architecture
```mermaid
graph TB
    subgraph Clients["Clients"]
        TUI["Terminal TUI<br/>(Ink/React)"]
        Desktop["Desktop App<br/>(Electron)"]
        Web["Web Client"]
        VSCode["VS Code<br/>Extension"]
        SDK["TypeScript SDK"]
    end
    subgraph Server["Hono Server (Bun Runtime)"]
        Router["HTTP Router<br/>+ Middleware"]
        SSE["SSE Event<br/>Stream"]
        Auth["Auth<br/>(Basic + OAuth)"]
    end
    subgraph Engine["Core Engine"]
        SessionMgr["Session<br/>Manager"]
        AgentSys["Agent<br/>System"]
        ToolReg["Tool<br/>Registry"]
        LLMLayer["LLM Layer<br/>(Vercel AI SDK)"]
        PermSys["Permission<br/>System"]
    end
    subgraph External["External Services"]
        Providers["LLM Providers<br/>(75+ via models.dev)"]
        MCPServers["MCP Servers<br/>(stdio / HTTP)"]
        LSPServers["LSP Servers<br/>(per-language)"]
    end
    subgraph Storage["Storage"]
        SQLite["SQLite<br/>(Drizzle ORM)"]
        FileSystem["Project<br/>Filesystem"]
        Snapshots["Git<br/>Snapshots"]
    end
    TUI & Desktop & Web & VSCode & SDK -->|HTTP + SSE| Router
    Router --> Auth
    Router --> SSE
    Router --> SessionMgr
    SessionMgr --> AgentSys
    AgentSys --> LLMLayer
    AgentSys --> ToolReg
    AgentSys --> PermSys
    LLMLayer -->|streamText| Providers
    ToolReg -->|stdio/HTTP| MCPServers
    ToolReg -->|LSP protocol| LSPServers
    SessionMgr --> SQLite
    ToolReg --> FileSystem
    SessionMgr --> Snapshots
    style Clients fill:#0d1117,stroke:#22c55e,stroke-width:2px,color:#e2e8f0
    style Server fill:#0d1117,stroke:#06b6d4,stroke-width:2px,color:#e2e8f0
    style Engine fill:#0d1117,stroke:#f59e0b,stroke-width:2px,color:#e2e8f0
    style External fill:#0d1117,stroke:#a855f7,stroke-width:2px,color:#e2e8f0
    style Storage fill:#0d1117,stroke:#3b82f6,stroke-width:2px,color:#e2e8f0
```
03

Monorepo Structure

OpenCode is organized as a Turbo monorepo with 19 packages under packages/. The core logic lives in packages/opencode, while the other packages handle the desktop app, web frontend, SDK, documentation, plugins, and enterprise features.

Fig 3.1 — Monorepo Package Map
```mermaid
graph LR
    subgraph Core["Core"]
        OC["opencode<br/>(core engine)"]
        SDK2["sdk<br/>(TS client SDK)"]
        Util["util<br/>(shared helpers)"]
    end
    subgraph Frontend["Frontends"]
        App["app<br/>(web app)"]
        Console["console<br/>(CLI entry)"]
        DesktopE["desktop-electron"]
        Desktop2["desktop"]
        Web2["web"]
        UI["ui<br/>(component lib)"]
        Storybook["storybook"]
    end
    subgraph Infra["Infrastructure"]
        Plugin["plugin"]
        Enterprise["enterprise"]
        Identity["identity"]
        Containers["containers"]
        Function["function"]
        Slack["slack"]
    end
    subgraph Editor["Editor Extensions"]
        Zed["extensions/zed"]
        VSCode2["sdks/vscode"]
    end
    OC --> SDK2
    OC --> Util
    App --> OC
    Console --> OC
    DesktopE --> OC
    Web2 --> UI
    Plugin --> OC
    Enterprise --> OC
    Zed --> SDK2
    VSCode2 --> SDK2
    style Core fill:#0d1117,stroke:#22c55e,stroke-width:2px,color:#e2e8f0
    style Frontend fill:#0d1117,stroke:#06b6d4,stroke-width:2px,color:#e2e8f0
    style Infra fill:#0d1117,stroke:#f59e0b,stroke-width:2px,color:#e2e8f0
    style Editor fill:#0d1117,stroke:#a855f7,stroke-width:2px,color:#e2e8f0
```
| Package | Purpose | Key Tech |
| --- | --- | --- |
| opencode | Core engine: agents, sessions, tools, providers, LLM, MCP, LSP, storage | Bun, Hono, Drizzle, Vercel AI SDK |
| sdk | TypeScript SDK for building custom clients and integrations | TypeScript |
| app | Web application frontend | Astro / React |
| console | CLI entry point and terminal TUI | Ink (React for CLI) |
| desktop-electron | Electron wrapper for desktop distribution | Electron |
| ui | Shared UI component library | React |
| plugin | Plugin system (Codex, Copilot integrations) | TypeScript |
| enterprise | Enterprise features (SSO, audit, team management) | TypeScript |
| identity | Authentication and identity management | OAuth |
| extensions/zed | Zed editor extension | Zed API |
04

Core Engine (packages/opencode/src)

The core engine contains 38 modules organized by domain. Each module is a self-contained subsystem with its own types, business logic, and SQL schemas where applicable. The engine is designed around an event bus pattern for loose coupling between modules.

Fig 4.1 — Core Engine Module Map
```mermaid
graph TB
    subgraph AgentLayer["Agent Layer"]
        Agent["agent/"]
        Session["session/"]
        Permission["permission/"]
    end
    subgraph LLMLayer["LLM Layer"]
        Provider["provider/"]
        LLM["session/llm.ts"]
        Plugin3["plugin/"]
    end
    subgraph ToolLayer["Tool Layer"]
        Tool["tool/"]
        MCP2["mcp/"]
        LSP2["lsp/"]
        Skill["skill/"]
    end
    subgraph InfraLayer["Infrastructure"]
        Server2["server/"]
        CLI["cli/"]
        Config["config/"]
        Bus["bus/"]
        Storage2["storage/"]
    end
    subgraph UtilLayer["Utilities"]
        File["file/"]
        Shell["shell/"]
        PTY["pty/"]
        Snapshot2["snapshot/"]
        Worktree["worktree/"]
        Format["format/"]
    end
    Agent --> Session
    Agent --> Permission
    Session --> LLM
    LLM --> Provider
    LLM --> Plugin3
    Agent --> Tool
    Tool --> MCP2
    Tool --> LSP2
    Tool --> Skill
    Server2 --> Session
    Server2 --> Config
    Server2 --> Bus
    Session --> Storage2
    Tool --> File
    Tool --> Shell
    Tool --> PTY
    style AgentLayer fill:#0d1117,stroke:#22c55e,stroke-width:2px,color:#e2e8f0
    style LLMLayer fill:#0d1117,stroke:#f59e0b,stroke-width:2px,color:#e2e8f0
    style ToolLayer fill:#0d1117,stroke:#06b6d4,stroke-width:2px,color:#e2e8f0
    style InfraLayer fill:#0d1117,stroke:#a855f7,stroke-width:2px,color:#e2e8f0
    style UtilLayer fill:#0d1117,stroke:#3b82f6,stroke-width:2px,color:#e2e8f0
```
| Module | Responsibility |
| --- | --- |
| agent/ | Agent definitions, built-in agents, prompt templates, agent generation |
| session/ | Session lifecycle, message history, LLM streaming, compaction, system prompts |
| provider/ | LLM provider abstraction, models.dev integration, auth, SDK transforms |
| tool/ | 20+ built-in tools (read, write, edit, bash, grep, glob, etc.) and tool registry |
| mcp/ | MCP client: stdio/HTTP transports, tool discovery, OAuth authentication |
| lsp/ | LSP client: multi-language server management, diagnostics, symbols |
| skill/ | Skill discovery and execution from local folders and URLs |
| permission/ | Declarative permission system (allow/deny/ask per tool per agent) |
| server/ | Hono HTTP server, REST routes, SSE streaming, CORS, auth middleware |
| bus/ | Event bus for decoupled inter-module communication |
| config/ | Configuration loading from opencode.json, schema validation |
| storage/ | SQLite via Drizzle ORM, schema migrations, session persistence |
| cli/ | CLI commands, TUI app (Ink/React), debug tools |
| plugin/ | Plugin system with Codex and GitHub Copilot integrations |
| snapshot/ | Git-based session snapshots for undo/revert |
| pty/ | Pseudo-terminal management for shell command execution |
| worktree/ | Git worktree management for parallel operations |
| file/ | File operations with safety checks |
| shell/ | Shell integration and command execution |
05

Agent System

The agent system is the orchestration layer that determines what the AI can do. Agents are defined with a name, mode (primary/subagent), model preference, system prompt, tool permissions, and generation parameters. Users switch between agents with Tab or invoke subagents via the task tool.

Fig 5.1 — Agent Architecture
```mermaid
graph TB
    subgraph BuiltIn["Built-in Agents"]
        Build["BUILD<br/>Primary / Full access<br/>All tools enabled"]
        Plan["PLAN<br/>Primary / Read-only<br/>No edit tools"]
        Explore["EXPLORE<br/>Subagent / Fast<br/>grep, glob, bash, search"]
        General["GENERAL<br/>Subagent / Multi-step<br/>Complex searches"]
    end
    subgraph System["System Agents (Hidden)"]
        Compact["COMPACTION<br/>Context condensation"]
        Title["TITLE<br/>Session title gen"]
        Summary["SUMMARY<br/>Session summaries"]
    end
    subgraph Custom["Custom Agents (opencode.json)"]
        UserAgent["User-Defined<br/>Custom model, prompt,<br/>tools, permissions"]
    end
    Config2["opencode.json<br/>agent config"] --> UserAgent
    Config2 -->|override| Build
    Config2 -->|override| Plan
    Build -->|spawns via task tool| Explore
    Build -->|spawns via task tool| General
    Build -->|spawns via task tool| UserAgent
    Plan -->|spawns via task tool| Explore
    style BuiltIn fill:#0d1117,stroke:#22c55e,stroke-width:2px,color:#e2e8f0
    style System fill:#0d1117,stroke:#64748b,stroke-width:1px,color:#94a3b8
    style Custom fill:#0d1117,stroke:#f59e0b,stroke-width:2px,color:#e2e8f0
```
| Agent | Mode | Tools | Purpose |
| --- | --- | --- | --- |
| build | Primary | All tools (read, write, edit, bash, multiedit, task, plan, etc.) | Default agent with full development access. Can spawn subagents and enter plan mode. |
| plan | Primary | Read-only (read, grep, glob, ls, codesearch, websearch) | Read-only exploration with safety guardrails. Used for analysis before making changes. |
| explore | Subagent | grep, glob, bash, codesearch, read, ls | Fast agent for codebase exploration. Spawned by primary agents for search tasks. |
| general | Subagent | Most tools except todo operations | Multi-step complex task executor. Handles tasks requiring multiple tool calls. |
| compaction | Hidden | None (system) | Condenses long conversation context to stay within token limits. |
| title | Hidden | None (system) | Generates session titles from conversation content. |
| summary | Hidden | None (system) | Creates session summaries for the session list. |
Custom agents are defined in opencode.json under the agent key. Each custom agent can specify its own model, system prompt, temperature, max steps, tool permissions, and description. You can also override or disable any built-in agent.
06

Provider System

OpenCode uses the Vercel AI SDK (ai package) as its unified LLM interface. Rather than hardcoding provider integrations, it pulls model metadata from models.dev, an open registry of AI model providers and their capabilities. This gives it access to 75+ providers without maintaining per-provider code.

The provider system resolves models in this priority order: (1) Flag.OPENCODE_MODELS_PATH file override, (2) bundled models snapshot (built at compile time), (3) live fetch from models.dev API.
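The priority order amounts to a simple fallback chain: try each source in turn and take the first one that yields data. A sketch, with loader names that are illustrative rather than OpenCode's actual functions:

```typescript
// Three-tier model metadata resolution, as described above.
// Loader names are illustrative, not OpenCode's actual API.
type ModelRegistry = Record<string, { name: string }>;
type Loader = () => ModelRegistry | undefined;

function resolveModels(
  fromFlagPath: Loader,  // 1. OPENCODE_MODELS_PATH file override
  fromSnapshot: Loader,  // 2. snapshot bundled at compile time
  fromModelsDev: Loader, // 3. live fetch from the models.dev API
): ModelRegistry {
  for (const load of [fromFlagPath, fromSnapshot, fromModelsDev]) {
    const models = load();
    if (models) return models; // first source that yields data wins
  }
  return {};
}
```

The bundled snapshot means a fresh install can resolve models offline; the live fetch only matters when newer provider metadata is needed.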

Fig 6.1 — Provider Resolution Pipeline
```mermaid
graph LR
    subgraph Resolution["Model Resolution"]
        Flag["CLI Flag<br/>(OPENCODE_MODELS_PATH)"]
        Snapshot["Built-in Snapshot<br/>(models-snapshot)"]
        Remote["models.dev API<br/>(api.json)"]
    end
    subgraph SDK3["Vercel AI SDK"]
        Wrap["wrapLanguageModel()"]
        Stream["streamText()"]
    end
    subgraph Providers["Provider Registry"]
        direction TB
        Anthropic["Anthropic<br/>Claude 4, Sonnet, Haiku"]
        OpenAI2["OpenAI<br/>GPT-4.1, o3, o4-mini"]
        Google["Google<br/>Gemini 2.5 Pro/Flash"]
        Copilot["GitHub Copilot<br/>(via plugin)"]
        Local["Local Models<br/>Ollama, LM Studio"]
        More["75+ more via<br/>models.dev registry"]
    end
    subgraph SpecialHandling["Provider-Specific Logic"]
        CodexAuth["OpenAI Codex<br/>(OAuth, instructions field)"]
        CopilotPlugin["Copilot Plugin<br/>(omit maxOutputTokens)"]
        LiteLLM["LiteLLM Proxy<br/>(_noop tool compat)"]
        QwenPrompt["Qwen<br/>(custom prompt template)"]
    end
    Flag -->|priority 1| Wrap
    Snapshot -->|priority 2| Wrap
    Remote -->|priority 3| Wrap
    Wrap --> Stream
    Stream --> Providers
    Stream --> SpecialHandling
    style Resolution fill:#0d1117,stroke:#f59e0b,stroke-width:2px,color:#e2e8f0
    style SDK3 fill:#0d1117,stroke:#22c55e,stroke-width:2px,color:#e2e8f0
    style Providers fill:#0d1117,stroke:#06b6d4,stroke-width:2px,color:#e2e8f0
    style SpecialHandling fill:#0d1117,stroke:#a855f7,stroke-width:2px,color:#e2e8f0
```

The session module maintains provider-specific prompt templates to optimize for each model family:

anthropic.txt, anthropic-20250930.txt, gemini.txt, codex_header.txt, copilot-gpt-5.txt, qwen.txt, beast.txt, trinity.txt
Plugin system: The plugin/ module provides hooks for providers requiring special integration. Currently ships with codex.ts (OpenAI Codex OAuth flow) and copilot.ts (GitHub Copilot authentication and parameter adjustments).
07

Built-in Tools

OpenCode ships with 20+ built-in tools that agents use to interact with the filesystem, execute code, search the web, and manage tasks. Each tool is defined as a TypeScript file paired with a .txt description file that serves as the tool's system prompt for the LLM. Tools are registered via the tool registry and filtered per-agent based on permissions.
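The pairing of an executable, a prompt description, and a permission key can be sketched as below. This is a hypothetical shape to illustrate the pattern, not the actual Tool interface from packages/opencode/src/tool/:

```typescript
// Hypothetical tool definition + per-agent filtering (illustrative only).
type Permission = "allow" | "deny" | "ask";

interface Tool {
  id: string;
  permissionKey: string; // e.g. "edit" covers write/edit/multiedit
  description: string;   // in OpenCode, loaded from the paired .txt file
  execute(args: Record<string, unknown>): Promise<string> | string;
}

const registry = new Map<string, Tool>();

function register(tool: Tool) {
  registry.set(tool.id, tool);
}

// An agent only sees tools whose permission key is not denied for it.
function toolsFor(perms: Record<string, Permission>): Tool[] {
  return [...registry.values()].filter((t) => perms[t.permissionKey] !== "deny");
}

register({
  id: "read",
  permissionKey: "read",
  description: "Read file contents with an optional line range",
  execute: () => "file contents",
});
register({
  id: "write",
  permissionKey: "edit",
  description: "Create or overwrite entire files",
  execute: () => "ok",
});
```

Filtering by permission key rather than tool id is what lets one `"edit": "deny"` rule hide write, edit, multiedit, and apply_patch at once, as the permission table below shows.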

Fig 7.1 — Tool Categories
```mermaid
graph TB
    subgraph FileOps["File Operations"]
        Read2["read<br/>Read file contents"]
        Write2["write<br/>Create/overwrite files"]
        Edit2["edit<br/>Surgical line edits"]
        MultiEdit["multiedit<br/>Edit multiple files"]
        ApplyPatch["apply_patch<br/>Apply unified diffs"]
        LS["ls<br/>List directories"]
    end
    subgraph Search["Search & Analysis"]
        Grep2["grep<br/>Pattern search (ripgrep)"]
        Glob2["glob<br/>File pattern matching"]
        CodeSearch["codesearch<br/>Semantic code search"]
        LSP3["lsp<br/>Language server queries"]
        WebSearch2["websearch<br/>Web search"]
        WebFetch2["webfetch<br/>Fetch URL content"]
    end
    subgraph Execution["Execution"]
        Bash2["bash<br/>Shell commands"]
        Batch2["batch<br/>Batch operations"]
        Task2["task<br/>Spawn subagent"]
        SkillTool["skill<br/>Run skills"]
    end
    subgraph Planning["Planning & Control"]
        Plan2["plan<br/>Enter/exit plan mode"]
        Todo2["todo<br/>Read/write todos"]
        Question2["question<br/>Ask user questions"]
    end
    style FileOps fill:#0d1117,stroke:#22c55e,stroke-width:2px,color:#e2e8f0
    style Search fill:#0d1117,stroke:#06b6d4,stroke-width:2px,color:#e2e8f0
    style Execution fill:#0d1117,stroke:#f59e0b,stroke-width:2px,color:#e2e8f0
    style Planning fill:#0d1117,stroke:#a855f7,stroke-width:2px,color:#e2e8f0
```
| Tool | Description | Permission Key |
| --- | --- | --- |
| read | Read file contents with optional line range | read |
| write | Create or overwrite entire files | edit |
| edit | Surgical line-level edits to existing files | edit |
| multiedit | Edit multiple files in a single tool call | edit |
| apply_patch | Apply unified diff patches to files | edit |
| bash | Execute shell commands with timeout and output capture | bash |
| grep | Regex pattern search across files (uses ripgrep) | grep |
| glob | File pattern matching for discovery | glob |
| ls | List directory contents | list |
| lsp | Query language servers for diagnostics, symbols, definitions | lsp |
| codesearch | Semantic code search across the project | codesearch |
| websearch | Search the web for information | websearch |
| webfetch | Fetch and extract content from URLs | webfetch |
| task | Spawn a subagent to handle a complex subtask | task |
| skill | Execute registered skills | skill |
| batch | Execute multiple operations in batch | bash |
| plan | Enter or exit plan mode (read-only thinking) | plan_enter / plan_exit |
| todo | Read and write todo items for task tracking | todoread / todowrite |
| question | Ask the user a clarifying question | question |
08

MCP Integration

The Model Context Protocol (MCP) allows OpenCode to connect to external tool servers that expose additional capabilities. MCP servers are configured in opencode.json and can run as local processes (stdio transport) or remote services (HTTP with SSE). OpenCode includes OAuth support for authenticated remote MCP servers.

Fig 8.1 — MCP Architecture
```mermaid
graph LR
    subgraph OpenCodeProcess["OpenCode Process"]
        MCPClient["MCP Client<br/>(mcp/index.ts)"]
        MCPAuth["MCP OAuth<br/>(mcp/auth.ts)"]
        MCPCallback["OAuth Callback<br/>(mcp/oauth-callback.ts)"]
        ToolReg2["Tool Registry"]
    end
    subgraph LocalMCP["Local MCP Servers (stdio)"]
        FS["Filesystem<br/>Server"]
        Git2["Git<br/>Server"]
        DB["Database<br/>Server"]
        Custom2["Custom<br/>Servers"]
    end
    subgraph RemoteMCP["Remote MCP Servers (HTTP)"]
        Cloud["Cloud API<br/>Server"]
        SaaS["SaaS Tool<br/>Server"]
        Auth2["OAuth-Protected<br/>Server"]
    end
    MCPClient -->|spawn process + stdio| LocalMCP
    MCPClient -->|HTTP + SSE| RemoteMCP
    MCPAuth --> MCPCallback
    MCPAuth --> Auth2
    MCPClient --> ToolReg2
    style OpenCodeProcess fill:#0d1117,stroke:#22c55e,stroke-width:2px,color:#e2e8f0
    style LocalMCP fill:#0d1117,stroke:#06b6d4,stroke-width:2px,color:#e2e8f0
    style RemoteMCP fill:#0d1117,stroke:#f59e0b,stroke-width:2px,color:#e2e8f0
```
```jsonc
// opencode.json - MCP server configuration
{
  "mcp": {
    "my-local-server": {
      "type": "local",
      "command": ["npx", "my-mcp-server"],
      "environment": { "API_KEY": "..." },
      "enabled": true,
      "timeout": 30000
    },
    "my-remote-server": {
      "type": "remote",
      "url": "https://api.example.com/mcp",
      "headers": { "Authorization": "Bearer ..." },
      "oauth": { ... }
    }
  }
}
```
09

Session Lifecycle

Sessions are the core abstraction for a conversation between the user and the agent. Each session persists messages, tool calls, and metadata in SQLite via Drizzle ORM. The session module orchestrates the LLM inference loop, tool execution, context compaction, and git snapshots for undo/revert.

Fig 9.1 — Session Inference Loop
```mermaid
sequenceDiagram
    participant User
    participant TUI as Terminal UI
    participant Server as Hono Server
    participant Session as Session Manager
    participant LLM as LLM Layer
    participant Provider as AI Provider
    participant Tools as Tool Registry
    participant FS as Filesystem
    User->>TUI: Type message
    TUI->>Server: POST /session/chat
    Server->>Session: Create message
    loop Agent Loop (max steps)
        Session->>LLM: Build prompt + history
        LLM->>Provider: streamText()
        Provider-->>LLM: Stream tokens
        LLM-->>Server: SSE events
        Server-->>TUI: Stream to UI
        alt Tool Call Requested
            LLM->>Tools: Execute tool
            Tools->>FS: File/shell operation
            FS-->>Tools: Result
            Tools-->>LLM: Tool result
            Note over Session: Continue loop
        end
        alt Context Too Long
            Session->>Session: Trigger compaction
            Note over Session: Compaction agent<br/>summarizes context
        end
    end
    Session->>Session: Generate title/summary
    Session->>Session: Create git snapshot
    Server-->>TUI: Final response
    TUI-->>User: Display result
```

Compaction

When conversation context grows too long, the hidden compaction agent automatically summarizes earlier messages. This keeps the conversation within token limits while preserving key context.
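The trigger logic can be sketched as follows. The 4-characters-per-token estimate, the budget, and all names here are assumptions for illustration; the `summarize` callback stands in for the hidden compaction agent, and OpenCode's real heuristics will differ:

```typescript
// Illustrative compaction check (not OpenCode's actual implementation).
interface Message { role: "user" | "assistant"; content: string }

// Rough rule of thumb: ~4 characters per token.
const estimateTokens = (msgs: Message[]) =>
  Math.ceil(msgs.reduce((n, m) => n + m.content.length, 0) / 4);

function compact(
  history: Message[],
  budget: number,
  summarize: (older: Message[]) => string, // stands in for the compaction agent
  keepRecent = 4,
): Message[] {
  if (estimateTokens(history) <= budget || history.length <= keepRecent) {
    return history; // under budget: nothing to do
  }
  const older = history.slice(0, -keepRecent);
  const recent = history.slice(-keepRecent);
  // Replace the older messages with a single summary, keep recent turns verbatim.
  return [{ role: "assistant", content: `[Summary] ${summarize(older)}` }, ...recent];
}
```

Keeping the most recent turns verbatim is the important design choice: the model loses fidelity on old context but never on the exchange it is currently working in.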

Snapshots

After each agent turn that modifies files, OpenCode creates a git snapshot. Users can revert to any previous snapshot, providing a safety net for multi-file edits.

Retry / Revert

The session/retry.ts and session/revert.ts modules enable retrying failed LLM calls and reverting file changes to a previous snapshot state.

System Prompts

System prompts are assembled from agent prompt + provider-specific template + user instructions + plugin transformations. Provider templates optimize for each model family.

10

Terminal UI

The terminal UI is built with Ink (React for CLI) and lives in cli/cmd/tui/. It features a component-based architecture with dialogs, a prompt system with autocomplete and frecency-based history, 20+ color themes, and keybinding management. The TUI connects to the server as just another client.

Fig 10.1 — TUI Component Architecture
```mermaid
graph TB
    subgraph App["app.tsx (Root)"]
        Router["Route Context"]
        Theme["Theme Context"]
        SDK4["SDK Context"]
        Keybind["Keybind Context"]
    end
    subgraph Dialogs["Dialog Components"]
        DAgent["dialog-agent"]
        DModel["dialog-model"]
        DProvider["dialog-provider"]
        DMCP["dialog-mcp"]
        DSession["dialog-session-list"]
        DSkill["dialog-skill"]
        DCommand["dialog-command"]
        DTheme["dialog-theme-list"]
        DStash["dialog-stash"]
        DStatus["dialog-status"]
    end
    subgraph Prompt["Prompt System"]
        PromptIdx["prompt/index"]
        Autocomplete["autocomplete"]
        Frecency["frecency"]
        History["history"]
        Stash["stash"]
    end
    subgraph Contexts["Context Providers"]
        ArgsCtx["args"]
        ExitCtx["exit"]
        HelperCtx["helper"]
        KVCtx["kv"]
        LocalCtx["local"]
        SyncCtx["sync"]
        DirCtx["directory"]
    end
    subgraph Themes["20+ Bundled Themes"]
        Catppuccin["catppuccin"]
        Dracula["dracula"]
        Gruvbox["gruvbox"]
        TokyoNight["tokyo-night"]
        Nord["nord"]
        Solarized["solarized"]
        MoreThemes["aura, ayu, carbonfox,<br/>cobalt2, cursor, everforest,<br/>flexoki, github, monokai..."]
    end
    App --> Dialogs
    App --> Prompt
    App --> Contexts
    Theme --> Themes
    style App fill:#0d1117,stroke:#22c55e,stroke-width:2px,color:#e2e8f0
    style Dialogs fill:#0d1117,stroke:#06b6d4,stroke-width:2px,color:#e2e8f0
    style Prompt fill:#0d1117,stroke:#f59e0b,stroke-width:2px,color:#e2e8f0
    style Contexts fill:#0d1117,stroke:#a855f7,stroke-width:2px,color:#e2e8f0
    style Themes fill:#0d1117,stroke:#3b82f6,stroke-width:2px,color:#e2e8f0
```

Key TUI features:

  • Frecency-based autocomplete — Commands and file paths ranked by frequency + recency
  • Stash system — Save and recall message drafts
  • Agent/model switching — Quick dialogs to change agent or model mid-session
  • Session management — List, rename, and switch between sessions
  • Theme engine — 20+ themes loaded from JSON configs, hot-swappable
  • Custom keybindings — Configurable keybind context with vim-style defaults
  • MCP status dialog — Monitor connected MCP servers and their tools
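The "frecency" idea, frequency weighted by recency, is commonly implemented as an exponential decay over time since last use. A generic sketch of that formula (not OpenCode's actual ranking code; the one-week half-life is an assumption):

```typescript
// Generic frecency scoring: usage count decayed by age (illustrative only).
interface Entry { uses: number; lastUsedMs: number }

function frecency(e: Entry, nowMs: number, halfLifeMs = 7 * 24 * 3600 * 1000): number {
  const age = nowMs - e.lastUsedMs;
  const decay = Math.pow(0.5, age / halfLifeMs); // score halves every week
  return e.uses * decay;
}

function rank(entries: Map<string, Entry>, nowMs: number): string[] {
  return [...entries.entries()]
    .sort((a, b) => frecency(b[1], nowMs) - frecency(a[1], nowMs))
    .map(([key]) => key);
}
```

With a one-week half-life, a file opened three times today outranks a file opened ten times a month ago, which matches the intuition that autocomplete should favor what you are working on now.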
11

Configuration System

OpenCode is configured via opencode.json (or .opencode/config.json) at the project root. The configuration schema covers model selection, agent customization, MCP servers, permissions, LSP servers, formatters, commands, skills, plugins, and experimental features.

Fig 11.1 — Configuration Schema
```mermaid
graph TB
    subgraph ConfigRoot["opencode.json"]
        Model["model<br/>provider/model-id"]
        SmallModel["small_model<br/>fast model for aux tasks"]
        DefaultAgent["default_agent"]
        Username["username"]
        LogLevel["logLevel"]
    end
    subgraph AgentConfig["agent: { }"]
        AgentModel["model"]
        AgentPrompt["prompt"]
        AgentTemp["temperature / top_p"]
        AgentSteps["steps (max iterations)"]
        AgentPerm["permission"]
        AgentMode["mode (primary/subagent)"]
        AgentDisable["disable"]
    end
    subgraph ProviderConfig["provider: { }"]
        Whitelist["whitelist / blacklist"]
        Models2["models (per-model config)"]
        Options["options (apiKey, baseURL)"]
    end
    subgraph MCPConfig["mcp: { }"]
        LocalType["type: local"]
        RemoteType["type: remote"]
        MCPCmd["command, environment"]
        MCPUrl["url, headers, oauth"]
    end
    subgraph PermConfig["permission: { }"]
        Allow["allow"]
        Deny["deny"]
        Ask["ask"]
    end
    subgraph Other["Other Config"]
        LSPConfig["lsp: { }"]
        Formatter["formatter: { }"]
        Commands["command: { }"]
        Skills["skills: [ ]"]
        Plugins["plugin: [ ]"]
        Watcher["watcher: { }"]
        ServerConfig["server: { }"]
        Experimental["experimental: { }"]
    end
    ConfigRoot --> AgentConfig
    ConfigRoot --> ProviderConfig
    ConfigRoot --> MCPConfig
    ConfigRoot --> PermConfig
    ConfigRoot --> Other
    style ConfigRoot fill:#0d1117,stroke:#22c55e,stroke-width:2px,color:#e2e8f0
    style AgentConfig fill:#0d1117,stroke:#06b6d4,stroke-width:2px,color:#e2e8f0
    style ProviderConfig fill:#0d1117,stroke:#f59e0b,stroke-width:2px,color:#e2e8f0
    style MCPConfig fill:#0d1117,stroke:#a855f7,stroke-width:2px,color:#e2e8f0
    style PermConfig fill:#0d1117,stroke:#ec4899,stroke-width:2px,color:#e2e8f0
    style Other fill:#0d1117,stroke:#3b82f6,stroke-width:2px,color:#e2e8f0
```
```jsonc
// Example opencode.json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-sonnet-4-20250514",
  "small_model": "anthropic/claude-haiku-3-5",
  "agent": {
    "my-reviewer": {
      "model": "openai/gpt-4.1",
      "prompt": "You are a code reviewer...",
      "mode": "subagent",
      "permission": { "edit": "deny" }
    }
  },
  "permission": { "bash": "ask", "edit": "allow" }
}
```
12

End-to-End Data Flow

This diagram traces a complete user interaction from typing a message through to receiving the final response, showing how all subsystems collaborate.

Fig 12.1 — Complete Request Flow
```mermaid
graph TB
    Input["User Input<br/>(TUI prompt)"] --> Parse["Parse Command<br/>(slash commands, @mentions)"]
    Parse --> Session2["Session Manager<br/>(create/resume session)"]
    Session2 --> AgentResolve["Resolve Agent<br/>(build, plan, or custom)"]
    AgentResolve --> PermCheck["Permission Check<br/>(tool allowlist)"]
    PermCheck --> PromptBuild["Build System Prompt<br/>(agent + provider + user + plugins)"]
    PromptBuild --> ToolResolve["Resolve Available Tools<br/>(built-in + MCP + skills)"]
    ToolResolve --> LLMCall["LLM.stream()<br/>(Vercel AI SDK streamText)"]
    LLMCall --> StreamTokens["Stream Tokens<br/>(SSE to client)"]
    LLMCall --> ToolCall["Tool Call?"]
    ToolCall -->|Yes| ToolExec["Execute Tool<br/>(permission gate)"]
    ToolExec --> ToolResult["Tool Result"]
    ToolResult --> LLMCall
    ToolCall -->|No| Done["Response Complete"]
    StreamTokens --> Display["Render in TUI<br/>(markdown + syntax highlighting)"]
    Done --> Persist["Persist to SQLite<br/>(messages + tool calls)"]
    Done --> Snapshot3["Create Git Snapshot"]
    Done --> TitleGen["Generate Title<br/>(hidden agent)"]
    Done --> EventBus["Emit Bus Events"]
    EventBus --> SSE2["SSE to All Clients"]
    style Input fill:#0d1117,stroke:#22c55e,stroke-width:2px,color:#e2e8f0
    style LLMCall fill:#0d1117,stroke:#f59e0b,stroke-width:2px,color:#e2e8f0
    style Done fill:#0d1117,stroke:#06b6d4,stroke-width:2px,color:#e2e8f0
    style ToolExec fill:#0d1117,stroke:#a855f7,stroke-width:2px,color:#e2e8f0
```
Server API Endpoints: The Hono server exposes subrouted endpoints for each domain: /session, /agent, /provider, /mcp, /config, /permission, /question, /pty, /file, /lsp, /skill, /tui, /global, /experimental. Real-time updates flow via SSE at GET /event with 10-second heartbeats.

Event Bus

All state changes emit typed events on the global bus. The SSE endpoint subscribes to the bus and forwards events to connected clients with type + properties payloads.
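A typed publish/subscribe bus of this kind fits in a few lines. This is a minimal sketch of the pattern; the event names and payload shapes below are illustrative, not OpenCode's real event catalog:

```typescript
// Minimal typed event bus (illustrative event names and payloads).
type Events = {
  "session.updated": { sessionID: string };
  "message.part": { sessionID: string; text: string };
};

type Handler<K extends keyof Events> = (properties: Events[K]) => void;

const subscribers = new Map<keyof Events, Set<Handler<any>>>();

function subscribe<K extends keyof Events>(type: K, fn: Handler<K>): () => void {
  const set = subscribers.get(type) ?? new Set<Handler<any>>();
  set.add(fn);
  subscribers.set(type, set);
  return () => { set.delete(fn); }; // returned closure unsubscribes
}

function publish<K extends keyof Events>(type: K, properties: Events[K]) {
  // An SSE endpoint would forward {type, properties} to connected clients here.
  subscribers.get(type)?.forEach((fn) => fn(properties));
}
```

The payoff of typing the event map is that a subscriber to `"message.part"` gets a correctly typed payload at compile time, so adding a field to an event breaks only the handlers that need updating.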

Storage Layer

SQLite database managed by Drizzle ORM with typed schemas. Stores sessions, messages, tool calls, shares, and metadata. Supports migrations via drizzle-kit.

ACP (Agent Client Protocol)

The acp/ module implements the Agent Client Protocol, a JSON-RPC interface that lets external editors (such as Zed) drive OpenCode sessions. Tool access within ACP sessions is still gated by the permission system per agent context.

Worktree

Git worktree support enables parallel operations on different branches without conflicts, useful for multi-session workflows on the same repository.
