Architecture Maps

Continue

Open-source AI code assistant with deep IDE integration. Extensible context providers, pluggable model backends, slash commands, autocomplete, and agentic workflows across VS Code and JetBrains.

TypeScript 84% · Kotlin 4% · Apache 2.0 · 31.7k stars · 451 contributors · continuedev/continue
01

Project Overview

Continue is an open-source AI code assistant that connects any LLM to any IDE. It provides chat, autocomplete, inline editing, and agentic code generation -- all running inside VS Code, JetBrains, or the CLI with zero vendor lock-in.

The architecture follows a core + extensions pattern: a shared TypeScript core handles all AI logic (model routing, context retrieval, indexing, tool execution), while thin IDE-specific extensions implement the IDE interface to bridge editor APIs. A React-based GUI (webview) provides the chat/agent UI, communicating with the extension host via a typed message protocol.

Chat & Agent Mode

Conversational code assistance with plan-based autonomous execution. Streams responses from any configured model with full context awareness.

core/

Autocomplete

Real-time code suggestions using fill-in-the-middle (FIM) models. Debounced, token-budget-aware, with inline ghost text rendering.

core/autocomplete/

Inline Edit

Select code and describe changes in natural language. The model generates a diff that gets applied in-place with a review UI.

core/edit/

Context System

31+ built-in context providers surface files, codebase search, docs, git history, terminals, databases, and web content to the model.

core/context/

Tool Use

Agentic tool execution for file editing, terminal commands, web search, and MCP server integration. Policy-controlled access.

core/tools/

Code Indexing

Four-layer indexing pipeline: code snippets (tree-sitter), full-text search (SQLite FTS5), chunk embeddings, and vector search (LanceDB).

core/indexing/
02

High-Level Architecture

The system is split into four layers: the IDE extension (VS Code / JetBrains / CLI), the core engine (shared TypeScript), the GUI webview (React + Redux), and external model providers. The extension implements the IDE interface, the GUI communicates via a typed webview protocol, and the core orchestrates everything.

System Architecture Overview
```mermaid
graph TB
  subgraph IDEs["IDE Extensions"]
    VSC["VS Code Extension<br/>TypeScript"]
    JB["JetBrains Plugin<br/>Kotlin"]
    CLI["CLI Extension<br/>TypeScript"]
  end
  subgraph GUI["GUI Layer (React Webview)"]
    Chat["Chat Interface"]
    Agent["Agent Mode"]
    Edit["Inline Edit UI"]
    AC["Autocomplete Ghost Text"]
  end
  subgraph Core["Core Engine (TypeScript)"]
    direction TB
    Config["Config Manager"]
    LLM["LLM Abstraction"]
    CTX["Context System"]
    CMD["Commands & Tools"]
    IDX["Indexing Pipeline"]
    DIFF["Diff Engine"]
  end
  subgraph Providers["Model Providers"]
    Cloud["Cloud APIs<br/>Anthropic, OpenAI, Gemini"]
    Local["Local Models<br/>Ollama, LM Studio, llama.cpp"]
    Proxy["Proxy / Gateway<br/>OpenRouter, Azure, Bedrock"]
  end
  subgraph External["External Services"]
    MCP["MCP Servers"]
    Docs["Documentation Sites"]
    Git["Git / GitHub / GitLab"]
    DB["Databases"]
  end
  VSC <-->|"IDE Interface"| Core
  JB <-->|"IDE Interface"| Core
  CLI <-->|"IDE Interface"| Core
  GUI <-->|"Webview Protocol"| VSC
  GUI <-->|"JCEF Bridge"| JB
  Core -->|"API Calls"| Providers
  Core <-->|"Tools & Context"| External
```
03

Core Engine

The core/ package is the brain of Continue. It is a pure TypeScript library with no IDE dependencies, making it portable across VS Code, JetBrains, and CLI. It defines the key abstractions (ILLM, IContextProvider, IDE) and orchestrates all AI interactions.

Core Module Structure
```mermaid
graph LR
  subgraph core["core/"]
    direction TB
    IDX["index.d.ts<br/>Type Definitions"]
    CORE_TS["core.ts<br/>Entry Point"]
    subgraph modules["Modules"]
      direction TB
      llm["llm/<br/>Model Abstraction"]
      context["context/<br/>Context Providers"]
      commands["commands/<br/>Slash Commands"]
      tools["tools/<br/>Agent Tools"]
      autocomplete["autocomplete/<br/>FIM Completion"]
      indexing["indexing/<br/>Code Indexing"]
      edit["edit/<br/>Inline Editing"]
      diff["diff/<br/>Diff Processing"]
      config["config/<br/>YAML Config"]
      protocol["protocol/<br/>Message Types"]
    end
    subgraph support["Support"]
      direction TB
      promptFiles["promptFiles/"]
      continueServer["continueServer/"]
      controlPlane["control-plane/"]
      data["data/"]
    end
  end
  IDX --> modules
  CORE_TS --> modules
  modules --> support
```

Key architectural decisions:

  • IDE interface abstraction -- The IDE interface defines 40+ methods for file I/O, navigation, subprocess execution, git operations, and workspace access. Each extension implements this interface.
  • Protocol-driven communication -- The protocol/ module defines typed messages for all GUI-to-core interactions, ensuring type safety across the webview boundary.
  • Plugin architecture -- Context providers, LLM backends, and slash commands are all registered dynamically from configuration, supporting both built-in and custom implementations.
04

Context Provider System

Context providers are the mechanism by which Continue surfaces relevant information to the LLM. Users invoke them with @ mentions in chat (e.g., @file, @codebase, @docs). Each provider implements the IContextProvider interface with two key methods: getContextItems() to fetch content and loadSubmenuItems() for dynamic option lists.
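To make the shape of the contract concrete, here is a minimal custom provider sketch. The interface and item shapes below are simplified approximations of the descriptions above, not the exact types from core/; the TODO-scanning provider itself is purely hypothetical.

```typescript
// Simplified approximations of the interfaces described above.
interface ContextItem {
  name: string;
  description: string;
  content: string;
  uri?: string;
}

interface ContextSubmenuItem {
  id: string;
  title: string;
}

interface IContextProvider {
  getContextItems(query: string): Promise<ContextItem[]>;
  loadSubmenuItems(): Promise<ContextSubmenuItem[]>;
}

// Hypothetical provider that surfaces matching TODO comments
// from an in-memory map of path -> file contents.
class TodoContextProvider implements IContextProvider {
  constructor(private files: Map<string, string>) {}

  async getContextItems(query: string): Promise<ContextItem[]> {
    const items: ContextItem[] = [];
    for (const [path, text] of this.files) {
      const todos = text
        .split("\n")
        .filter((line) => line.includes("TODO") && line.includes(query));
      if (todos.length > 0) {
        items.push({
          name: `TODOs in ${path}`,
          description: `${todos.length} matching TODO comment(s)`,
          content: todos.join("\n"),
          uri: path,
        });
      }
    }
    return items;
  }

  // Powers the dropdown shown after typing the @ mention.
  async loadSubmenuItems(): Promise<ContextSubmenuItem[]> {
    return [...this.files.keys()].map((path) => ({ id: path, title: path }));
  }
}
```

Everything a provider returns flows to the model as plain ContextItem content, which is what keeps the system uniformly extensible.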

Context Provider Architecture
```mermaid
graph TB
  subgraph Interface["IContextProvider Interface"]
    desc["description(): ContextProviderDescription"]
    get["getContextItems(query, extras): ContextItem[]"]
    sub["loadSubmenuItems(): ContextSubmenuItem[]"]
  end
  subgraph Providers["Built-in Providers (31+)"]
    subgraph Files["File System"]
      file["@file<br/>FileContextProvider"]
      folder["@folder<br/>FolderContextProvider"]
      open["@open<br/>OpenFilesContextProvider"]
      current["@currentFile<br/>CurrentFileContextProvider"]
      tree["@tree<br/>FileTreeContextProvider"]
    end
    subgraph Code["Code Intelligence"]
      codebase["@codebase<br/>CodebaseContextProvider"]
      code["@code<br/>CodeContextProvider"]
      search["@search<br/>SearchContextProvider"]
      repomap["@repo-map<br/>RepoMapContextProvider"]
    end
    subgraph Dev["Development"]
      diff["@diff<br/>DiffContextProvider"]
      terminal["@terminal<br/>TerminalContextProvider"]
      problems["@problems<br/>ProblemsContextProvider"]
      debug["@debugLocals<br/>DebugLocalsProvider"]
      git["@commit<br/>GitCommitContextProvider"]
    end
    subgraph External["External Sources"]
      docs["@docs<br/>DocsContextProvider"]
      web["@web<br/>WebContextProvider"]
      url["@url<br/>URLContextProvider"]
      google["@google<br/>GoogleContextProvider"]
      clipboard["@clipboard<br/>ClipboardContextProvider"]
    end
    subgraph Integrations["Service Integrations"]
      github["@github<br/>GitHubIssuesContextProvider"]
      gitlab["@gitlab<br/>GitLabMergeRequestContextProvider"]
      jira["@jira<br/>JiraIssuesContextProvider"]
      postgres["@postgres<br/>PostgresContextProvider"]
      db["@database<br/>DatabaseContextProvider"]
      discord_ctx["@discord<br/>DiscordContextProvider"]
    end
    subgraph Extensibility["Extension Points"]
      mcp_ctx["@mcp<br/>MCPContextProvider"]
      http["@http<br/>HttpContextProvider"]
      custom["CustomContextProvider<br/>User-defined"]
    end
  end
  subgraph Output["ContextItem"]
    name_out["name: string"]
    desc_out["description: string"]
    content_out["content: string"]
    uri_out["uri?: FileURI"]
  end
  Interface --> Providers
  Providers --> Output
```

The retrieval subsystem (context/retrieval/) powers the @codebase provider, combining vector similarity search from the indexing pipeline with full-text search for hybrid retrieval. The @docs provider crawls and indexes documentation sites, storing embeddings for semantic search.

The MCP Context Provider bridges the Model Context Protocol, allowing any MCP server to surface context items. The HTTP Context Provider enables fetching context from arbitrary REST endpoints.

05

Model Abstraction Layer

Continue's LLM abstraction layer decouples AI capabilities from specific providers. The ILLM interface defines methods for chat, completion, FIM, embeddings, and reranking. Each provider implements this interface, and models are assigned to specific roles in the config.

Model Role Routing
```mermaid
graph TB
  subgraph Roles["Model Roles"]
    chat["Chat<br/>Conversational assistance"]
    edit_role["Edit<br/>Code transformations"]
    apply["Apply<br/>Targeted modifications"]
    ac["Autocomplete<br/>FIM suggestions"]
    embed["Embedding<br/>Vector representations"]
    rerank["Reranker<br/>Result ordering"]
  end
  subgraph ILLM["ILLM Interface"]
    streamChat["streamChat()"]
    complete["complete() / streamComplete()"]
    fim["streamFim()"]
    embedMethod["embed()"]
    rerankMethod["rerank()"]
    caps["supportsImages()<br/>supportsFim()<br/>supportsCompletions()"]
  end
  subgraph Providers["60+ Provider Implementations"]
    subgraph Frontier["Frontier APIs"]
      anthropic["Anthropic<br/>Claude"]
      openai["OpenAI<br/>GPT"]
      gemini["Gemini<br/>Google"]
      xai["xAI<br/>Grok"]
      mistral["Mistral"]
      deepseek["DeepSeek"]
    end
    subgraph Local["Local Inference"]
      ollama["Ollama"]
      lmstudio["LM Studio"]
      llamacpp["llama.cpp"]
      llamafile["Llamafile"]
      vllm["vLLM"]
      tgwebui["TextGenWebUI"]
    end
    subgraph Cloud["Cloud Platforms"]
      azure["Azure OpenAI"]
      bedrock["AWS Bedrock"]
      vertexai["Vertex AI"]
      sagemaker["SageMaker"]
      watsonx["WatsonX"]
    end
    subgraph Gateway["Gateways & Aggregators"]
      openrouter["OpenRouter"]
      together["Together AI"]
      fireworks["Fireworks AI"]
      groq["Groq"]
      replicate["Replicate"]
      nvidia["NVIDIA NIM"]
    end
  end
  subgraph Support["Supporting Infrastructure"]
    tokens["Token Counting<br/>tiktoken + llama tokenizer"]
    templates["Chat Templates<br/>Message formatting"]
    converters["Type Converters<br/>OpenAI format bridge"]
    autodetect["Auto-Detection<br/>Provider inference"]
    toolSupport["Tool Support<br/>Function calling"]
  end
  Roles --> ILLM
  ILLM --> Providers
  ILLM --> Support
```
| Role | Interface Method | Description |
| --- | --- | --- |
| Chat | streamChat() | Powers conversational interactions, agent mode, and code explanation |
| Edit | streamChat() | Handles complex code transformations and refactoring tasks |
| Apply | streamChat() | Executes targeted, surgical code modifications |
| Autocomplete | streamFim() | Real-time fill-in-the-middle code suggestions |
| Embedding | embed() | Transforms code into vectors for semantic search and indexing |
| Reranker | rerank() | Re-orders search results by semantic relevance |
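Role routing itself is straightforward to sketch: given the configured model list, look up the first model registered for a role. The types and model names below are illustrative, not the actual config schema.

```typescript
// Illustrative types -- not the exact core/ config schema.
type ModelRole = "chat" | "edit" | "apply" | "autocomplete" | "embed" | "rerank";

interface ModelConfig {
  name: string;
  provider: string;
  roles: ModelRole[];
}

// Pick the first configured model that serves the requested role.
function modelForRole(models: ModelConfig[], role: ModelRole): ModelConfig {
  const match = models.find((m) => m.roles.includes(role));
  if (!match) {
    throw new Error(`No model configured for role: ${role}`);
  }
  return match;
}

// Hypothetical assignments: one chat/edit model, a FIM model, an embedder.
const models: ModelConfig[] = [
  { name: "claude-sonnet", provider: "anthropic", roles: ["chat", "edit", "apply"] },
  { name: "codestral", provider: "mistral", roles: ["autocomplete"] },
  { name: "nomic-embed", provider: "ollama", roles: ["embed"] },
];
```

The point of the indirection is that callers ask for a *role*, never a provider, so swapping Claude for a local Ollama model is a one-line config change.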
06

Commands & Tools

Slash commands are user-invoked actions triggered by typing / in the chat input. They extend the assistant's capabilities beyond conversation. Tools are model-invoked actions that enable agentic behavior -- the LLM decides when and how to use them during execution.
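A tool, as the model sees it, is a function-calling schema plus a local implementation. The sketch below uses the common OpenAI-style schema shape, not necessarily Continue's exact definitions, and the validation helper is an illustrative stand-in for the kind of check parseArgs.ts performs.

```typescript
// Function-calling schema shape (OpenAI-style; illustrative).
interface ToolDefinition {
  name: string;
  description: string;
  parameters: {
    type: "object";
    properties: Record<string, { type: string; description: string }>;
    required: string[];
  };
}

// Hypothetical read-file tool definition.
const readFileTool: ToolDefinition = {
  name: "read_file",
  description: "Read the contents of a file in the workspace",
  parameters: {
    type: "object",
    properties: {
      filepath: { type: "string", description: "Workspace-relative path" },
    },
    required: ["filepath"],
  },
};

// Reject a model-generated call that is missing required arguments
// before it ever reaches the dispatcher; returns the missing names.
function validateArgs(
  def: ToolDefinition,
  args: Record<string, unknown>,
): string[] {
  return def.parameters.required.filter((name) => !(name in args));
}
```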

Commands & Tools Architecture
```mermaid
graph TB
  subgraph SlashCommands["Slash Commands (User-Invoked)"]
    direction TB
    prompt_file["/prompt<br/>promptFileSlashCommand"]
    prompt_block["/block<br/>promptBlockSlashCommand"]
    mcp_cmd["/mcp<br/>mcpSlashCommand"]
    custom_cmd["/custom<br/>customSlashCommand"]
    rule_cmd["/rule<br/>ruleBlockSlashCommand"]
    legacy["Legacy Built-ins<br/>built-in-legacy/"]
  end
  subgraph Tools["Agent Tools (Model-Invoked)"]
    direction TB
    subgraph Defs["Tool Definitions"]
      file_tools["Read / Write / Edit File"]
      terminal_tool["Run Terminal Command"]
      search_tool["Search Codebase"]
      web_tool["Search Web"]
      create_tool["Create New File"]
    end
    subgraph Impl["Implementation Layer"]
      callTool["callTool.ts<br/>Dispatch & execution"]
      parseArgs["parseArgs.ts<br/>Argument validation"]
      builtIn["builtIn.ts<br/>Built-in registry"]
    end
    subgraph Policy["Access Control"]
      policies["Tool Policies<br/>policies/"]
      overrides["Tool Overrides<br/>applyToolOverrides.ts"]
    end
  end
  subgraph MCP["MCP Integration"]
    mcp_tools["MCP Tool Bridge<br/>mcpToolName.ts"]
    mcp_server["External MCP Servers"]
  end
  User["User Input"] -->|"/ prefix"| SlashCommands
  LLM["LLM Decision"] -->|"function call"| Tools
  Tools --> MCP
  SlashCommands -->|"generates prompt"| LLM
```

The tool system uses a policy layer to control which tools the model can invoke. Policies can be set per-tool to allow, deny, or require confirmation. The MCP bridge maps external MCP server tools into Continue's tool namespace, enabling seamless integration with any MCP-compatible service.
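The allow/deny/confirm gate can be sketched as a small dispatcher wrapper. All names here are illustrative; the real policies/ layer is richer, but the control flow is the same: deny short-circuits, confirm awaits the user, allow runs immediately.

```typescript
// Illustrative policy and call types.
type ToolPolicy = "allow" | "deny" | "confirm";

interface ToolCall {
  name: string;
  args: Record<string, unknown>;
}

// Gate a model-requested tool call through its configured policy
// before handing it to the actual executor.
async function dispatchTool(
  call: ToolCall,
  policies: Record<string, ToolPolicy>,
  confirm: (call: ToolCall) => Promise<boolean>, // e.g. a GUI prompt
  run: (call: ToolCall) => Promise<string>,      // the real executor
): Promise<string> {
  const policy = policies[call.name] ?? "confirm"; // unknown tools: ask
  if (policy === "deny") {
    return `Tool "${call.name}" is disabled by policy.`;
  }
  if (policy === "confirm" && !(await confirm(call))) {
    return `User rejected "${call.name}".`;
  }
  return run(call);
}
```

Defaulting unknown tools to "confirm" rather than "allow" is the safe choice when MCP servers can inject arbitrary tools into the namespace.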

07

Indexing Pipeline

Continue indexes the entire codebase to power @codebase search, autocomplete context, and agent retrieval. The pipeline uses a content-addressed tagging system so switching branches only re-indexes changed files.
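The content-addressed scheme can be sketched as a planning step: hash each file, skip unchanged entries, tag content already known from another branch, and only compute indexes for genuinely new content. The operation names mirror the compute/addTag/removeTag vocabulary below, but the function and types are illustrative.

```typescript
import { createHash } from "node:crypto";

// Illustrative operation types mirroring compute / addTag / removeTag.
type Operation =
  | { kind: "compute"; path: string; hash: string }   // new content: index it
  | { kind: "addTag"; path: string; hash: string }    // known content: tag for this branch
  | { kind: "removeTag"; path: string; hash: string };// gone from this branch

function planRefresh(
  current: Map<string, string>,  // path -> file contents on this branch
  indexed: Map<string, string>,  // path -> previously indexed content hash
  knownHashes: Set<string>,      // every content hash ever indexed
): Operation[] {
  const ops: Operation[] = [];
  for (const [path, contents] of current) {
    const hash = createHash("sha256").update(contents).digest("hex");
    if (indexed.get(path) === hash) continue; // unchanged: no work at all
    ops.push(
      knownHashes.has(hash)
        ? { kind: "addTag", path, hash }      // seen before (e.g. other branch)
        : { kind: "compute", path, hash },    // genuinely new content
    );
  }
  for (const [path, hash] of indexed) {
    if (!current.has(path)) ops.push({ kind: "removeTag", path, hash });
  }
  return ops;
}
```

Because work is keyed by content hash rather than by file path and branch, a branch switch mostly produces cheap tag operations instead of re-embedding the codebase.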

Four-Layer Indexing Pipeline
```mermaid
graph TB
  subgraph Input["Source Input"]
    walkDir["walkDir.ts<br/>Directory traversal"]
    ignore["continueignore.ts<br/>.continueignore rules"]
    shouldIgnore["shouldIgnore.ts<br/>File filtering"]
  end
  subgraph Orchestrator["CodebaseIndexer.ts"]
    refresh["refreshIndex.ts<br/>Diff computation"]
    ops["Generate Operations<br/>compute / delete / addTag / removeTag"]
  end
  subgraph Indexes["Four Index Types"]
    snippets["CodeSnippetsIndex<br/>Tree-sitter extraction<br/>Functions, classes, methods"]
    fts["FullTextSearchCodebaseIndex<br/>SQLite FTS5<br/>Keyword search"]
    chunks["ChunkCodebaseIndex<br/>Recursive chunking<br/>Overlapping segments"]
    vectors["LanceDbIndex<br/>Vector embeddings<br/>Semantic similarity"]
  end
  subgraph Storage["Storage Layer"]
    sqlite["SQLite<br/>Snippets + FTS"]
    lance["LanceDB<br/>Vector store"]
  end
  subgraph Retrieval["Retrieval"]
    hybrid["Hybrid Search<br/>FTS + Vector + Reranking"]
  end
  Input --> Orchestrator
  Orchestrator --> Indexes
  snippets --> sqlite
  fts --> sqlite
  chunks --> vectors
  vectors --> lance
  Indexes --> Retrieval
```
| Index | Source | Storage | Use Case |
| --- | --- | --- | --- |
| CodeSnippetsIndex | Tree-sitter AST queries | SQLite | Function/class lookup, symbol navigation |
| FullTextSearchCodebaseIndex | Raw file content | SQLite FTS5 | Keyword search, grep-like queries |
| ChunkCodebaseIndex | Recursive code chunking | References | Embedding input preparation |
| LanceDbIndex | Chunk embeddings | LanceDB (vector) | Semantic similarity search |
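One common way to merge the keyword and vector result lists in a hybrid search is reciprocal-rank fusion; the retrieval code may combine scores differently, but this shows the general technique (k = 60 is the conventional constant from the RRF literature).

```typescript
// Reciprocal-rank fusion: each list contributes 1 / (k + rank + 1)
// to a document's score, so items ranked highly by BOTH keyword (FTS)
// and vector search rise to the top of the fused ordering.
function fuseRankings(
  ftsResults: string[],     // doc IDs ranked by full-text search
  vectorResults: string[],  // doc IDs ranked by vector similarity
  k = 60,
): string[] {
  const scores = new Map<string, number>();
  for (const results of [ftsResults, vectorResults]) {
    results.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```

A reranker model can then re-order the fused top-N, which is cheaper than reranking either full list.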
08

IDE Integration

Continue supports three IDE targets through the shared IDE interface. Each extension is a thin adapter that translates IDE-specific APIs into the common interface, allowing the core engine to remain completely IDE-agnostic.
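The adapter pattern is easiest to see with a toy implementation. The sketch below covers only a handful of the 40+ methods; the method names follow the groups in the diagram below but the trimmed interface and the in-memory class are illustrative, not the real `IDE` type.

```typescript
// A deliberately tiny slice of the IDE interface (illustrative).
interface IDE {
  readFile(path: string): Promise<string>;
  writeFile(path: string, contents: string): Promise<void>;
  getOpenFiles(): Promise<string[]>;
  getWorkspaceDirs(): Promise<string[]>;
}

// An in-memory adapter: the kind of thing a CLI/headless target or a
// test harness can supply so core/ runs without any editor present.
class InMemoryIde implements IDE {
  private files = new Map<string, string>();

  constructor(private workspaceDir: string) {}

  async readFile(path: string): Promise<string> {
    const contents = this.files.get(path);
    if (contents === undefined) throw new Error(`Not found: ${path}`);
    return contents;
  }

  async writeFile(path: string, contents: string): Promise<void> {
    this.files.set(path, contents);
  }

  async getOpenFiles(): Promise<string[]> {
    return [...this.files.keys()];
  }

  async getWorkspaceDirs(): Promise<string[]> {
    return [this.workspaceDir];
  }
}
```

Because core/ only ever calls through this interface, the VS Code, JetBrains, and CLI adapters can differ wildly in implementation without the core noticing.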

IDE Extension Architecture
```mermaid
graph TB
  subgraph IDE_IF["IDE Interface (40+ methods)"]
    direction LR
    files_if["File I/O<br/>readFile, writeFile,<br/>readRangeInFile"]
    nav_if["Navigation<br/>gotoDefinition,<br/>getReferences,<br/>getDocumentSymbols"]
    exec_if["Execution<br/>subprocess,<br/>runCommand"]
    git_if["Git<br/>getBranch,<br/>getRepoName,<br/>getGitRootPath"]
    workspace_if["Workspace<br/>getWorkspaceDirs,<br/>getOpenFiles,<br/>getCurrentFile"]
  end
  subgraph VSCode["VS Code Extension"]
    vsc_ide["VsCodeIde.ts<br/>IDE implementation"]
    vsc_activate["activation/<br/>Extension lifecycle"]
    vsc_webview["ContinueGUIWebviewViewProvider.ts"]
    vsc_commands["commands.ts<br/>Command palette"]
    vsc_autocomplete["autocomplete/<br/>IntelliSense provider"]
    vsc_protocol["webviewProtocol.ts<br/>Message bridge"]
    vsc_langserver["lang-server/<br/>LSP integration"]
    vsc_terminal["terminal/<br/>Terminal capture"]
  end
  subgraph JetBrains["JetBrains Plugin (Kotlin)"]
    jb_services["services/<br/>Application services"]
    jb_activities["activities/<br/>IDE lifecycle hooks"]
    jb_actions["actions/<br/>Menu actions"]
    jb_protocol["protocol/<br/>Core communication"]
    jb_proxy["proxy/<br/>Core proxy layer"]
    jb_browser["browser/<br/>JCEF webview"]
    jb_autocomplete["autocomplete/<br/>Inline completion"]
    jb_editor["editor/<br/>Editor integration"]
  end
  subgraph CLIExt["CLI Extension"]
    cli_ide["CLI IDE adapter"]
    cli_tui["TUI Mode<br/>Terminal interface"]
    cli_headless["Headless Mode<br/>CI/CD integration"]
  end
  IDE_IF --> vsc_ide
  IDE_IF --> jb_services
  IDE_IF --> cli_ide
```

VS Code Extension

Implements IDE via VsCodeIde.ts. Uses VS Code's webview API for the GUI panel. Registers commands, IntelliSense providers, diff viewers, and terminal listeners. Communication flows through webviewProtocol.ts.

extensions/vscode/

JetBrains Plugin

Written in Kotlin, following JetBrains platform conventions. Uses JCEF (Chromium Embedded) for the webview. Communicates with the TypeScript core through a proxy layer. Hooks into IDE lifecycle via activities and services.

extensions/intellij/

CLI Extension

Runs Continue outside any IDE. Supports both an interactive TUI mode for terminal-based chat and a headless mode for CI/CD pipelines and automated checks. Powered by the cn CLI tool.

extensions/cli/
09

GUI Layer

The GUI is a React application rendered inside the IDE's webview panel. It uses Redux for global state management and React Context for localized state. The same codebase is shared across VS Code and JetBrains (via JCEF).
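The "typed message protocol" can be sketched as a message map keyed by type, so the compiler rejects a payload that doesn't match its message kind on either side of the webview boundary. The message names and the tiny dispatcher below are illustrative, not the actual protocol/ definitions.

```typescript
// Illustrative message map: each key pairs a message type with its payload.
interface WebviewMessageMap {
  chatRequest: { sessionId: string; text: string };
  streamToken: { sessionId: string; token: string };
  setConfig: { modelName: string };
}

type MessageType = keyof WebviewMessageMap;

interface Envelope<T extends MessageType> {
  type: T;
  payload: WebviewMessageMap[T];
}

// Constructing a message with the wrong payload shape fails to compile.
function makeMessage<T extends MessageType>(
  type: T,
  payload: WebviewMessageMap[T],
): Envelope<T> {
  return { type, payload };
}

type Handler<T extends MessageType> = (payload: WebviewMessageMap[T]) => void;

// A minimal dispatcher with per-type handler registration.
class Protocol {
  private handlers = new Map<MessageType, Handler<any>[]>();

  on<T extends MessageType>(type: T, handler: Handler<T>): void {
    const list = this.handlers.get(type) ?? [];
    list.push(handler);
    this.handlers.set(type, list);
  }

  dispatch<T extends MessageType>(msg: Envelope<T>): void {
    for (const h of this.handlers.get(msg.type) ?? []) h(msg.payload);
  }
}
```

The same pattern works over `postMessage()` in VS Code and over the JCEF bridge in JetBrains, since only serialized envelopes cross the boundary.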

GUI Webview Architecture
```mermaid
graph TB
  subgraph WebviewHost["IDE Webview Host"]
    vsc_wv["VS Code Webview API"]
    jb_wv["JetBrains JCEF"]
  end
  subgraph GUI["gui/src/"]
    main["main.tsx<br/>Entry point"]
    app["App.tsx<br/>Root component"]
    subgraph State["State Management"]
      redux["redux/<br/>Store, reducers, actions"]
      context_react["context/<br/>React Context providers"]
      hooks["hooks/<br/>Custom React hooks"]
    end
    subgraph Pages["Pages"]
      chat_page["Chat Page"]
      agent_page["Agent Page"]
      history_page["History Page"]
      settings_page["Settings Page"]
    end
    subgraph Components["components/"]
      input["Chat Input<br/>@ mentions, / commands"]
      messages["Message Stream<br/>Markdown + code blocks"]
      toolbar["Toolbar<br/>Model selector, actions"]
      context_ui["Context Display<br/>Attached items"]
      diff_ui["Diff Viewer<br/>Apply/reject changes"]
    end
  end
  subgraph Protocol["Webview Protocol"]
    post["postMessage() / onMessage()"]
    types["Typed Message Definitions<br/>webviewProtocol.ts"]
  end
  subgraph Core["Core Engine"]
    handlers["Message Handlers"]
  end
  WebviewHost --> main
  main --> app
  app --> State
  app --> Pages
  Pages --> Components
  Components <-->|"User actions"| Protocol
  Protocol <-->|"Typed messages"| Core
```
10

Configuration System

Continue uses a layered YAML configuration system. The primary config file lives at ~/.continue/config.yaml (global) or .continue/config.yaml (workspace). Configuration covers model selection, context providers, rules, MCP servers, and tool permissions.

Configuration Loading
```mermaid
graph LR
  subgraph Sources["Config Sources"]
    global["~/.continue/config.yaml<br/>Global settings"]
    workspace[".continue/config.yaml<br/>Workspace overrides"]
    checks[".continue/checks/<br/>CI check definitions"]
    rules_dir[".continue/rules/<br/>Behavioral rules"]
    prompts_dir[".continue/prompts/<br/>Custom prompts"]
  end
  subgraph ConfigMgr["Config Manager (core/config/)"]
    load["Load & Merge"]
    validate["Validate Schema"]
    resolve["Resolve Model Providers"]
    register["Register Providers"]
  end
  subgraph Runtime["Runtime Configuration"]
    models_rt["Model Assignments<br/>chat, edit, autocomplete, embed"]
    ctx_rt["Active Context Providers"]
    tools_rt["Tool Policies"]
    rules_rt["Active Rules"]
    mcp_rt["MCP Server Connections"]
  end
  Sources --> ConfigMgr
  ConfigMgr --> Runtime
```

Configuration supports environment variable references for secrets (e.g., $ANTHROPIC_API_KEY), workspace-level overrides that merge with global settings, and Mission Control -- a web interface for managing configurations across teams.
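A config file tying these pieces together might look roughly like the following. This is an illustrative sketch only: the key names approximate the concepts above, and the model identifiers are placeholders; consult the published config.yaml schema for the exact fields.

```yaml
# Illustrative ~/.continue/config.yaml sketch -- field names and model
# identifiers are approximations, not the authoritative schema.
models:
  - name: Claude
    provider: anthropic
    model: claude-sonnet          # placeholder model id
    apiKey: $ANTHROPIC_API_KEY    # environment variable reference
    roles: [chat, edit, apply]
  - name: Local FIM
    provider: ollama
    model: qwen2.5-coder          # placeholder model id
    roles: [autocomplete]
context:
  - provider: codebase
  - provider: docs
```

A workspace-level .continue/config.yaml with the same structure would merge over these global settings.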

11

End-to-End Data Flow

This diagram traces a complete interaction from user input to streamed response, showing how all subsystems coordinate.

Chat Request Lifecycle
```mermaid
sequenceDiagram
  participant User
  participant GUI as GUI (React)
  participant Ext as IDE Extension
  participant Core as Core Engine
  participant Ctx as Context Providers
  participant Idx as Indexing Pipeline
  participant LLM as LLM Provider
  User->>GUI: Type message with @codebase
  GUI->>Ext: postMessage(chatRequest)
  Ext->>Core: handleChatRequest()
  Core->>Ctx: getContextItems("codebase", query)
  Ctx->>Idx: hybridSearch(query)
  Idx-->>Ctx: ranked code chunks
  Ctx-->>Core: ContextItem[]
  Core->>Core: Build prompt (system + context + history + user)
  Core->>LLM: streamChat(messages)
  loop Streaming
    LLM-->>Core: token chunk
    Core-->>Ext: stream update
    Ext-->>GUI: postMessage(streamToken)
    GUI-->>User: Render incrementally
  end
  Note over Core,LLM: If agent mode with tools:
  LLM-->>Core: tool_call(editFile, args)
  Core->>Ext: IDE.writeFile()
  Ext-->>Core: success
  Core->>LLM: tool_result
```
Autocomplete Flow
```mermaid
sequenceDiagram
  participant Editor as IDE Editor
  participant Ext as Extension
  participant Core as Core Engine
  participant LLM as FIM Model
  Editor->>Ext: Cursor position changed
  Ext->>Ext: Debounce (configurable)
  Ext->>Core: getCompletions(prefix, suffix, file)
  Core->>Core: Build FIM context (token budget)
  Core->>LLM: streamFim(prefix, suffix)
  LLM-->>Core: completion text
  Core-->>Ext: InlineCompletion
  Ext-->>Editor: Ghost text overlay
  Editor->>Ext: Tab to accept
  Ext->>Editor: Insert completion
```
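The debounce step in this flow does more than delay: rapid keystrokes collapse into a single in-flight request, and superseded requests settle with no result so stale ghost text is never shown. A minimal sketch, with an illustrative delay and function names:

```typescript
// Debounced FIM requests: each new keystroke cancels the pending timer,
// settles the superseded call with null, and schedules a fresh request.
function makeDebouncedCompleter(
  getCompletion: (prefix: string, suffix: string) => Promise<string>,
  delayMs = 150, // illustrative default; the real delay is configurable
) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  let cancelPrevious: (() => void) | undefined;

  return (prefix: string, suffix: string): Promise<string | null> =>
    new Promise((resolve) => {
      if (timer) clearTimeout(timer); // cancel the not-yet-sent request
      cancelPrevious?.();             // settle the superseded call with null
      cancelPrevious = () => resolve(null);
      timer = setTimeout(async () => {
        cancelPrevious = undefined;
        resolve(await getCompletion(prefix, suffix));
      }, delayMs);
    });
}
```

A production version also cancels the in-flight HTTP stream and enforces the token budget when assembling the prefix/suffix, but the collapse-and-supersede shape is the core of it.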