Architecture Maps

Docker MCP Gateway

Open-source MCP server orchestration platform. Container-based isolation, secrets management, call tracing, and a centralized proxy for AI tool access via the Model Context Protocol.

Go 1.24+ · Docker CLI Plugin · MCP 2025-11-25 · OCI Distribution · MIT License
01 · System Overview

Docker MCP Gateway is a Docker CLI plugin (docker mcp) that manages the lifecycle of Model Context Protocol servers running in isolated Docker containers. It acts as a centralized proxy, aggregating tools from multiple MCP servers behind a single endpoint that AI clients connect to.

30+ Go packages · 3 transport modes · 4 server reference types · 1 CLI binary
High-Level Architecture
graph TB
  subgraph Clients["AI Clients"]
    C1["Claude Desktop"]
    C2["VS Code / Cursor"]
    C3["Custom MCP Client"]
  end
  subgraph Gateway["Docker MCP Gateway"]
    GW["Gateway Server (Protocol Router)"]
    PM["Profile Manager"]
    TD["Tool Discovery & Aggregation"]
    SM["Secrets Manager"]
    INT["Interceptors (Logging / Tracing)"]
  end
  subgraph Containers["Docker Engine"]
    S1["MCP Server A (Container)"]
    S2["MCP Server B (Container)"]
    S3["MCP Server C (Container)"]
  end
  subgraph External["External Services"]
    CAT["OCI Registry (Catalogs)"]
    OAUTH["OAuth Providers"]
    API["Third-Party APIs"]
  end
  C1 & C2 & C3 -->|"MCP Protocol"| GW
  GW --> TD
  GW --> INT
  PM --> GW
  SM --> GW
  TD -->|"tools/list"| S1 & S2 & S3
  GW -->|"tools/call"| S1 & S2 & S3
  SM -.->|"inject secrets"| S1 & S2 & S3
  CAT -.->|"pull images"| Containers
  OAUTH -.->|"tokens"| SM
  S1 & S2 & S3 -->|"API calls"| API
  style Gateway fill:#0f1f38,stroke:#1D63ED,stroke-width:2px
  style Containers fill:#0d1c34,stroke:#0DB7ED,stroke-width:2px
  style Clients fill:#152a4a,stroke:#68B8F8,stroke-width:1px
  style External fill:#152a4a,stroke:#506882,stroke-width:1px
Design Philosophy

The gateway follows a "no build step" philosophy for MCP servers -- each server is a self-contained Docker image with embedded metadata. The gateway handles all lifecycle management, secret injection, and protocol routing so individual servers remain simple and focused on tool implementation.

02 · Gateway Proxy Architecture

The gateway is the central nervous system of the platform. It presents a unified MCP interface to clients while internally managing a fleet of containerized MCP servers. Tool calls are routed to the correct backend server, and responses are aggregated transparently.

Proxy Routing Model
graph LR
  subgraph Client["MCP Client"]
    REQ["tools/call github.create_issue"]
  end
  subgraph GW["Gateway Internals"]
    ROUTER["Request Router"]
    TOOL_REG["Tool Registry (merged from all servers)"]
    INTERCEPT["Interceptor Chain: log -> trace -> validate"]
    ALLOW["Allowlist Filter"]
  end
  subgraph Servers["Container Pool"]
    GH["GitHub Server"]
    SLACK["Slack Server"]
    FS["Filesystem Server"]
  end
  REQ --> INTERCEPT
  INTERCEPT --> ROUTER
  ROUTER --> TOOL_REG
  TOOL_REG -->|"resolve server"| ALLOW
  ALLOW -->|"permitted"| GH
  ALLOW -.->|"blocked"| SLACK
  ALLOW -.->|"blocked"| FS
  style GW fill:#0f1f38,stroke:#1D63ED,stroke-width:2px
  style Client fill:#152a4a,stroke:#68B8F8,stroke-width:1px
  style Servers fill:#0d1c34,stroke:#0DB7ED,stroke-width:2px

Tool Aggregation

When a client sends tools/list, the gateway fans out to every connected server, collects their tool inventories, and returns a single merged list. Clients see one flat namespace.

MCP

Request Routing

The tool registry maps each tool name to its owning server. When tools/call arrives, the router resolves the target container and forwards the request via stdio.

Go
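The resolution step amounts to a map lookup over the merged registry. A minimal sketch of that lookup; `toolRegistry`, `resolveServer`, and the hard-coded entries are illustrative assumptions, not the actual pkg/gateway API:

```go
package main

import "fmt"

// toolRegistry maps an exposed tool name to the backend server that owns it.
// The real gateway rebuilds this from each server's tools/list response;
// the entries here are hard-coded for illustration.
var toolRegistry = map[string]string{
	"create_issue": "github",
	"list_repos":   "github",
	"send_message": "slack",
	"read_file":    "filesystem",
}

// resolveServer looks up the owning server for a tool name -- the step the
// router performs before forwarding a tools/call over stdio.
func resolveServer(tool string) (string, error) {
	server, ok := toolRegistry[tool]
	if !ok {
		return "", fmt.Errorf("unknown tool %q", tool)
	}
	return server, nil
}

func main() {
	server, err := resolveServer("create_issue")
	if err != nil {
		panic(err)
	}
	fmt.Printf("tools/call create_issue -> forward to %q container via stdio\n", server)
}
```

Because clients see one flat namespace, an unknown tool name fails here, before any container is contacted.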

Interceptor Chain

Requests and responses pass through a configurable interceptor pipeline. Built-in interceptors handle logging, call tracing, and request validation before routing.

Observability
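The pipeline composes like ordinary middleware: each interceptor wraps the next handler. A sketch of that composition, assuming string payloads for brevity (the real chain operates on JSON-RPC messages, and these names are not the pkg/interceptors API):

```go
package main

import "fmt"

// Handler processes an MCP request payload and returns a response.
type Handler func(req string) string

// Interceptor wraps a Handler, middleware-style.
type Interceptor func(Handler) Handler

// chain applies interceptors in reverse so the first listed one is the
// outermost wrapper (log -> trace -> validate -> route).
func chain(h Handler, interceptors ...Interceptor) Handler {
	for i := len(interceptors) - 1; i >= 0; i-- {
		h = interceptors[i](h)
	}
	return h
}

func logging(next Handler) Handler {
	return func(req string) string {
		fmt.Println("log: request:", req)
		resp := next(req)
		fmt.Println("log: response:", resp)
		return resp
	}
}

func validate(next Handler) Handler {
	return func(req string) string {
		if req == "" {
			return "error: empty request"
		}
		return next(req)
	}
}

func main() {
	route := func(req string) string { return "routed: " + req }
	h := chain(route, logging, validate)
	fmt.Println(h("tools/call create_issue"))
}
```

New interceptors slot in without touching the router, which is what makes the pipeline "configurable" in the sense described above.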

Tool Allowlisting

Profiles can restrict which tools are exposed per server. This prevents accidental exposure of dangerous tools and enables least-privilege configurations.

Security
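Conceptually the filter is a per-server set membership check applied after tool resolution. A sketch, with a hypothetical allowlist shape (the actual profile schema may differ):

```go
package main

import "fmt"

// allowlist maps server name -> set of permitted tool names, as a profile
// might configure it. Shape is illustrative, not the real profile format.
var allowlist = map[string]map[string]bool{
	"github":     {"create_issue": true, "list_repos": true},
	"filesystem": {"read_file": true}, // write_file deliberately excluded
}

// permitted reports whether the profile exposes the given tool on a server.
// Servers with no allowlist entry expose nothing: least privilege by default.
func permitted(server, tool string) bool {
	return allowlist[server][tool]
}

func main() {
	fmt.Println(permitted("filesystem", "read_file"))  // true
	fmt.Println(permitted("filesystem", "write_file")) // false
}
```

Note that the nil-map zero value makes unlisted servers deny everything, which is the safe failure mode for a security filter.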
03 · Container Isolation Model

Each MCP server runs in its own Docker container with minimal host privileges. The gateway manages the full container lifecycle including resource allocation, network policy, and cleanup. Containers are ephemeral by default and can be retained with the --keep flag.

Container Isolation Boundaries
graph TB
  subgraph Host["Host System"]
    DE["Docker Engine"]
    SOCK["Docker Socket"]
    SEC["Secrets Store"]
  end
  subgraph C1["Container: github-server"]
    P1["Process: npx @github/mcp"]
    N1["Network: restricted"]
    R1["Resources: 1 CPU, 2GB RAM"]
    V1["Volumes: none"]
  end
  subgraph C2["Container: postgres-server"]
    P2["Process: uvx pg-mcp"]
    N2["Network: db-net only"]
    R2["Resources: 1 CPU, 2GB RAM"]
    V2["Volumes: /data (read-only)"]
  end
  subgraph C3["Container: filesystem-server"]
    P3["Process: mcp-fs"]
    N3["Network: none"]
    R3["Resources: 0.5 CPU, 1GB RAM"]
    V3["Volumes: /workspace (rw)"]
  end
  DE --> C1 & C2 & C3
  SEC -.->|"inject at runtime"| C1 & C2
  SOCK -.->|"lifecycle mgmt"| DE
  style Host fill:#152a4a,stroke:#506882,stroke-width:1px
  style C1 fill:#0f1f38,stroke:#1D63ED,stroke-width:2px
  style C2 fill:#0f1f38,stroke:#0DB7ED,stroke-width:2px
  style C3 fill:#0f1f38,stroke:#68B8F8,stroke-width:2px
Isolation Layer | Mechanism | Configuration
Process | Each server runs in its own PID namespace inside a Docker container | Default: isolated; no host PID sharing
Network | Containers get restricted network access; can be fully blocked with --block-network | Per-server network policy via profiles
Filesystem | Read-only root filesystem by default; explicit volume mounts for data access | Configurable mounts via server metadata labels
Resources | CPU and memory limits enforced by Docker cgroups | Default: 1 CPU, 2GB RAM; customizable per server
Secrets | Injected at runtime, never baked into images or environment variables | Docker Desktop secrets store or .env fallback
Image Integrity | Signature verification for server images before execution | --block-unsigned flag for strict mode
Self-Describing Images

MCP server images can embed their configuration via the io.docker.server.metadata Docker label. This label contains the server name, description, command, environment variables, required secrets, and config schemas. The gateway reads this at startup -- no external catalog entry is required.
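Reading the label boils down to JSON decoding of an image label value. A sketch of that step; the `serverMetadata` field set here is an assumed shape for illustration, not the gateway's actual schema, and in practice the string would come from the image's label map rather than being inlined:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// serverMetadata models a plausible shape for the io.docker.server.metadata
// label. The real schema is defined by the gateway and may differ.
type serverMetadata struct {
	Name        string   `json:"name"`
	Description string   `json:"description"`
	Command     []string `json:"command"`
	Secrets     []string `json:"secrets"`
}

// parseMetadataLabel decodes the label value into the metadata struct.
func parseMetadataLabel(label string) (serverMetadata, error) {
	var m serverMetadata
	err := json.Unmarshal([]byte(label), &m)
	return m, err
}

func main() {
	// Inlined for illustration; normally read from the image's Labels map.
	label := `{"name":"github","description":"GitHub MCP server","command":["npx","@github/mcp"],"secrets":["GITHUB_TOKEN"]}`
	m, err := parseMetadataLabel(label)
	if err != nil {
		panic(err)
	}
	fmt.Printf("server %s needs secrets %v\n", m.Name, m.Secrets)
}
```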

04 · Secrets & Authentication

The gateway provides a layered secrets management system that keeps credentials out of environment variables and image layers. It supports Docker Desktop's native secrets store, local .env file fallback, and built-in OAuth token flows for services that require browser-based authentication.

Secrets Flow
sequenceDiagram
  participant Dev as Developer
  participant GW as Gateway
  participant DD as Docker Desktop Secrets Store
  participant ENV as .env Fallback
  participant OAuth as OAuth Provider
  participant Server as MCP Server (Container)
  Dev->>GW: docker mcp gateway start
  GW->>DD: Request secrets for profile
  alt Secrets found in DD
    DD-->>GW: API keys, tokens
  else Fallback
    GW->>ENV: Read .env file
    ENV-->>GW: Environment values
  end
  alt Server requires OAuth
    GW->>OAuth: Initiate OAuth flow
    OAuth-->>Dev: Browser redirect
    Dev->>OAuth: Authorize
    OAuth-->>GW: Access token
  end
  GW->>Server: Start container with injected secrets
  Note over Server: Secrets available as runtime environment vars (never in image layers)

Docker Desktop Store

Primary secrets backend. Credentials are stored encrypted in Docker Desktop's credential store and injected into containers at startup time.

Docker Desktop

.env Fallback

For environments without Docker Desktop (Docker CE, WSL2), the gateway reads secrets from a local .env file as a fallback mechanism.

Standalone
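The layered lookup is simple: try the primary store, then fall back to parsed .env values. A sketch of both steps, with `parseDotEnv` and `lookupSecret` as illustrative names (the real fallback supports more of the .env format than this minimal parser, which skips quoting and multi-line values):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseDotEnv reads KEY=VALUE lines, skipping blanks and # comments.
func parseDotEnv(contents string) map[string]string {
	vars := map[string]string{}
	scanner := bufio.NewScanner(strings.NewReader(contents))
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if key, value, ok := strings.Cut(line, "="); ok {
			vars[strings.TrimSpace(key)] = strings.TrimSpace(value)
		}
	}
	return vars
}

// lookupSecret checks the primary store first (standing in for Docker
// Desktop's secrets store) and falls back to parsed .env values.
func lookupSecret(primary, dotenv map[string]string, key string) (string, bool) {
	if v, ok := primary[key]; ok {
		return v, true
	}
	v, ok := dotenv[key]
	return v, ok
}

func main() {
	dotenv := parseDotEnv("# local fallback\nGITHUB_TOKEN=ghp_local\n")
	primary := map[string]string{} // empty: simulates no Docker Desktop store
	if v, ok := lookupSecret(primary, dotenv, "GITHUB_TOKEN"); ok {
		fmt.Println("injecting GITHUB_TOKEN =", v)
	}
}
```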

OAuth Token Flow

Built-in OAuth support for services like GitHub and Slack. The gateway manages the browser-based auth flow and stores tokens for subsequent container starts.

OAuth 2.0

Secrets Scanning

The secretsscan package detects accidentally exposed credentials in server configurations and image metadata before containers launch.

Security
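A pre-launch scan of this kind is typically a set of regular expressions run over configuration text. The patterns below are illustrative only (a GitHub-style token prefix and a generic long-value assignment); they are not the secretsscan package's actual rules:

```go
package main

import (
	"fmt"
	"regexp"
)

// secretPatterns holds example detection rules: a GitHub personal access
// token shape, and a generic "key/token/secret = long random value" match.
var secretPatterns = []*regexp.Regexp{
	regexp.MustCompile(`ghp_[A-Za-z0-9]{36}`),
	regexp.MustCompile(`(?i)(api[_-]?key|token|secret)\s*[:=]\s*["']?[A-Za-z0-9/+_-]{20,}`),
}

// scanForSecrets reports whether any pattern matches the input, e.g. a
// server config or image metadata string checked before the container launches.
func scanForSecrets(s string) bool {
	for _, p := range secretPatterns {
		if p.MatchString(s) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(scanForSecrets(`token = "abcdefghijklmnopqrstuvwxyz123456"`)) // true
	fmt.Println(scanForSecrets("image: my-server:latest"))                    // false
}
```

A match blocks the launch rather than silently shipping the credential into a container's environment.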
05 · Message Flow & Call Tracing

The MCP protocol defines a request/response pattern for tool discovery and invocation. The gateway acts as an MCP server to clients and an MCP client to backend servers, performing protocol translation and request fan-out transparently.

Tool Discovery Flow
sequenceDiagram
  participant Client as AI Client
  participant GW as Gateway
  participant Log as Interceptor (Logger/Tracer)
  participant S1 as Server A
  participant S2 as Server B
  participant S3 as Server C
  Client->>GW: tools/list
  GW->>Log: log request
  par Fan-out to all servers
    GW->>S1: tools/list
    GW->>S2: tools/list
    GW->>S3: tools/list
  end
  S1-->>GW: [create_issue, list_repos]
  S2-->>GW: [send_message, list_channels]
  S3-->>GW: [read_file, write_file]
  GW->>GW: Merge & deduplicate
  GW->>Log: log aggregated response
  GW-->>Client: [create_issue, list_repos, send_message, list_channels, read_file, write_file]
  Note over Client,S3: Tool list change notifications trigger automatic re-discovery
Tool Invocation Flow
sequenceDiagram
  participant LLM as LLM
  participant Client as MCP Client
  participant GW as Gateway
  participant Log as Interceptor (Logger/Tracer)
  participant Server as Target Server
  LLM->>Client: Use tool: create_issue
  Client->>GW: tools/call {name: "create_issue", args: {...}}
  GW->>Log: trace_start(call_id, tool, args)
  GW->>GW: Resolve tool -> server mapping
  GW->>Server: tools/call {name: "create_issue", args: {...}}
  Server-->>GW: {result: "Issue #42 created"}
  GW->>Log: trace_end(call_id, result, duration)
  GW-->>Client: {result: "Issue #42 created"}
  Client-->>LLM: Tool result

Call Tracing & Observability

The gateway's interceptor system provides built-in call tracing. Every tool invocation is logged with its call ID, tool name, arguments, result, and duration. The telemetry package provides structured metrics, while the logs package handles persistent log storage and retrieval.
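The trace_start/trace_end pattern from the invocation diagram can be sketched as a wrapper that assigns a call ID and measures duration. The names here (`traceCall`, `callCounter`) are illustrative, not the pkg/telemetry API:

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// callCounter issues monotonically increasing call IDs.
var callCounter atomic.Int64

// traceCall wraps a tool invocation with trace_start/trace_end events,
// recording the call ID, outcome, and wall-clock duration.
func traceCall(tool string, invoke func() (string, error)) (string, error) {
	callID := callCounter.Add(1)
	start := time.Now()
	fmt.Printf("trace_start call=%d tool=%s\n", callID, tool)
	result, err := invoke()
	fmt.Printf("trace_end call=%d duration=%s err=%v\n", callID, time.Since(start), err)
	return result, err
}

func main() {
	result, _ := traceCall("create_issue", func() (string, error) {
		return "Issue #42 created", nil
	})
	fmt.Println(result)
}
```

In the gateway this sits in the interceptor chain, so every routed tools/call is traced without the backend servers being aware of it.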

List Changed Notifications

When a backend server's tool inventory changes (new tools added, tools removed), it sends a tools/list_changed notification. The gateway catches this, re-queries the affected server, updates its merged registry, and propagates the change notification upstream to all connected clients.

06 · Profiles & Catalogs

Profiles are the primary organizational unit -- named collections of MCP servers with per-server configuration and tool allowlists. Catalogs are OCI-based registries that distribute pre-configured server definitions. Together they enable shareable, reproducible AI tool environments.

Profile & Catalog System
graph TB
  subgraph Sources["Server Reference Types"]
    CAT_REF["catalog://mcp/catalog/github"]
    OCI_REF["docker://my-server:latest"]
    REG_REF["https://registry.modelcontextprotocol.io/..."]
    FILE_REF["file://./my-server.yaml"]
  end
  subgraph Profile["Profile: my-dev-env"]
    S1["Server: github; tools: [create_issue, list_repos]"]
    S2["Server: postgres; tools: [query, describe_table]"]
    S3["Server: filesystem; tools: [read_file, write_file]"]
    CONF["Config: github.timeout=30, postgres.host=localhost"]
  end
  subgraph Distribution["OCI Distribution"]
    PUSH["docker mcp catalog push"]
    PULL["docker mcp catalog pull"]
    REG["OCI Registry (Docker Hub, GHCR)"]
  end
  subgraph DB["Local Database"]
    PROFILES["Profiles Store"]
    CATALOGS["Catalogs Store"]
    SETTINGS["Server Settings"]
  end
  CAT_REF & OCI_REF & REG_REF & FILE_REF --> Profile
  Profile --> DB
  Profile -->|"export"| PUSH --> REG
  REG --> PULL -->|"import"| Profile
  style Profile fill:#0f1f38,stroke:#1D63ED,stroke-width:2px
  style Sources fill:#152a4a,stroke:#68B8F8,stroke-width:1px
  style Distribution fill:#152a4a,stroke:#0DB7ED,stroke-width:1px
  style DB fill:#0d1c34,stroke:#506882,stroke-width:1px
Reference Type | Format | Description
Catalog | catalog://mcp/docker-mcp-catalog/github | References a server definition from a registered catalog; the default catalog ships with Docker Desktop
OCI Image | docker://my-server:latest | Direct reference to a Docker image; the image must include the MCP server metadata label
MCP Registry | https://registry.modelcontextprotocol.io/v0/servers/<id> | Fetches a server definition from the official MCP registry; resolved to a Docker image at runtime
Local File | file://./server.yaml | YAML file defining the server locally; useful for development and testing
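Classifying a reference comes down to dispatching on its scheme prefix. A sketch of that dispatch; the matching logic and the type names returned are illustrative, not the gateway's parser:

```go
package main

import (
	"fmt"
	"strings"
)

// refType classifies a server reference string into one of the four
// reference types from the table above.
func refType(ref string) string {
	switch {
	case strings.HasPrefix(ref, "catalog://"):
		return "catalog"
	case strings.HasPrefix(ref, "docker://"):
		return "oci-image"
	case strings.HasPrefix(ref, "https://"):
		return "mcp-registry"
	case strings.HasPrefix(ref, "file://"):
		return "local-file"
	default:
		return "unknown"
	}
}

func main() {
	for _, ref := range []string{
		"catalog://mcp/docker-mcp-catalog/github",
		"docker://my-server:latest",
		"file://./server.yaml",
	} {
		fmt.Printf("%-42s -> %s\n", ref, refType(ref))
	}
}
```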
07 · Package Map

The codebase is organized as a Go module under cmd/docker-mcp (CLI entry point) and pkg/ (library packages). Below is the internal dependency graph showing how the major packages compose into the gateway system.

Package Dependency Graph
graph TB
  subgraph CLI["cmd/docker-mcp"]
    MAIN["main.go (Docker CLI Plugin)"]
  end
  subgraph Core["Core Packages"]
    GATEWAY["pkg/gateway: Gateway server & routing"]
    MCP_PKG["pkg/mcp: MCP protocol impl"]
    CLIENT["pkg/client: MCP client for backends"]
    SERVER["pkg/server: Server lifecycle mgmt"]
  end
  subgraph Config["Configuration"]
    CONFIG["pkg/config: Config loading & validation"]
    PROFILE["pkg/policy: Profile & policy engine"]
    DB["pkg/db: Local SQLite store"]
    MIGRATE["pkg/migrate: Schema migrations"]
  end
  subgraph Security["Security"]
    OAUTH["pkg/oauth: OAuth token flows"]
    SIGS["pkg/signatures: Image verification"]
    SECRETS["pkg/secretsscan: Credential detection"]
  end
  subgraph Distribution["Distribution"]
    CATALOG["pkg/catalog: Catalog management"]
    OCI["pkg/oci: OCI registry client"]
    MCPREG["pkg/mcpregistry: MCP registry client"]
    FETCH["pkg/fetch: HTTP fetch utilities"]
  end
  subgraph Observability["Observability"]
    LOG["pkg/log: Structured logging"]
    LOGS["pkg/logs: Log storage & query"]
    TEL["pkg/telemetry: Metrics & tracing"]
    INTERCEPT["pkg/interceptors: Request/response hooks"]
  end
  subgraph Infra["Infrastructure"]
    DOCKER["pkg/docker: Docker API client"]
    SOCKETS["pkg/sockets: Transport layer"]
    HEALTH["pkg/health: Health checks"]
    RETRY["pkg/retry: Retry logic"]
    VALIDATE["pkg/validate: Input validation"]
  end
  MAIN --> GATEWAY & CONFIG & PROFILE
  GATEWAY --> MCP_PKG & CLIENT & SERVER & INTERCEPT
  SERVER --> DOCKER & SOCKETS & HEALTH
  CLIENT --> MCP_PKG & SOCKETS
  CONFIG --> DB & MIGRATE & VALIDATE
  PROFILE --> CATALOG & MCPREG
  CATALOG --> OCI & FETCH
  GATEWAY --> OAUTH & SECRETS
  INTERCEPT --> LOG & LOGS & TEL
  DOCKER --> RETRY
  SERVER --> SIGS
  style CLI fill:#152a4a,stroke:#68B8F8,stroke-width:2px
  style Core fill:#0f1f38,stroke:#1D63ED,stroke-width:2px
  style Config fill:#0d1c34,stroke:#0DB7ED,stroke-width:1px
  style Security fill:#0d1c34,stroke:#F1C40F,stroke-width:1px
  style Distribution fill:#0d1c34,stroke:#9B59B6,stroke-width:1px
  style Observability fill:#0d1c34,stroke:#2ECC71,stroke-width:1px
  style Infra fill:#0d1c34,stroke:#506882,stroke-width:1px

pkg/gateway

Core gateway server implementation. Handles client connections, tool aggregation, request routing, and the interceptor pipeline. The central orchestrator.

Go

pkg/mcp

Pure MCP protocol implementation. JSON-RPC message types, tool schemas, resource definitions, and prompt templates conforming to MCP spec 2025-11-25.

MCP

pkg/docker

Docker Engine API client. Container creation, lifecycle management, log streaming, resource limits, and network configuration.

Docker

pkg/catalog

Catalog management for server discovery. Supports multiple catalog sources, OCI push/pull, and the Docker MCP Catalog as a default source.

OCI

pkg/interceptors

Middleware-style request/response interceptors. Pluggable chain for logging, tracing, rate limiting, and custom transformations on MCP messages.

Middleware

pkg/oauth

OAuth 2.0 client with browser-based authorization code flow. Manages token refresh, storage, and injection for servers requiring OAuth credentials.

OAuth
08 · Transport Modes

The gateway supports three transport modes for client-to-gateway communication. The transport between the gateway and backend servers is always stdio (stdin/stdout of the container process). Client-facing transport is configurable.

Transport Architecture
graph LR
  subgraph stdio_mode["stdio Mode (Default)"]
    C1["Single Client"] -->|"stdin/stdout"| GW1["Gateway"]
  end
  subgraph sse_mode["SSE Mode"]
    C2a["Client A"] -->|"HTTP GET /sse"| GW2["Gateway :PORT"]
    C2b["Client B"] -->|"HTTP GET /sse"| GW2
  end
  subgraph stream_mode["Streaming Mode"]
    C3a["Client A"] -->|"HTTP POST"| GW3["Gateway :PORT"]
    C3b["Client B"] -->|"HTTP POST"| GW3
    C3c["Client C"] -->|"HTTP POST"| GW3
  end
  subgraph backends["Backend Servers (always stdio)"]
    GW1 & GW2 & GW3 -->|"stdio"| S["Container stdin/stdout"]
  end
  style stdio_mode fill:#0f1f38,stroke:#1D63ED,stroke-width:1px
  style sse_mode fill:#0f1f38,stroke:#0DB7ED,stroke-width:1px
  style stream_mode fill:#0f1f38,stroke:#68B8F8,stroke-width:1px
  style backends fill:#0d1c34,stroke:#506882,stroke-width:1px
Transport | Clients | Protocol | Use Case
stdio | Single | stdin/stdout pipes | Default mode; direct integration with Claude Desktop, VS Code, Cursor via MCP client config
SSE | Multiple | HTTP Server-Sent Events | Multi-client access; gateway listens on a configurable port and clients subscribe to event streams
Streaming | Multiple | HTTP request/response | HTTP-based transport with configurable ports; supports concurrent multi-client access
Docker Compose Integration

The gateway can run as a Docker Compose service itself, with the Docker socket mounted as a volume (/var/run/docker.sock). This enables deployment alongside other services in a development stack, with the gateway managing sibling containers on the same Docker engine.

09 · CLI Command Structure

The gateway ships as a Docker CLI plugin (docker mcp) built from cmd/docker-mcp. It also works as a standalone binary for environments without Docker Desktop. Commands organize around profiles, catalogs, gateways, and tools.

Command Hierarchy
graph TB
  ROOT["docker mcp"]
  subgraph ProfileCmds["Profile Management"]
    P_CREATE["profile create"]
    P_LIST["profile list"]
    P_SHOW["profile show"]
    P_ADD["profile add-server"]
    P_TOOLS["profile tools"]
    P_CONFIG["profile config set"]
  end
  subgraph CatalogCmds["Catalog Operations"]
    C_LIST["catalog list"]
    C_PUSH["catalog push"]
    C_PULL["catalog pull"]
    C_ADD["catalog add"]
  end
  subgraph GatewayCmds["Gateway Runtime"]
    G_START["gateway start --transport stdio / sse / streaming --profile my-profile --port 8080"]
    G_STATUS["gateway status"]
    G_STOP["gateway stop"]
  end
  subgraph ToolCmds["Tool Management"]
    T_LIST["tool list"]
    T_INSPECT["tool inspect"]
    T_CALL["tool call"]
  end
  ROOT --> ProfileCmds & CatalogCmds & GatewayCmds & ToolCmds
  style ROOT fill:#152a4a,stroke:#68B8F8,stroke-width:2px
  style ProfileCmds fill:#0f1f38,stroke:#1D63ED,stroke-width:1px
  style CatalogCmds fill:#0f1f38,stroke:#9B59B6,stroke-width:1px
  style GatewayCmds fill:#0f1f38,stroke:#2ECC71,stroke-width:1px
  style ToolCmds fill:#0f1f38,stroke:#0DB7ED,stroke-width:1px

Profile Commands

Create, list, and configure profiles. Add servers with specific reference types. Enable/disable individual tools per server. Set server-specific config via dot notation.

docker mcp profile

Catalog Commands

Manage server catalogs. Push profiles to OCI registries for team sharing. Pull shared catalogs. Add third-party catalog sources beyond Docker's default.

docker mcp catalog

Gateway Commands

Start the gateway with a specific profile and transport mode. Monitor running gateway status. Resource limits, network blocking, and watch mode for config auto-reload.

docker mcp gateway

Tool Commands

List all available tools across running servers. Inspect tool schemas and documentation. Directly invoke tools from the command line for testing and debugging.

docker mcp tool
Installation

The plugin installs to ~/.docker/cli-plugins/docker-mcp via make docker-mcp. Requires Go 1.24+ for development builds. Docker Desktop 4.59+ includes the MCP Toolkit with pre-built binaries. The environment variable CLAUDE_CONFIG_DIR can override the default Claude Code config path for separate installations.
