Architecture Maps

E2B — Sandboxed Cloud for AI

Interactive architecture map of E2B's open-source infrastructure for running AI-generated code in secure Firecracker microVM sandboxes. Built from public documentation, GitHub repos, and API specs.

Open Source · Go + TypeScript + Python · Firecracker microVMs · GCP + AWS · Updated: Mar 2026
01

Platform Overview

E2B is open-source infrastructure that lets AI agents execute code in secure, isolated cloud sandboxes. Each sandbox is a Firecracker microVM with its own filesystem, networking, and process tree. Sandboxes boot in ~150ms and can run for up to 24 hours.

- Sandbox boot: ~150ms
- Max runtime (Pro): 24h
- Go codebase: 83.8%
- Cloud providers: 2
- SDKs: 4 (JS/PY/CLI/Connect)
Two Repositories

E2B is split across two primary repos: e2b-dev/E2B contains the SDKs, CLI, and API specs. e2b-dev/infra contains the Go backend services, Terraform IaC, Firecracker integration, and orchestration layer. Both are open source under Apache 2.0.

02

High-Level Architecture

The platform follows a layered architecture: SDK clients communicate through a client proxy to the API server, which coordinates with the orchestrator to provision Firecracker microVMs. Each VM runs an envd daemon that handles in-sandbox operations.

E2B Service Communication Flow
graph LR
    subgraph Client["SDK / CLI"]
        JS["JS/TS SDK"]
        PY["Python SDK"]
        CLI["CLI"]
    end

    subgraph Edge["Edge Layer"]
        CP["Client Proxy<br/>(Consul discovery)"]
    end

    subgraph Control["Control Plane"]
        API["API Server<br/>(Gin REST)"]
        PG["PostgreSQL"]
        RD["Redis"]
        CH["ClickHouse"]
    end

    subgraph Compute["Compute Plane"]
        ORCH["Orchestrator<br/>(gRPC)"]
        FC1["Firecracker VM 1"]
        FC2["Firecracker VM 2"]
        FCN["Firecracker VM N"]
    end

    JS --> CP
    PY --> CP
    CLI --> CP
    CP --> API
    CP --> ORCH
    API --> PG
    API --> RD
    API --> CH
    API --> ORCH
    ORCH --> FC1
    ORCH --> FC2
    ORCH --> FCN

    style JS fill:#1A472A,stroke:#33CC66,color:#F5F5F0
    style PY fill:#1A472A,stroke:#33CC66,color:#F5F5F0
    style CLI fill:#1A472A,stroke:#33CC66,color:#F5F5F0
    style CP fill:#1A472A,stroke:#D4956A,color:#F5F5F0
    style API fill:#0F2D1A,stroke:#4169E1,color:#F5F5F0
    style PG fill:#0F2D1A,stroke:#DAA520,color:#F5F5F0
    style RD fill:#0F2D1A,stroke:#FF3333,color:#F5F5F0
    style CH fill:#0F2D1A,stroke:#DAA520,color:#F5F5F0
    style ORCH fill:#0F2D1A,stroke:#B87333,color:#F5F5F0
    style FC1 fill:#2D2D2D,stroke:#FF3333,color:#F5F5F0
    style FC2 fill:#2D2D2D,stroke:#FF3333,color:#F5F5F0
    style FCN fill:#2D2D2D,stroke:#FF3333,color:#F5F5F0

API Server

REST API built with the Gin framework (Go). Handles authentication via Supabase JWT, sandbox CRUD, template management, and usage tracking.

Port 80 | OpenAPI codegen

Orchestrator

Core VM lifecycle manager. Provisions Firecracker microVMs, manages networking via iptables/netlink, handles template caching, and exposes a gRPC interface.

Requires sudo | gRPC server

Client Proxy

Edge routing layer that uses Consul for service discovery to route SDK requests to the correct orchestrator node. Redis-backed state management.

Consul SD | Redis state
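The proxy's job reduces to a lookup: resolve which orchestrator node currently hosts the target sandbox, confirm that node is still registered in service discovery, and forward. A minimal sketch of that routing decision, with plain dicts standing in for Consul (node registry) and Redis (sandbox-to-node state), and all names and addresses invented for illustration:

```python
# Minimal model of the client proxy's routing decision.
# In production this state lives in Consul (healthy-node registry) and
# Redis (sandbox -> node mapping); here both are plain dicts.

class RoutingError(Exception):
    pass

def route(sandbox_id: str, sandbox_index: dict, healthy_nodes: set) -> str:
    """Return the orchestrator address that should receive this request."""
    node = sandbox_index.get(sandbox_id)
    if node is None:
        raise RoutingError(f"unknown sandbox: {sandbox_id}")
    if node not in healthy_nodes:
        raise RoutingError(f"node {node} not registered in service discovery")
    return node

# Example state: two orchestrator nodes, two live sandboxes.
nodes = {"10.0.0.5:5008", "10.0.0.6:5008"}
index = {"sbx-a": "10.0.0.5:5008", "sbx-b": "10.0.0.6:5008"}

print(route("sbx-a", index, nodes))   # -> 10.0.0.5:5008
```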

Envd

In-VM daemon running inside each Firecracker sandbox. Exposes Connect RPC APIs for process management and filesystem operations on port 49983.

Port 49983 | Connect RPC
03

API Server

The API server (packages/api/) is the primary control plane entry point. It implements a REST API defined via OpenAPI spec, with code-generated handlers, types, and specs. Authentication flows through Supabase JWT verification.
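The middleware ordering described above (auth, then rate limiting, then instrumentation, then the handler) can be pictured as function composition, each layer wrapping the next. This sketch models the shape of that chain in Python; the handler names, token values, and limits are invented, not the actual Gin code:

```python
# Illustrative model of the API server's middleware chain: each
# middleware wraps the next handler, mirroring the documented ordering
# (JWT auth -> rate limit -> handler). Values are invented.

def jwt_auth(next_handler):
    def handler(request):
        if request.get("token") != "valid-jwt":
            return {"status": 401}          # reject before doing any work
        return next_handler(request)
    return handler

def rate_limit(next_handler, limit=2):
    calls = {"n": 0}                        # per-chain request counter
    def handler(request):
        calls["n"] += 1
        if calls["n"] > limit:
            return {"status": 429}
        return next_handler(request)
    return handler

def create_sandbox(request):
    return {"status": 201, "sandbox_id": "sbx-123"}

# Compose outermost-first, as a router would.
handler = jwt_auth(rate_limit(create_sandbox))

print(handler({"token": "valid-jwt"}))  # {'status': 201, 'sandbox_id': 'sbx-123'}
print(handler({"token": "bad"}))        # {'status': 401}
```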

API Server Internal Architecture
graph TD
    subgraph Inbound["Inbound"]
        REQ["HTTP Request"]
    end

    subgraph Middleware["Middleware Stack"]
        AUTH["JWT Auth<br/>(Supabase)"]
        RATE["Rate Limiting"]
        OTEL["OpenTelemetry<br/>Instrumentation"]
    end

    subgraph Handlers["Handler Layer"]
        STORE["APIStore<br/>(store.go)"]
        SBX["Sandbox Handlers<br/>(create, list, kill)"]
        TPL["Template Handlers<br/>(build, list)"]
        USR["User / Team<br/>Handlers"]
    end

    subgraph Data["Data Layer"]
        ENT["Ent ORM<br/>(PostgreSQL)"]
        SQLC["sqlc Queries<br/>(type-safe)"]
        RCACHE["Redis Cache"]
        CHINS["ClickHouse<br/>(analytics insert)"]
    end

    subgraph Downstream["Downstream"]
        ORCHG["Orchestrator<br/>(gRPC call)"]
    end

    REQ --> AUTH
    AUTH --> RATE
    RATE --> OTEL
    OTEL --> STORE
    STORE --> SBX
    STORE --> TPL
    STORE --> USR
    SBX --> ENT
    SBX --> ORCHG
    TPL --> ENT
    USR --> ENT
    ENT --> SQLC
    STORE --> RCACHE
    STORE --> CHINS

    style REQ fill:#1A472A,stroke:#D4956A,color:#F5F5F0
    style AUTH fill:#0F2D1A,stroke:#9370DB,color:#F5F5F0
    style RATE fill:#0F2D1A,stroke:#9370DB,color:#F5F5F0
    style OTEL fill:#0F2D1A,stroke:#9370DB,color:#F5F5F0
    style STORE fill:#0F2D1A,stroke:#4169E1,color:#F5F5F0
    style SBX fill:#1A472A,stroke:#4169E1,color:#F5F5F0
    style TPL fill:#1A472A,stroke:#4169E1,color:#F5F5F0
    style USR fill:#1A472A,stroke:#4169E1,color:#F5F5F0
    style ENT fill:#0F2D1A,stroke:#DAA520,color:#F5F5F0
    style SQLC fill:#0F2D1A,stroke:#DAA520,color:#F5F5F0
    style RCACHE fill:#0F2D1A,stroke:#FF3333,color:#F5F5F0
    style CHINS fill:#0F2D1A,stroke:#DAA520,color:#F5F5F0
    style ORCHG fill:#0F2D1A,stroke:#B87333,color:#F5F5F0

Code Generation Pipeline

The API server relies heavily on code generation to keep interfaces consistent and type-safe across the stack.

| Generator    | Input            | Output           | Purpose                                  |
|--------------|------------------|------------------|------------------------------------------|
| oapi-codegen | spec/openapi.yml | api/*.gen.go     | REST handlers, types, specs from OpenAPI |
| protoc       | spec/*.proto     | shared/pkg/grpc/ | gRPC stubs for orchestrator + envd       |
| sqlc         | db/queries/*.sql | db/internal/db/  | Type-safe database queries               |
| mockery      | .mockery.yaml    | mocks/ dirs      | Test mocks for interfaces                |
04

Orchestrator

The orchestrator (packages/orchestrator/) is the core VM lifecycle manager. It runs on bare-metal or nested-virtualization nodes, requires root privileges, and directly manages Firecracker processes, Linux networking, and block device storage.

Orchestrator Component Breakdown
graph TD
    subgraph Server["gRPC Server"]
        GRPC["gRPC Interface<br/>(internal/server/)"]
    end

    subgraph Sandbox["Sandbox Management"]
        SBMGR["Sandbox Manager<br/>(internal/sandbox/)"]
        FCINTR["Firecracker Integration<br/>(internal/sandbox/fc/)"]
        NETMGR["Network Manager<br/>(internal/sandbox/network/)"]
        NBDMGR["NBD Storage<br/>(internal/sandbox/nbd/)"]
    end

    subgraph Template["Template Layer"]
        TCACHE["Template Cache<br/>(internal/sandbox/template/)"]
        GCS["GCS / S3 Bucket<br/>(template storage)"]
    end

    subgraph Tools["Utilities"]
        CLEAN["NFS Cache Cleaner<br/>(cmd/clean-nfs-cache/)"]
        BUILD["Template Builder<br/>(cmd/build-template/)"]
    end

    GRPC --> SBMGR
    SBMGR --> FCINTR
    SBMGR --> NETMGR
    SBMGR --> NBDMGR
    SBMGR --> TCACHE
    TCACHE --> GCS
    BUILD --> GCS

    style GRPC fill:#0F2D1A,stroke:#B87333,color:#F5F5F0
    style SBMGR fill:#1A472A,stroke:#B87333,color:#F5F5F0
    style FCINTR fill:#2D2D2D,stroke:#FF3333,color:#F5F5F0
    style NETMGR fill:#1A472A,stroke:#D4956A,color:#F5F5F0
    style NBDMGR fill:#1A472A,stroke:#DAA520,color:#F5F5F0
    style TCACHE fill:#1A472A,stroke:#FFBF00,color:#F5F5F0
    style GCS fill:#0F2D1A,stroke:#DAA520,color:#F5F5F0
    style CLEAN fill:#0F2D1A,stroke:#7A8A70,color:#F5F5F0
    style BUILD fill:#0F2D1A,stroke:#FFBF00,color:#F5F5F0
Root Required

The orchestrator requires sudo/root privileges because Firecracker uses KVM for hardware-level virtualization, and Linux networking (iptables, netlink) requires elevated permissions for tap device and namespace management.

05

Firecracker MicroVMs

Firecracker is Amazon's open-source VMM (Virtual Machine Monitor) designed for serverless workloads. E2B uses it to create lightweight, isolated Linux VMs that boot in milliseconds. Each sandbox is a complete microVM with its own kernel, rootfs, networking, and process space.

Firecracker Sandbox Isolation Model
graph TD
    subgraph Host["Host Machine (Bare Metal / Nested Virt)"]
        KVM["Linux KVM<br/>(hardware virtualization)"]

        subgraph VM1["Sandbox A (microVM)"]
            K1["Guest Kernel"]
            R1["Rootfs<br/>(from template)"]
            E1["Envd Daemon<br/>:49983"]
            P1["User Processes"]
        end

        subgraph VM2["Sandbox B (microVM)"]
            K2["Guest Kernel"]
            R2["Rootfs<br/>(from template)"]
            E2["Envd Daemon<br/>:49983"]
            P2["User Processes"]
        end

        ORCH2["Orchestrator<br/>(manages lifecycle)"]
    end

    ORCH2 --> KVM
    KVM --> VM1
    KVM --> VM2
    E1 --> P1
    K1 --> R1
    E2 --> P2
    K2 --> R2

    style KVM fill:#2D2D2D,stroke:#FF3333,color:#F5F5F0
    style K1 fill:#0F2D1A,stroke:#B87333,color:#F5F5F0
    style R1 fill:#0F2D1A,stroke:#DAA520,color:#F5F5F0
    style E1 fill:#1A472A,stroke:#33CC66,color:#F5F5F0
    style P1 fill:#1A472A,stroke:#D4956A,color:#F5F5F0
    style K2 fill:#0F2D1A,stroke:#B87333,color:#F5F5F0
    style R2 fill:#0F2D1A,stroke:#DAA520,color:#F5F5F0
    style E2 fill:#1A472A,stroke:#33CC66,color:#F5F5F0
    style P2 fill:#1A472A,stroke:#D4956A,color:#F5F5F0
    style ORCH2 fill:#0F2D1A,stroke:#B87333,color:#F5F5F0

Sandbox Lifecycle

Sandbox State Machine
graph LR
    CREATE["create()"] --> BOOTING["Booting<br/>(~150ms)"]
    BOOTING --> RUNNING["Running<br/>(active sandbox)"]
    RUNNING --> PAUSED["Paused<br/>(state preserved)"]
    PAUSED --> RUNNING
    RUNNING --> TIMEOUT["Timeout<br/>(auto-pause)"]
    TIMEOUT --> PAUSED
    RUNNING --> KILLED["Killed<br/>(manual kill)"]
    PAUSED --> RESUMED["resume()"]
    RESUMED --> RUNNING

    style CREATE fill:#1A472A,stroke:#33CC66,color:#F5F5F0
    style BOOTING fill:#0F2D1A,stroke:#FFBF00,color:#F5F5F0
    style RUNNING fill:#0F2D1A,stroke:#33CC66,color:#F5F5F0
    style PAUSED fill:#0F2D1A,stroke:#DAA520,color:#F5F5F0
    style TIMEOUT fill:#0F2D1A,stroke:#D4956A,color:#F5F5F0
    style KILLED fill:#2D2D2D,stroke:#FF3333,color:#F5F5F0
    style RESUMED fill:#1A472A,stroke:#33CC66,color:#F5F5F0
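The documented lifecycle can be captured as an explicit transition table. This is a simplified model of the state machine above (the brief Booting state is folded into create for brevity), not E2B's internal code:

```python
# Sandbox lifecycle as a transition table, modeling the documented
# state machine (simplified: create() -> booting -> running collapsed
# into one step; Killed is terminal).

TRANSITIONS = {
    ("none", "create"): "running",      # boots in ~150ms, then running
    ("running", "pause"): "paused",     # full VM state preserved
    ("running", "timeout"): "paused",   # timeout auto-pauses
    ("running", "kill"): "killed",      # manual kill, terminal
    ("paused", "resume"): "running",    # resume() restores the VM
}

def step(state: str, event: str) -> str:
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} from {state}")

s = step("none", "create")
s = step(s, "timeout")   # idle past the timeout: auto-pause
s = step(s, "resume")    # resume on demand
print(s)                 # running
```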

Pause / Resume

Full VM state (memory, processes, filesystem) is preserved indefinitely when paused. Enables long-running workloads that exceed continuous runtime limits by resuming on demand.

State: snapshot to storage

Timeout Management

Configurable per-sandbox timeouts (JS: timeoutMs, Python: timeout). Can be dynamically extended via setTimeout() / set_timeout() during runtime. Base tier: 1h max. Pro tier: 24h max.

Default: varies by plan
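The effect of tier limits on a requested timeout is a simple clamp against the plan ceiling (1h base, 24h Pro, per the text). The helper name and tier keys here are hypothetical, purely to illustrate the arithmetic:

```python
# Hypothetical helper: clamp a requested sandbox timeout to the plan's
# ceiling (1h base tier, 24h Pro tier, as stated in the text).

TIER_MAX_SECONDS = {"base": 1 * 3600, "pro": 24 * 3600}

def effective_timeout(requested_seconds: int, tier: str) -> int:
    if requested_seconds <= 0:
        raise ValueError("timeout must be positive")
    return min(requested_seconds, TIER_MAX_SECONDS[tier])

print(effective_timeout(2 * 3600, "base"))  # 3600: capped at 1h
print(effective_timeout(2 * 3600, "pro"))   # 7200: within the 24h cap
```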

NBD Storage

Network Block Device (NBD) protocol provides the rootfs disk to each microVM. Templates are cached locally from GCS/S3 buckets and mounted as block devices for fast VM startup.

internal/sandbox/nbd/
06

Envd — In-VM Daemon

Envd (packages/envd/) is the daemon that runs inside every Firecracker sandbox. It serves as the bridge between the external SDK/API and the isolated VM environment, exposing Connect RPC APIs for process and filesystem management.

Envd RPC Interface
graph TD
    subgraph External["External (via Orchestrator)"]
        SDK2["SDK Client"]
    end

    subgraph Envd["Envd Daemon (:49983)"]
        CRPC["Connect RPC<br/>(chi router)"]

        subgraph ProcessAPI["Process API"]
            PSTART["Start Process"]
            PSIGNAL["Signal Process"]
            PLIST["List Processes"]
            PSTREAM["Stream Output<br/>(stdout/stderr)"]
        end

        subgraph FSAPI["Filesystem API"]
            FREAD["Read File"]
            FWRITE["Write File"]
            FLIST["List Directory"]
            FWATCH["Watch Changes"]
        end
    end

    subgraph Kernel["Guest OS"]
        PROC["Process Tree"]
        FS["Filesystem"]
    end

    SDK2 --> CRPC
    CRPC --> ProcessAPI
    CRPC --> FSAPI
    PSTART --> PROC
    PSIGNAL --> PROC
    PLIST --> PROC
    PSTREAM --> PROC
    FREAD --> FS
    FWRITE --> FS
    FLIST --> FS
    FWATCH --> FS

    style SDK2 fill:#1A472A,stroke:#33CC66,color:#F5F5F0
    style CRPC fill:#0F2D1A,stroke:#4169E1,color:#F5F5F0
    style PSTART fill:#1A472A,stroke:#B87333,color:#F5F5F0
    style PSIGNAL fill:#1A472A,stroke:#B87333,color:#F5F5F0
    style PLIST fill:#1A472A,stroke:#B87333,color:#F5F5F0
    style PSTREAM fill:#1A472A,stroke:#B87333,color:#F5F5F0
    style FREAD fill:#1A472A,stroke:#DAA520,color:#F5F5F0
    style FWRITE fill:#1A472A,stroke:#DAA520,color:#F5F5F0
    style FLIST fill:#1A472A,stroke:#DAA520,color:#F5F5F0
    style FWATCH fill:#1A472A,stroke:#DAA520,color:#F5F5F0
    style PROC fill:#2D2D2D,stroke:#FF3333,color:#F5F5F0
    style FS fill:#2D2D2D,stroke:#DAA520,color:#F5F5F0
Proto Definitions

Envd APIs are defined in Protocol Buffers: spec/process/process.proto for process management and spec/filesystem/filesystem.proto for file operations. Both use Connect RPC (a modern gRPC-compatible protocol) served via the chi HTTP router.
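Connect RPC routes each request by a service/method path over HTTP. This toy dispatcher conveys the shape of envd's two services; the paths and handler bodies are illustrative inventions, not the real definitions from the proto files:

```python
# Toy dispatcher modeling how envd exposes Process and Filesystem
# services behind Connect RPC-style paths. Paths and handlers are
# invented for illustration; the real ones are generated from the
# proto specs.

processes = {}   # pid -> command
files = {}       # path -> contents

def start_process(req):
    pid = len(processes) + 1
    processes[pid] = req["cmd"]
    return {"pid": pid}

def write_file(req):
    files[req["path"]] = req["data"]
    return {"ok": True}

def read_file(req):
    return {"data": files[req["path"]]}

ROUTES = {
    "/process.Process/Start": start_process,
    "/filesystem.Filesystem/Write": write_file,
    "/filesystem.Filesystem/Read": read_file,
}

def dispatch(path: str, req: dict) -> dict:
    return ROUTES[path](req)   # the chi router does this over HTTP

dispatch("/filesystem.Filesystem/Write", {"path": "/tmp/a", "data": "hi"})
print(dispatch("/filesystem.Filesystem/Read", {"path": "/tmp/a"}))  # {'data': 'hi'}
```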

07

Sandbox Templates

Templates define the base environment for sandboxes: pre-installed packages, system configuration, and filesystem layout. Custom templates are built from Dockerfiles and stored as rootfs images in cloud storage buckets.

Template Build & Deploy Pipeline
graph LR
    subgraph Author["Developer"]
        DF["Dockerfile<br/>(custom env)"]
        CLITPL["e2b CLI<br/>template build"]
    end

    subgraph Build["Build Node"]
        DOCKER["Docker Build<br/>(image creation)"]
        CONVERT["Image to Rootfs<br/>Conversion"]
        SNAP["Snapshot<br/>Creation"]
    end

    subgraph Store["Cloud Storage"]
        BUCKET["GCS / S3 Bucket<br/>(template storage)"]
    end

    subgraph Runtime["Orchestrator Nodes"]
        CACHE["Local Template<br/>Cache"]
        MOUNT["NBD Mount<br/>(block device)"]
        FCBOOT["Firecracker<br/>Boot"]
    end

    DF --> CLITPL
    CLITPL --> DOCKER
    DOCKER --> CONVERT
    CONVERT --> SNAP
    SNAP --> BUCKET
    BUCKET --> CACHE
    CACHE --> MOUNT
    MOUNT --> FCBOOT

    style DF fill:#1A472A,stroke:#33CC66,color:#F5F5F0
    style CLITPL fill:#1A472A,stroke:#33CC66,color:#F5F5F0
    style DOCKER fill:#0F2D1A,stroke:#FFBF00,color:#F5F5F0
    style CONVERT fill:#0F2D1A,stroke:#FFBF00,color:#F5F5F0
    style SNAP fill:#0F2D1A,stroke:#FFBF00,color:#F5F5F0
    style BUCKET fill:#0F2D1A,stroke:#DAA520,color:#F5F5F0
    style CACHE fill:#1A472A,stroke:#B87333,color:#F5F5F0
    style MOUNT fill:#1A472A,stroke:#B87333,color:#F5F5F0
    style FCBOOT fill:#2D2D2D,stroke:#FF3333,color:#F5F5F0

Base Template

Default sandbox environment with Ubuntu, Python, Node.js, and common system tools. Created during cluster setup via make prep-cluster.

templates/base/ in e2b-dev/E2B

Custom Templates

Built from standard Dockerfiles. Install any packages, configure services, set environment variables. The CLI converts Docker images to Firecracker-compatible rootfs snapshots.

e2b template build

Template Caching

Orchestrator nodes cache templates locally from cloud storage. The NFS cache cleaner utility manages cache eviction to prevent disk exhaustion on compute nodes.

cmd/clean-nfs-cache/
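Cache eviction on a compute node can be pictured as an LRU keyed by template ID and bounded by a disk budget. This sketch is a model of that policy only; the budget, sizes, and template names are invented, and the real cleaner's policy may differ:

```python
from collections import OrderedDict

# LRU sketch of an orchestrator-node template cache: templates are
# fetched from GCS/S3 on a miss and evicted least-recently-used when
# the disk budget is exceeded. Budget and sizes are invented.

class TemplateCache:
    def __init__(self, budget_bytes: int):
        self.budget = budget_bytes
        self.entries = OrderedDict()                   # template id -> size

    def get(self, template_id: str, size_bytes: int) -> str:
        if template_id in self.entries:
            self.entries.move_to_end(template_id)      # refresh recency
            return "hit"
        self.entries[template_id] = size_bytes          # simulate download
        while sum(self.entries.values()) > self.budget:
            self.entries.popitem(last=False)            # evict LRU entry
        return "miss"

cache = TemplateCache(budget_bytes=100)
cache.get("base", 60)
cache.get("custom-ml", 60)          # evicts "base" to stay under budget
print(cache.get("base", 60))        # miss: had to be re-fetched
```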
08

SDKs & CLI

E2B provides official SDKs for JavaScript/TypeScript and Python, plus a CLI for template management. The Code Interpreter SDK adds Jupyter kernel integration on top of the base sandbox SDK.
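A typical Code Interpreter flow, based on the Python package described in this section. It needs an `E2B_API_KEY` and network access, so it is wrapped in a function rather than executed here, and the exact method signatures should be checked against the current SDK docs rather than taken from this sketch:

```python
# Sketch of Code Interpreter SDK usage (e2b-code-interpreter on PyPI).
# Requires E2B_API_KEY and network access; signatures are approximate.

def run_demo():
    from e2b_code_interpreter import Sandbox

    with Sandbox() as sandbox:                     # boots a microVM (~150ms)
        execution = sandbox.run_code("1 + 1")      # runs in a Jupyter kernel
        print(execution.text)
        sandbox.files.write("/home/user/note.txt", "hello")
        print(sandbox.files.read("/home/user/note.txt"))
```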

SDK Ecosystem & API Surface
graph TD
    subgraph CoreSDK["Core Sandbox SDK"]
        JSCORE["@e2b/sdk<br/>(JS/TS)"]
        PYCORE["e2b<br/>(Python)"]
    end

    subgraph CodeInterp["Code Interpreter SDK"]
        JSCI["@e2b/code-interpreter<br/>(JS/TS)"]
        PYCI["e2b-code-interpreter<br/>(Python)"]
    end

    subgraph Connect["Connect SDK"]
        PYCONN["e2b-connect<br/>(Python)"]
    end

    subgraph CLITool["CLI"]
        CLICMD["e2b CLI<br/>(template mgmt)"]
    end

    subgraph Operations["SDK Operations"]
        SBCREATE["Sandbox.create()"]
        SBCONNECT["Sandbox.connect()"]
        SBLIST["Sandbox.list()"]
        RUNCODE["sandbox.runCode()"]
        FILESYSTEM["sandbox.files.*"]
        PROCESS["sandbox.commands.*"]
        TIMEOUT["sandbox.setTimeout()"]
        KILL["sandbox.kill()"]
    end

    JSCI --> JSCORE
    PYCI --> PYCORE
    JSCORE --> Operations
    PYCORE --> Operations
    PYCONN --> PYCORE

    style JSCORE fill:#1A472A,stroke:#33CC66,color:#F5F5F0
    style PYCORE fill:#1A472A,stroke:#33CC66,color:#F5F5F0
    style JSCI fill:#0F2D1A,stroke:#4169E1,color:#F5F5F0
    style PYCI fill:#0F2D1A,stroke:#4169E1,color:#F5F5F0
    style PYCONN fill:#0F2D1A,stroke:#9370DB,color:#F5F5F0
    style CLICMD fill:#1A472A,stroke:#FFBF00,color:#F5F5F0
    style SBCREATE fill:#0F2D1A,stroke:#B87333,color:#F5F5F0
    style SBCONNECT fill:#0F2D1A,stroke:#B87333,color:#F5F5F0
    style SBLIST fill:#0F2D1A,stroke:#B87333,color:#F5F5F0
    style RUNCODE fill:#0F2D1A,stroke:#B87333,color:#F5F5F0
    style FILESYSTEM fill:#0F2D1A,stroke:#DAA520,color:#F5F5F0
    style PROCESS fill:#0F2D1A,stroke:#D4956A,color:#F5F5F0
    style TIMEOUT fill:#0F2D1A,stroke:#D4956A,color:#F5F5F0
    style KILL fill:#0F2D1A,stroke:#FF3333,color:#F5F5F0

SDK Packages

| Package               | Language | Purpose                             | Registry |
|-----------------------|----------|-------------------------------------|----------|
| @e2b/code-interpreter | JS/TS    | Code execution with Jupyter kernels | npm      |
| e2b-code-interpreter  | Python   | Code execution with Jupyter kernels | PyPI     |
| @e2b/sdk              | JS/TS    | Core sandbox management             | npm      |
| e2b                   | Python   | Core sandbox management             | PyPI     |
| e2b-connect-python    | Python   | Direct VM connection                | PyPI     |
| e2b CLI               | Node.js  | Template build, list, delete        | npm      |
09

Networking

E2B's networking layer connects isolated microVMs to the outside world while maintaining strict per-sandbox isolation. The orchestrator manages Linux tap devices, iptables rules, and netlink configuration for each VM.

Network Architecture
graph TD
    subgraph Internet["Internet"]
        EXT["External Traffic"]
    end

    subgraph EdgeNet["Edge"]
        LB["Load Balancer"]
        CPROXY["Client Proxy<br/>(Consul SD)"]
    end

    subgraph HostNet["Orchestrator Host"]
        IPTABLES["iptables Rules<br/>(per-VM routing)"]
        NETLINK["netlink<br/>(interface mgmt)"]

        subgraph TAP1["TAP Device 1"]
            VM1NET["VM 1 Network<br/>(isolated)"]
        end

        subgraph TAP2["TAP Device 2"]
            VM2NET["VM 2 Network<br/>(isolated)"]
        end
    end

    subgraph Discovery["Service Discovery"]
        CONSUL["Consul<br/>(orchestrator registry)"]
        REDIS2["Redis<br/>(routing state)"]
    end

    EXT --> LB
    LB --> CPROXY
    CPROXY --> CONSUL
    CONSUL --> IPTABLES
    CPROXY --> REDIS2
    IPTABLES --> TAP1
    IPTABLES --> TAP2
    NETLINK --> TAP1
    NETLINK --> TAP2

    style EXT fill:#0F2D1A,stroke:#D4956A,color:#F5F5F0
    style LB fill:#1A472A,stroke:#D4956A,color:#F5F5F0
    style CPROXY fill:#1A472A,stroke:#D4956A,color:#F5F5F0
    style IPTABLES fill:#2D2D2D,stroke:#FF3333,color:#F5F5F0
    style NETLINK fill:#2D2D2D,stroke:#B87333,color:#F5F5F0
    style VM1NET fill:#0F2D1A,stroke:#33CC66,color:#F5F5F0
    style VM2NET fill:#0F2D1A,stroke:#33CC66,color:#F5F5F0
    style CONSUL fill:#1A472A,stroke:#9370DB,color:#F5F5F0
    style REDIS2 fill:#0F2D1A,stroke:#FF3333,color:#F5F5F0
10

Observability

E2B uses the OpenTelemetry standard for traces, metrics, and logs across all services. Data flows through an OTEL collector to the Grafana stack (Loki for logs, Tempo for traces, Mimir for metrics).
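The trace side of this pipeline can be approximated with nested timed spans. This stdlib-only sketch conveys only the span/parent/duration relationship that each service exports; it is not the OTEL SDK, and the span names are invented:

```python
import time
from contextlib import contextmanager

# Stdlib-only approximation of tracing: nested spans record name,
# parent, and duration, like the trace data each service exports.
# Span names are invented for illustration.

spans = []
_stack = []

@contextmanager
def span(name: str):
    parent = _stack[-1] if _stack else None
    _stack.append(name)
    start = time.perf_counter()
    try:
        yield
    finally:
        _stack.pop()
        spans.append({"name": name, "parent": parent,
                      "ms": (time.perf_counter() - start) * 1000})

with span("POST /sandboxes"):          # API server handler
    with span("orchestrator.Create"):  # downstream gRPC call
        time.sleep(0.01)               # stand-in for real work

print([(s["name"], s["parent"]) for s in spans])
# [('orchestrator.Create', 'POST /sandboxes'), ('POST /sandboxes', None)]
```

Spans finish innermost-first, which is why the child appears before its parent in the exported list; a real exporter links them by trace and span IDs instead of order.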

Observability Pipeline
graph LR
    subgraph Services["E2B Services"]
        SAPI["API Server"]
        SORCH["Orchestrator"]
        SENVD["Envd"]
        SPROXY["Client Proxy"]
    end

    subgraph Collection["Collection"]
        OTELC["OTEL Collector<br/>(packages/otel-collector/)"]
    end

    subgraph Storage2["Grafana Stack"]
        LOKI["Loki<br/>(logs)"]
        TEMPO["Tempo<br/>(traces)"]
        MIMIR["Mimir<br/>(metrics)"]
        GRAFANA["Grafana<br/>(dashboards)"]
    end

    subgraph Analytics["Analytics"]
        CH2["ClickHouse<br/>(usage analytics)"]
        PH["PostHog<br/>(product analytics)"]
    end

    SAPI --> OTELC
    SORCH --> OTELC
    SENVD --> OTELC
    SPROXY --> OTELC
    OTELC --> LOKI
    OTELC --> TEMPO
    OTELC --> MIMIR
    LOKI --> GRAFANA
    TEMPO --> GRAFANA
    MIMIR --> GRAFANA
    SAPI --> CH2
    SAPI --> PH

    style SAPI fill:#0F2D1A,stroke:#4169E1,color:#F5F5F0
    style SORCH fill:#0F2D1A,stroke:#B87333,color:#F5F5F0
    style SENVD fill:#0F2D1A,stroke:#33CC66,color:#F5F5F0
    style SPROXY fill:#0F2D1A,stroke:#D4956A,color:#F5F5F0
    style OTELC fill:#1A472A,stroke:#9370DB,color:#F5F5F0
    style LOKI fill:#0F2D1A,stroke:#9370DB,color:#F5F5F0
    style TEMPO fill:#0F2D1A,stroke:#9370DB,color:#F5F5F0
    style MIMIR fill:#0F2D1A,stroke:#9370DB,color:#F5F5F0
    style GRAFANA fill:#1A472A,stroke:#9370DB,color:#F5F5F0
    style CH2 fill:#0F2D1A,stroke:#DAA520,color:#F5F5F0
    style PH fill:#0F2D1A,stroke:#DAA520,color:#F5F5F0

OpenTelemetry

All services export traces, metrics, and structured logs via the shared telemetry package (packages/shared/pkg/telemetry/). Uses Zap logger with OTEL integration.

packages/shared/pkg/telemetry/

Profiling

The API server exposes pprof endpoints at /debug/pprof/ for CPU, memory, goroutine, and mutex profiling in development and staging environments.

Go pprof

ClickHouse

Column-oriented analytics database tracks sandbox usage, execution metrics, and billing data. Runs on dedicated node pool in production deployments.

Dedicated node pool
11

Infrastructure & Deployment

E2B infrastructure is deployed via Terraform and orchestrated with HashiCorp Nomad. The platform supports self-hosting on GCP (production-ready) and AWS (beta). Each deployment provisions multiple node pools for different workload types.

Cloud Deployment Architecture (GCP / AWS)
graph TD
    subgraph Terraform["Terraform IaC"]
        TF["iac/provider-gcp/<br/>iac/provider-aws/"]
    end

    subgraph ControlPool["Control Server Pool"]
        NOMAD["Nomad Server<br/>(3x t3.medium)"]
        CONSULSVR["Consul Server"]
    end

    subgraph APIPool["API Pool"]
        APINODE["API Server"]
        INGRESS["Ingress / Proxy"]
        CPROXY2["Client Proxy"]
        OTELNODE["OTEL Collector"]
        LOKINODE["Loki / Logs"]
    end

    subgraph ClientPool["Client Pool (Compute)"]
        ORCHNODE["Orchestrator<br/>(m8i.4xlarge)"]
        FCVMS["Firecracker VMs"]
    end

    subgraph BuildPool["Build Pool"]
        TMGR["Template Manager<br/>(m8i.2xlarge)"]
    end

    subgraph CHPool["ClickHouse Pool"]
        CHNODE["ClickHouse<br/>(t3.xlarge)"]
    end

    subgraph External2["External Services"]
        PGEXT["PostgreSQL<br/>(Supabase)"]
        REDISEXT["Redis / ElastiCache"]
        CFEXT["Cloudflare<br/>(DNS + TLS)"]
        GCSEXT["GCS / S3<br/>(template storage)"]
    end

    TF --> ControlPool
    TF --> APIPool
    TF --> ClientPool
    TF --> BuildPool
    TF --> CHPool
    NOMAD --> APIPool
    NOMAD --> ClientPool
    NOMAD --> BuildPool
    ORCHNODE --> FCVMS
    APINODE --> PGEXT
    APINODE --> REDISEXT
    CPROXY2 --> CONSULSVR
    TMGR --> GCSEXT

    style TF fill:#1A472A,stroke:#FFBF00,color:#F5F5F0
    style NOMAD fill:#0F2D1A,stroke:#9370DB,color:#F5F5F0
    style CONSULSVR fill:#0F2D1A,stroke:#9370DB,color:#F5F5F0
    style APINODE fill:#0F2D1A,stroke:#4169E1,color:#F5F5F0
    style INGRESS fill:#0F2D1A,stroke:#D4956A,color:#F5F5F0
    style CPROXY2 fill:#0F2D1A,stroke:#D4956A,color:#F5F5F0
    style OTELNODE fill:#0F2D1A,stroke:#9370DB,color:#F5F5F0
    style LOKINODE fill:#0F2D1A,stroke:#9370DB,color:#F5F5F0
    style ORCHNODE fill:#2D2D2D,stroke:#B87333,color:#F5F5F0
    style FCVMS fill:#2D2D2D,stroke:#FF3333,color:#F5F5F0
    style TMGR fill:#0F2D1A,stroke:#FFBF00,color:#F5F5F0
    style CHNODE fill:#0F2D1A,stroke:#DAA520,color:#F5F5F0
    style PGEXT fill:#1A472A,stroke:#DAA520,color:#F5F5F0
    style REDISEXT fill:#1A472A,stroke:#FF3333,color:#F5F5F0
    style CFEXT fill:#1A472A,stroke:#D4956A,color:#F5F5F0
    style GCSEXT fill:#1A472A,stroke:#DAA520,color:#F5F5F0

Technology Stack

| Layer             | Technology            | Purpose                             | Status   |
|-------------------|-----------------------|-------------------------------------|----------|
| Language          | Go 1.25               | All backend services                | Active   |
| Virtualization    | Firecracker + KVM     | MicroVM isolation                   | Active   |
| Orchestration     | HashiCorp Nomad       | Service scheduling & deployment     | Active   |
| Service Discovery | HashiCorp Consul      | Orchestrator registration & routing | Active   |
| IaC               | Terraform v1.5.7      | Infrastructure provisioning         | Active   |
| Primary DB        | PostgreSQL (Supabase) | Users, teams, templates, sandboxes  | Active   |
| Cache             | Redis                 | Routing state, auth tokens, caching | Active   |
| Analytics DB      | ClickHouse            | Usage analytics & billing           | Active   |
| API Framework     | Gin (Go)              | REST API server                     | Active   |
| RPC               | gRPC + Connect RPC    | Inter-service & in-VM communication | Active   |
| Feature Flags     | LaunchDarkly          | Gradual rollouts                    | Optional |
| Cloud (Primary)   | GCP                   | Production hosting                  | Active   |
| Cloud (Beta)      | AWS                   | Alternative hosting                 | Beta     |
| Cloud (Planned)   | Azure                 | Future support                      | Planned  |
Self-Hosting

E2B supports full self-hosting. The infrastructure is provisioned via Terraform, with Packer building machine images for node pools. Key prerequisites: Cloudflare (DNS + TLS), PostgreSQL (Supabase DB), and bare-metal or nested-virt-capable instances for the compute pool. See self-host.md in the infra repo.

12

Acronym Reference

API: Application Programming Interface
CLI: Command-Line Interface
DNS: Domain Name System
E2B: Environments to Bots (Code Sandbox Platform)
GCS: Google Cloud Storage
gRPC: Google Remote Procedure Call
IaC: Infrastructure as Code
JWT: JSON Web Token
KVM: Kernel-based Virtual Machine
NBD: Network Block Device
NFS: Network File System
OTEL: OpenTelemetry
RPC: Remote Procedure Call
S3: Simple Storage Service (AWS)
SD: Service Discovery
SDK: Software Development Kit
TLS: Transport Layer Security
VMM: Virtual Machine Monitor