Interactive architecture map of Cloudflare's global edge network, serverless compute platform, and storage services — compiled from publicly available sources.
Cloudflare operates a homogeneous global edge network where every server in every data center runs every service simultaneously. Higher-level products are composed from lower-level primitives — D1 is built on Durable Objects, R2 metadata uses Durable Objects, Pages Functions compile to Workers.
graph TD
subgraph Edge["Homogeneous Edge Network"]
AN["Anycast Routing
(BGP + SR-MPLS)"]
end
subgraph Runtime["Compute Layer"]
WK["Workers Runtime
(V8 Isolates)"]
DO["Durable Objects
(Actor Model)"]
end
subgraph Storage["Storage Layer"]
KV["Workers KV
(Edge Cache)"]
R2["R2 Object Storage
(Zero Egress)"]
D1["D1 Database
(SQLite)"]
end
subgraph Apps["Application Layer"]
PG["Pages
(Static + Functions)"]
end
AN --> WK
WK --> DO
WK --> KV
WK --> R2
DO --> D1
DO --> R2
KV --> R2
PG --> WK
style AN fill:#1a0030,stroke:#6366f1,color:#e9e0ff
style WK fill:#1a0030,stroke:#f59e0b,color:#e9e0ff
style DO fill:#1a0030,stroke:#f59e0b,color:#e9e0ff
style KV fill:#1a0030,stroke:#14b8a6,color:#e9e0ff
style R2 fill:#1a0030,stroke:#14b8a6,color:#e9e0ff
style D1 fill:#1a0030,stroke:#14b8a6,color:#e9e0ff
style PG fill:#1a0030,stroke:#a78bfa,color:#e9e0ff
Cloudflare builds higher-level services on lower-level ones: D1 is SQLite inside a Durable Object. R2 metadata is stored in Durable Objects. Pages Functions compile to Workers. KV overflows large objects to R2. Durable Objects are specialized Workers with persistent storage. Everything runs on the homogeneous anycast edge.
Every server in every data center is homogeneous — each runs every Cloudflare service simultaneously, so a request never has to hop between specialized machines (no service chaining). Anycast routing via BGP directs each request to the nearest data center by network distance.
graph LR
subgraph Users["Global Users"]
U1["User
(Americas)"]
U2["User
(Europe)"]
U3["User
(Asia)"]
end
subgraph BGP["Anycast BGP"]
IP["Same IP
Advertised
Globally"]
end
subgraph DCs["Edge Data Centers"]
DC1["DC Americas
(all services)"]
DC2["DC Europe
(all services)"]
DC3["DC Asia
(all services)"]
end
subgraph Backbone["Private Backbone"]
FIBER["Dark Fiber +
DWDM + SR-MPLS"]
end
U1 --> IP
U2 --> IP
U3 --> IP
IP --> DC1
IP --> DC2
IP --> DC3
DC1 <--> FIBER
DC2 <--> FIBER
DC3 <--> FIBER
style U1 fill:#22143d,stroke:#7c6faa,color:#e9e0ff
style U2 fill:#22143d,stroke:#7c6faa,color:#e9e0ff
style U3 fill:#22143d,stroke:#7c6faa,color:#e9e0ff
style IP fill:#1a0030,stroke:#6366f1,color:#e9e0ff
style DC1 fill:#1a0030,stroke:#22c55e,color:#e9e0ff
style DC2 fill:#1a0030,stroke:#22c55e,color:#e9e0ff
style DC3 fill:#1a0030,stroke:#22c55e,color:#e9e0ff
style FIBER fill:#1a0030,stroke:#f59e0b,color:#e9e0ff
Standard internet routing protocol. Multiple data centers advertise the same IPs — routers pick the nearest by network distance.
Segment Routing with MPLS for predetermined forwarding paths through label-switched tunnels. No intermediate route lookups needed.
Dense Wavelength Division Multiplexing enables multiple simultaneous data streams on different light wavelengths over dark fiber.
Automatic network self-healing that detects and avoids degraded paths in real time. Backbone capacity grew 500%+ since 2021.
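The anycast mechanism above can be sketched in a few lines: every data center advertises the same IP, and each router independently picks the advertisement with the lowest network distance. This is a conceptual model only — the data-center names and hop metrics are illustrative, and real BGP best-path selection weighs many more attributes than path length.

```typescript
// Sketch: anycast resolution to the nearest data center.
// All DCs advertise the same IP; a router picks the shortest path.
// Names and metrics are illustrative, not real Cloudflare data.

interface Advertisement {
  dc: string;          // data center advertising the shared anycast IP
  pathLength: number;  // BGP-style metric: shorter path wins
}

function selectRoute(ads: Advertisement[]): Advertisement {
  // BGP best-path selection, reduced to "shortest path wins"
  return ads.reduce((best, ad) => (ad.pathLength < best.pathLength ? ad : best));
}

// The same anycast IP, as seen from a router in Europe:
const seenFromEurope: Advertisement[] = [
  { dc: "DC Americas", pathLength: 4 },
  { dc: "DC Europe", pathLength: 1 },
  { dc: "DC Asia", pathLength: 5 },
];

console.log(selectRoute(seenFromEurope).dc); // "DC Europe"
```

Because every router runs this selection independently, no central component decides where a request lands — users in each region naturally converge on their closest data center.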
The Workers runtime uses the V8 JavaScript engine but runs code in isolates instead of containers or VMs. A single runtime instance hosts hundreds of isolates with complete memory isolation; an isolate starts ~100x faster and uses an order of magnitude less memory than a container.
graph TD
subgraph Server["Edge Server Process"]
V8["V8 Runtime Instance"]
subgraph Isolates["Concurrent Isolates"]
I1["Isolate A
(Worker 1)"]
I2["Isolate B
(Worker 2)"]
I3["Isolate C
(Worker 3)"]
I4["Isolate D
(Worker 1)"]
end
end
REQ["Incoming
Request"] --> FETCH["fetch() handler"]
FETCH --> V8
V8 --> I1
V8 --> I2
V8 --> I3
V8 --> I4
subgraph Bindings["Platform Bindings"]
BKV["KV"]
BR2["R2"]
BD1["D1"]
BDO["Durable
Objects"]
end
I1 --> BKV
I2 --> BR2
I3 --> BD1
I4 --> BDO
style V8 fill:#1a0030,stroke:#f59e0b,color:#e9e0ff
style I1 fill:#22143d,stroke:#eab308,color:#e9e0ff
style I2 fill:#22143d,stroke:#eab308,color:#e9e0ff
style I3 fill:#22143d,stroke:#eab308,color:#e9e0ff
style I4 fill:#22143d,stroke:#eab308,color:#e9e0ff
style REQ fill:#1a0030,stroke:#6366f1,color:#e9e0ff
style FETCH fill:#1a0030,stroke:#6366f1,color:#e9e0ff
style BKV fill:#1a0030,stroke:#14b8a6,color:#e9e0ff
style BR2 fill:#1a0030,stroke:#14b8a6,color:#e9e0ff
style BD1 fill:#1a0030,stroke:#14b8a6,color:#e9e0ff
style BDO fill:#1a0030,stroke:#14b8a6,color:#e9e0ff
| Property | Description |
|---|---|
| Startup | ~100x faster than Node.js container processes. No traditional cold starts. |
| Memory | Order of magnitude less memory than containers. Hundreds of isolates per process. |
| Isolation | V8 prevents code from accessing memory outside its boundary, even within the same OS process. |
| Lifecycle | Not permanent. May be evicted for resource limits, suspected sandbox escapes, or inactivity. |
| Concurrency | Single-threaded event loop with cooperative multitasking via async operations. |
| APIs | Standard Web APIs (Fetch, Streams, Web Crypto, Cache) plus Cloudflare-specific bindings. |
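The fetch-handler shape in the diagram can be shown with a minimal Worker-style module. Because it uses only Web-standard APIs (`Request`, `Response`, `URL`), the same code can be exercised outside the isolate, e.g. under Node 18+; the routes here are illustrative.

```typescript
// Minimal Worker-style module: an async fetch() handler built only on
// standard Web APIs, the same shape the isolate invokes per request.

const worker = {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/hello") {
      return new Response(JSON.stringify({ greeting: "hello from the edge" }), {
        headers: { "content-type": "application/json" },
      });
    }
    return new Response("not found", { status: 404 });
  },
};

export default worker;

// Exercisable anywhere Request/Response exist (e.g. Node 18+):
worker
  .fetch(new Request("https://example.com/hello"))
  .then((res) => console.log(res.status)); // logs 200
```

In production the runtime adds bindings (KV, R2, D1, Durable Objects) as a second argument to `fetch`; they are deliberately omitted here to keep the sketch self-contained.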
Specialized Workers following the Actor model — each instance is a globally unique, single-threaded actor with its own persistent storage. Ideal for real-time collaboration, chat, game servers, and coordination without distributed consensus.
graph TD
subgraph Clients["Distributed Requests"]
C1["Client A
(US-East)"]
C2["Client B
(EU-West)"]
C3["Client C
(AP-South)"]
end
subgraph Edge["Edge Workers"]
W1["Worker
(US-East)"]
W2["Worker
(EU-West)"]
W3["Worker
(AP-South)"]
end
subgraph DO["Durable Object Instance"]
ACTOR["Single-Threaded
Actor"]
GATE["Input/Output
Gates"]
STORE["Persistent Storage
(SQLite or KV)"]
end
C1 --> W1
C2 --> W2
C3 --> W3
W1 --> ACTOR
W2 --> ACTOR
W3 --> ACTOR
ACTOR --> GATE
GATE --> STORE
style C1 fill:#22143d,stroke:#7c6faa,color:#e9e0ff
style C2 fill:#22143d,stroke:#7c6faa,color:#e9e0ff
style C3 fill:#22143d,stroke:#7c6faa,color:#e9e0ff
style W1 fill:#1a0030,stroke:#f59e0b,color:#e9e0ff
style W2 fill:#1a0030,stroke:#f59e0b,color:#e9e0ff
style W3 fill:#1a0030,stroke:#f59e0b,color:#e9e0ff
style ACTOR fill:#1a0030,stroke:#ef4444,color:#e9e0ff
style GATE fill:#1a0030,stroke:#ef4444,color:#e9e0ff
style STORE fill:#1a0030,stroke:#14b8a6,color:#e9e0ff
No concurrent execution. Async/await allows interleaved I/O, but the input gate holds new events while a storage operation is in flight, so application code never observes interleaved reads and writes — no data races.
Each object ID maps to exactly one instance running in one location. Auto-provisions geographically close to where first requested.
SQLite backend (transactional; per-object storage limits have grown from 1 GB to 10 GB) or the legacy key-value API. Storage is private to each object.
Objects stay active during requests, then hibernate. WebSocket Hibernation API eliminates billing during idle connections.
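The single-threaded actor discipline can be modeled as a promise queue that runs one message handler at a time. This is a conceptual sketch of the input-gate idea, not Cloudflare's implementation: even a read-await-write sequence is safe because no other handler runs in between.

```typescript
// Sketch: Durable-Object-style serialization modeled as a message queue.
// Each handler is chained behind the previous one, so handlers never
// interleave: no locks, no lost updates.

class ActorModel {
  private tail: Promise<unknown> = Promise.resolve();
  private count = 0; // the actor's private state

  enqueue<T>(handler: () => Promise<T>): Promise<T> {
    const result = this.tail.then(handler);
    this.tail = result.catch(() => undefined); // keep the chain alive on errors
    return result;
  }

  increment(): Promise<number> {
    return this.enqueue(async () => {
      const before = this.count;                    // read
      await new Promise((r) => setTimeout(r, 1));   // simulated storage I/O
      this.count = before + 1;                      // write: nothing interleaved
      return this.count;
    });
  }
}

const actor = new ActorModel();
// 10 "concurrent" requests from different edge locations:
Promise.all(Array.from({ length: 10 }, () => actor.increment()))
  .then((results) => console.log(Math.max(...results))); // 10: no lost updates
```

Without the queue, the read-await-write pattern above would race and lose increments; with it, correctness falls out of the execution model rather than from explicit locking.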
S3-compatible object storage with zero egress fees. Built on a four-layer architecture using Durable Objects for strongly consistent metadata and Cloudflare's CDN for tiered read caching.
graph TD
subgraph Access["Access Methods"]
BIND["Workers Binding
(in-process)"]
S3["S3-Compatible
API"]
REST["REST API
(Dashboard/CLI)"]
end
subgraph Gateway["Layer 1: R2 Gateway"]
GW["Edge Workers
(Auth + S3 Translation)"]
end
subgraph Meta["Layer 2: Metadata Service"]
MDO["Durable Objects
(Keys, Checksums, Versions)"]
end
subgraph Cache["Layer 3: Tiered Read Cache"]
CDN["CDN Cache
Infrastructure"]
end
subgraph Persist["Layer 4: Distributed Storage"]
ENC["Encrypted +
Erasure-Coded
Regional Storage"]
end
BIND --> GW
S3 --> GW
REST --> GW
GW --> MDO
GW --> CDN
CDN --> ENC
MDO --> ENC
style BIND fill:#22143d,stroke:#f59e0b,color:#e9e0ff
style S3 fill:#22143d,stroke:#6366f1,color:#e9e0ff
style REST fill:#22143d,stroke:#6366f1,color:#e9e0ff
style GW fill:#1a0030,stroke:#14b8a6,color:#e9e0ff
style MDO fill:#1a0030,stroke:#ef4444,color:#e9e0ff
style CDN fill:#1a0030,stroke:#3b82f6,color:#e9e0ff
style ENC fill:#1a0030,stroke:#22c55e,color:#e9e0ff
Request arrives at R2 Gateway for authentication. Metadata Service provides an encryption key. Gateway determines the storage cluster within the bucket's designated region. Encrypted data is written and replicated within that region. Objects are invisible to reads until the metadata commit completes.
Request arrives at R2 Gateway for authentication. Metadata Service returns object metadata. Tiered read cache is checked first. On cache miss, the request falls through to distributed storage in the object's region.
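The write and read paths described above can be sketched with mocked layers. The Maps stand in for the metadata Durable Objects, the tiered cache, and regional storage; the only behaviors preserved are the ones the text names — visibility gated on the metadata commit, and cache-then-storage fallthrough on reads.

```typescript
// Sketch of the R2 write/read paths with mocked layers.
// Layer names mirror the diagram; none of this is real R2 internals.

type ObjectMeta = { key: string; checksum: string };

const metadata = new Map<string, ObjectMeta>();      // Layer 2: metadata DOs
const tieredCache = new Map<string, Uint8Array>();   // Layer 3: CDN cache
const regionalStore = new Map<string, Uint8Array>(); // Layer 4: storage

function putObject(key: string, data: Uint8Array): void {
  // Write path: persist data first; the object only becomes visible
  // to reads once the metadata commit lands.
  regionalStore.set(key, data);
  metadata.set(key, { key, checksum: String(data.length) });
}

function getObject(key: string): Uint8Array | undefined {
  const meta = metadata.get(key);
  if (!meta) return undefined;          // no metadata commit -> invisible
  const cached = tieredCache.get(key);
  if (cached) return cached;            // cache hit: served from the edge
  const data = regionalStore.get(key);  // miss: fall through to storage
  if (data) tieredCache.set(key, data); // populate cache for the next read
  return data;
}

putObject("logo.png", new Uint8Array([1, 2, 3]));
console.log(getObject("logo.png")?.length); // 3 (miss, then cached)
console.log(tieredCache.has("logo.png"));   // true
```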
A global, low-latency key-value store optimized for read-heavy workloads. Eventually consistent — writes propagate globally within ~60 seconds. Rearchitected in 2025 with a hybrid dual-backend design after a major outage.
graph TD
subgraph Workers["Workers Runtime"]
W["Worker
KV Binding"]
end
subgraph Proxy["KV Storage Proxy (KVSP)"]
KVSP["HTTP-to-Binary
Protocol Bridge"]
HASH["Consistent Hashing
(Namespace Striping)"]
end
subgraph Primary["Primary Backend"]
DB["Cloudflare
Distributed DB"]
REP["3-Way
Replication"]
end
subgraph Secondary["Secondary Backend"]
R2S["R2
(Large Objects)"]
end
subgraph Consistency["Consistency Mechanisms"]
WR["Write-Phase
Reconciliation"]
RD["Read-Phase
Detection"]
CR["Background
Crawlers"]
end
W --> KVSP
KVSP --> HASH
HASH --> DB
HASH --> R2S
DB --> REP
WR --> DB
RD --> DB
CR --> DB
style W fill:#1a0030,stroke:#f59e0b,color:#e9e0ff
style KVSP fill:#1a0030,stroke:#6366f1,color:#e9e0ff
style HASH fill:#1a0030,stroke:#6366f1,color:#e9e0ff
style DB fill:#1a0030,stroke:#14b8a6,color:#e9e0ff
style REP fill:#1a0030,stroke:#14b8a6,color:#e9e0ff
style R2S fill:#1a0030,stroke:#22c55e,color:#e9e0ff
style WR fill:#22143d,stroke:#eab308,color:#e9e0ff
style RD fill:#22143d,stroke:#eab308,color:#e9e0ff
style CR fill:#22143d,stroke:#eab308,color:#e9e0ff
| Metric | Old (3rd Party) | New (2025) |
|---|---|---|
| P99 Read Latency | 200 ms | < 5 ms |

| Property | Value |
|---|---|
| Median Object Size | 288 bytes (optimized for small values) |
| Max Value Size | 25 MB (large objects overflow to R2) |
| Consistency | Eventually consistent (~60 s propagation) |
| Deletes | Tombstones with timestamps; both backends must hold the tombstone before removal |
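The tombstone rule in the table can be made concrete: a delete writes a timestamped marker rather than removing the key, and a background crawler only purges once both backends hold the marker. The backend shapes below are illustrative, not the real KV protocol.

```typescript
// Sketch of tombstoned deletes across a dual-backend store.
// Physical removal requires BOTH backends to have seen the delete.

type Entry = { value?: string; tombstone?: number };

class Backend {
  store = new Map<string, Entry>();
  delete(key: string, ts: number): void {
    this.store.set(key, { tombstone: ts }); // mark, don't remove yet
  }
  hasTombstone(key: string): boolean {
    return this.store.get(key)?.tombstone !== undefined;
  }
  purge(key: string): void {
    this.store.delete(key);
  }
}

// Background crawler: purge only when both backends agree.
function reconcile(key: string, primary: Backend, secondary: Backend): boolean {
  if (primary.hasTombstone(key) && secondary.hasTombstone(key)) {
    primary.purge(key);
    secondary.purge(key);
    return true;  // safe to remove everywhere
  }
  return false;   // one side hasn't seen the delete -> keep the tombstone
}

const db = new Backend(); // primary distributed DB
const r2 = new Backend(); // secondary backend for large values
db.delete("user:1", Date.now());
console.log(reconcile("user:1", db, r2)); // false: r2 has no tombstone yet
r2.delete("user:1", Date.now());
console.log(reconcile("user:1", db, r2)); // true: both agree, purged
```

This is why deletes in an eventually consistent system cannot simply remove the key: a backend that never saw the tombstone would "resurrect" the value during reconciliation.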
Managed serverless SQL database built on SQLite running inside Durable Objects. Each D1 database is a single Durable Object in one location — all writes serialize through one actor, providing strong consistency without distributed consensus.
graph LR
subgraph Workers["Edge Workers"]
W1["Worker
(US-West)"]
W2["Worker
(EU-Central)"]
W3["Worker
(AP-East)"]
end
subgraph Replicas["Read Replicas (Edge)"]
RR1["SQLite Replica
(US-West)"]
RR2["SQLite Replica
(EU-Central)"]
RR3["SQLite Replica
(AP-East)"]
end
subgraph Primary["Primary (Single Location)"]
DO["Durable Object"]
SQL["SQLite Primary
(10 GB max)"]
end
W1 -.->|reads| RR1
W2 -.->|reads| RR2
W3 -.->|reads| RR3
W1 -->|writes| DO
W2 -->|writes| DO
W3 -->|writes| DO
DO --> SQL
SQL -->|replication| RR1
SQL -->|replication| RR2
SQL -->|replication| RR3
style W1 fill:#1a0030,stroke:#f59e0b,color:#e9e0ff
style W2 fill:#1a0030,stroke:#f59e0b,color:#e9e0ff
style W3 fill:#1a0030,stroke:#f59e0b,color:#e9e0ff
style RR1 fill:#22143d,stroke:#14b8a6,color:#e9e0ff
style RR2 fill:#22143d,stroke:#14b8a6,color:#e9e0ff
style RR3 fill:#22143d,stroke:#14b8a6,color:#e9e0ff
style DO fill:#1a0030,stroke:#ef4444,color:#e9e0ff
style SQL fill:#1a0030,stroke:#14b8a6,color:#e9e0ff
D1 is designed for horizontal scale-out across many small databases (10 GB max each). Per-user, per-tenant, or per-entity databases rather than one giant shared database. A database processing 10ms queries handles ~100 QPS; 100ms queries yield ~10 QPS due to sequential execution through the single-threaded Durable Object.
Workers access D1 through a binding — no connection strings, no connection pooling, no cold-start connection overhead.
Inherent because all operations run single-threaded through one SQLite instance. No distributed consensus needed.
Built-in with automatic backups and point-in-time restore.
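The throughput arithmetic above is worth making explicit: a single-threaded database executes queries back to back, so maximum QPS is simply the inverse of per-query latency, and aggregate throughput comes from running many small databases in parallel. The tenant count below is illustrative.

```typescript
// Sequential execution: max QPS is the inverse of per-query latency.

function maxQps(queryLatencyMs: number): number {
  return 1000 / queryLatencyMs; // one query at a time, back to back
}

console.log(maxQps(10));  // 100 QPS for 10ms queries
console.log(maxQps(100)); // 10 QPS for 100ms queries

// Scale-out comes from many small databases, not one big one:
const tenantDatabases = 50; // hypothetical per-tenant D1 databases
console.log(tenantDatabases * maxQps(10)); // 5000 QPS aggregate
```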
Cloudflare's DDoS protection operates across OSI layers 3, 4, and 7 using a fully decentralized, autonomous system. Each server decides independently — no centralized consensus needed for mitigation.
graph TD
subgraph Traffic["Incoming Traffic"]
ATK["Attack +
Legitimate Traffic"]
end
subgraph L4["L4 Mitigation (Linux Stack)"]
XDP["L4Drop / XDP
(eBPF at wire speed)"]
IPT["iptables
(L3/L4 rules)"]
JAIL["IP Jails
(L7 attacks at L4)"]
end
subgraph Detect["Detection Daemons"]
DOSD["dosd
(every server)"]
FT["flowtrackd
(flow analysis)"]
GB["Gatebot
(centralized)"]
end
subgraph Clean["Clean Traffic"]
APP["Application
Layer"]
end
ATK --> XDP
XDP --> IPT
IPT --> JAIL
JAIL --> APP
DOSD -->|"signatures"| XDP
DOSD -->|"rules"| IPT
FT -->|"rules"| IPT
GB -->|"cross-edge rules"| IPT
style ATK fill:#1a0030,stroke:#ef4444,color:#e9e0ff
style XDP fill:#1a0030,stroke:#dc2626,color:#e9e0ff
style IPT fill:#1a0030,stroke:#ef4444,color:#e9e0ff
style JAIL fill:#1a0030,stroke:#ef4444,color:#e9e0ff
style DOSD fill:#22143d,stroke:#f59e0b,color:#e9e0ff
style FT fill:#22143d,stroke:#f59e0b,color:#e9e0ff
style GB fill:#22143d,stroke:#6366f1,color:#e9e0ff
style APP fill:#1a0030,stroke:#22c55e,color:#e9e0ff
| Component | Deployment | Coverage |
|---|---|---|
| dosd | Every server, every data center | 98.6% of L3/4 attacks, 81% of L7 attacks. Samples at 81x Gatebot's rate. |
| Gatebot | Centralized (network core) | Large distributed volumetric attacks requiring cross-edge coordination. |
| flowtrackd | Decentralized | Complements dosd and Gatebot with additional flow-level analysis. |
Attack traffic is automatically distributed across 330+ data centers and 405+ Tbps of capacity, preventing any single location from being overwhelmed. Advanced DNS Protection handles fully randomized subdomain attacks (random prefix / DNS water torture).
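The detect-then-drop split between dosd and the L4 data path can be sketched as follows: a daemon turns sampled traffic into a signature, and every server applies it at ingress with no central coordination. The signature and packet shapes are illustrative; the real data path is eBPF/XDP in the kernel, not JavaScript.

```typescript
// Sketch: signature learning (dosd-like) feeding a per-packet drop
// decision (L4Drop-like). Purely conceptual.

interface Packet { srcIp: string; dstPort: number; length: number }
type Signature = (p: Packet) => boolean;

const signatures: Signature[] = [];

// "dosd": derive a rule from an observed flood (same port, tiny payloads)
function learnSignature(samples: Packet[]): void {
  const port = samples[0].dstPort;
  if (samples.every((p) => p.dstPort === port && p.length < 100)) {
    signatures.push((p) => p.dstPort === port && p.length < 100);
  }
}

// "L4Drop": per-packet decision, made independently on every server
function admit(p: Packet): boolean {
  return !signatures.some((sig) => sig(p));
}

learnSignature([
  { srcIp: "203.0.113.7", dstPort: 53, length: 64 },
  { srcIp: "203.0.113.9", dstPort: 53, length: 60 },
]);
console.log(admit({ srcIp: "198.51.100.2", dstPort: 53, length: 64 }));    // false: dropped
console.log(admit({ srcIp: "198.51.100.2", dstPort: 443, length: 1400 })); // true: passes
```

Because the signature is data (not a config pushed from a control plane), each server can keep dropping attack traffic even if it loses contact with the rest of the network.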
Argo Smart Routing optimizes traffic using real-time network intelligence from ~93 million HTTP requests per second. Cloudflare Tunnel provides private, outbound-only connections from origin servers to the edge.
graph LR
subgraph Client["Client"]
USER["User
Request"]
end
subgraph Edge["Cloudflare Edge"]
EDGE["Edge Server"]
ARGO["Argo Smart
Routing"]
SRTE["SR-TE Path
Selection"]
end
subgraph Backbone["Cloudflare Backbone"]
BB["Optimized Path
(Transit + Peering)"]
end
subgraph Origin["Origin Infrastructure"]
CFD["cloudflared
Daemon"]
ORI["Origin Server"]
end
USER --> EDGE
EDGE --> ARGO
ARGO --> SRTE
SRTE --> BB
BB --> CFD
CFD --> ORI
style USER fill:#22143d,stroke:#7c6faa,color:#e9e0ff
style EDGE fill:#1a0030,stroke:#6366f1,color:#e9e0ff
style ARGO fill:#1a0030,stroke:#22c55e,color:#e9e0ff
style SRTE fill:#1a0030,stroke:#22c55e,color:#e9e0ff
style BB fill:#1a0030,stroke:#f59e0b,color:#e9e0ff
style CFD fill:#1a0030,stroke:#14b8a6,color:#e9e0ff
style ORI fill:#22143d,stroke:#7c6faa,color:#e9e0ff
Runs on top of BGP as an overlay optimization. Analyzes latency, packet loss, and congestion across all network paths in real time.
Explicitly prioritizes backbone paths when they offer the best performance. Uses Segment Routing Traffic Engineering for optimal path selection.
Outbound-only connections from cloudflared daemon to edge — no inbound ports need to be opened. Supports HTTP, WebSocket, TCP, and SSH.
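Smart routing's core idea — score candidate paths on measured latency and loss, and prefer the best one even when default BGP would choose differently — can be sketched in a few lines. The weighting and the candidate paths are illustrative assumptions, not Argo's actual model.

```typescript
// Sketch: latency/loss-aware path selection, the idea behind smart
// routing overlays. Scoring weights are invented for illustration.

interface PathStats { name: string; latencyMs: number; lossPct: number }

function scorePath(p: PathStats): number {
  // Lower is better; loss is penalized heavily because retransmits
  // multiply effective latency.
  return p.latencyMs + p.lossPct * 50;
}

function pickPath(candidates: PathStats[]): PathStats {
  return candidates.reduce((a, b) => (scorePath(a) <= scorePath(b) ? a : b));
}

const best = pickPath([
  { name: "public transit (BGP default)", latencyMs: 120, lossPct: 2.0 },
  { name: "private backbone (SR-TE)", latencyMs: 95, lossPct: 0.1 },
  { name: "alternate peering", latencyMs: 110, lossPct: 0.5 },
]);
console.log(best.name); // "private backbone (SR-TE)"
```

The point of the overlay is the feedback loop: these measurements are refreshed continuously from live traffic, so the chosen path shifts as congestion moves.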
Cloudflare Pages is a JAMstack deployment platform converging with Workers into a unified developer experience. Pages Functions provide file-based routing and compile to first-class Workers.
graph TD
subgraph Source["Source Control"]
GH["GitHub /
GitLab"]
end
subgraph Build["Workers Builds (CI/CD)"]
CI["Build System
(on Workers)"]
PREV["Preview Deploy
(per PR)"]
PROD["Production
Deploy"]
end
subgraph Runtime["Edge Runtime"]
STATIC["Static Assets
(CDN Edge)"]
FUNC["Pages Functions
(compiled to Workers)"]
end
subgraph Bindings["Platform Bindings"]
BKV["KV"]
BR2["R2"]
BD1["D1"]
BDO["Durable
Objects"]
end
GH -->|push| CI
CI --> PREV
CI --> PROD
PROD --> STATIC
PROD --> FUNC
FUNC --> BKV
FUNC --> BR2
FUNC --> BD1
FUNC --> BDO
style GH fill:#22143d,stroke:#a78bfa,color:#e9e0ff
style CI fill:#1a0030,stroke:#a78bfa,color:#e9e0ff
style PREV fill:#22143d,stroke:#6366f1,color:#e9e0ff
style PROD fill:#1a0030,stroke:#22c55e,color:#e9e0ff
style STATIC fill:#1a0030,stroke:#3b82f6,color:#e9e0ff
style FUNC fill:#1a0030,stroke:#f59e0b,color:#e9e0ff
style BKV fill:#22143d,stroke:#14b8a6,color:#e9e0ff
style BR2 fill:#22143d,stroke:#14b8a6,color:#e9e0ff
style BD1 fill:#22143d,stroke:#14b8a6,color:#e9e0ff
style BDO fill:#22143d,stroke:#14b8a6,color:#e9e0ff
Pages projects can bind to KV, Durable Objects, R2, D1, and other Workers platform resources. Static assets, serverless functions, and storage all in one project. Each project gets a *.pages.dev subdomain with custom domain support via Cloudflare DNS.
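A Pages Function ties file-based routing to the Workers model: a file like `functions/api/greet.ts` maps to the route `/api/greet`, and its exported handler compiles to a Worker. The context type below is reduced to the pieces used here (the file path and the request shape are illustrative); real Pages Functions also receive bindings like KV, R2, and D1 on `context.env`.

```typescript
// Sketch: a Pages Function handler. In a real project this file would
// live at functions/api/greet.ts and serve GET /api/greet.

interface Ctx {
  request: Request;
  params: Record<string, string>;
}

export async function onRequestGet(context: Ctx): Promise<Response> {
  const name = new URL(context.request.url).searchParams.get("name") ?? "world";
  return new Response(JSON.stringify({ greeting: `hello, ${name}` }), {
    headers: { "content-type": "application/json" },
  });
}

// Exercisable anywhere Request/Response exist (e.g. Node 18+):
onRequestGet({
  request: new Request("https://example.pages.dev/api/greet?name=edge"),
  params: {},
}).then(async (res) => console.log(await res.json())); // { greeting: 'hello, edge' }
```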
How a user request traverses the entire Cloudflare stack, from DNS resolution through DDoS mitigation, edge compute, storage access, and origin connectivity.
graph TD
subgraph Request["User Request"]
DNS["Anycast DNS
(nearest DC)"]
end
subgraph Security["DDoS Mitigation"]
L4D["L4Drop / XDP"]
EBPF["eBPF Programs"]
FW["iptables + dosd"]
end
subgraph Edge["Edge Server (Homogeneous)"]
WR["Workers Runtime
(V8 Isolate)"]
end
subgraph Services["Platform Services"]
KVS["KV
(edge cache + central)"]
R2S["R2
(Gateway + DO + Storage)"]
D1S["D1
(replica or primary DO)"]
DOS["Durable Objects
(actor at location)"]
end
subgraph Static["Static"]
PGS["Pages
(edge cache + Functions)"]
end
subgraph Origin["Origin Path"]
ARGOS["Argo Smart Routing"]
TUN["Cloudflare Tunnel"]
end
DNS --> L4D
L4D --> EBPF
EBPF --> FW
FW --> WR
WR --> KVS
WR --> R2S
WR --> D1S
WR --> DOS
WR --> PGS
WR --> ARGOS
ARGOS --> TUN
style DNS fill:#1a0030,stroke:#3b82f6,color:#e9e0ff
style L4D fill:#1a0030,stroke:#ef4444,color:#e9e0ff
style EBPF fill:#1a0030,stroke:#ef4444,color:#e9e0ff
style FW fill:#1a0030,stroke:#ef4444,color:#e9e0ff
style WR fill:#1a0030,stroke:#f59e0b,color:#e9e0ff
style KVS fill:#22143d,stroke:#14b8a6,color:#e9e0ff
style R2S fill:#22143d,stroke:#14b8a6,color:#e9e0ff
style D1S fill:#22143d,stroke:#14b8a6,color:#e9e0ff
style DOS fill:#22143d,stroke:#f59e0b,color:#e9e0ff
style PGS fill:#22143d,stroke:#a78bfa,color:#e9e0ff
style ARGOS fill:#1a0030,stroke:#22c55e,color:#e9e0ff
style TUN fill:#1a0030,stroke:#22c55e,color:#e9e0ff