Architecture Maps

Linux Kernel

The foundation of everything from smartphones to supercomputers: a monolithic kernel with loadable modules that powers every TOP500 supercomputer, the large majority of the world's servers, and billions of Android devices.

Created: 1991 · Language: C + Rust (since 6.1) · License: GPLv2 · Design: Monolithic + Modules
Section 01

Summit View

The Linux kernel is the core of the GNU/Linux operating system. Since Linus Torvalds' first release in 1991, it has grown into the most widely deployed OS kernel in history, running on everything from embedded IoT devices to the world's fastest supercomputers.

~40M Lines of Code · 28,000+ Contributors · 22 Architectures · 7.x In Development
Fig 1.1 — High-Level Layer Model
graph TD
    UA["Userspace Applications"]
    SCI["System Call Interface<br/>(glibc / musl → syscall)"]
    PM["Process Management"]
    MM["Memory Management"]
    VFS["Virtual File System"]
    NET["Networking Stack"]
    DD["Device Drivers"]
    SEC["Security (LSM)"]
    HAL["Hardware Abstraction<br/>(arch/)"]
    HW["Hardware<br/>CPU · RAM · Storage · Network · Peripherals"]
    UA --> SCI
    SCI --> PM
    SCI --> MM
    SCI --> VFS
    SCI --> NET
    SCI --> SEC
    PM --> HAL
    MM --> HAL
    VFS --> DD
    NET --> DD
    DD --> HAL
    HAL --> HW
    style UA fill:#e8d9a8,stroke:#8B6914,color:#2c3e50
    style SCI fill:#d4bc7a,stroke:#6B4F10,color:#2c3e50
    style PM fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style MM fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style VFS fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style NET fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style DD fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
    style SEC fill:#6c3483,stroke:#4a3060,color:#f5f0e0
    style HAL fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
    style HW fill:#8B6914,stroke:#6B4F10,color:#f5f0e0
Section 02

Kernel Layer Model

The kernel can be read as a set of topographic contours: from the high-elevation userspace (Ring 3), down through the syscall treeline, into the lowland kernel subsystems (Ring 0), and finally to the hardware bedrock. Understanding these elevation bands is key to navigating the codebase.

Fig 2.1 — Ring Model (Elevation Contours)
graph TD
    APPS["User Applications<br/>(Ring 3 — Summit)"]
    CLIB["C Library<br/>glibc / musl / bionic"]
    SYSCALL["System Call Interface<br/>≈ 450 syscalls (x86_64)"]
    SCHED["Process Scheduler<br/>EEVDF (6.6+)"]
    MEMMGR["Memory Manager<br/>Buddy + SLUB"]
    VFSL["VFS Layer<br/>Unified FS Interface"]
    NETSTACK["Network Stack<br/>TCP/IP + Netfilter"]
    DEVMODEL["Device Model<br/>Unified bus/driver/device"]
    ARCH["Architecture Code<br/>arch/{x86,arm64,riscv,...}"]
    HWBED["Hardware (Bedrock)"]
    APPS --> CLIB
    CLIB --> SYSCALL
    SYSCALL --> SCHED
    SYSCALL --> MEMMGR
    SYSCALL --> VFSL
    SYSCALL --> NETSTACK
    SCHED --> ARCH
    MEMMGR --> ARCH
    VFSL --> DEVMODEL
    NETSTACK --> DEVMODEL
    DEVMODEL --> ARCH
    ARCH --> HWBED
    style APPS fill:#e8d9a8,stroke:#8B6914,color:#2c3e50
    style CLIB fill:#d4bc7a,stroke:#8B6914,color:#2c3e50
    style SYSCALL fill:#bfa462,stroke:#8B6914,color:#2c3e50
    style SCHED fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style MEMMGR fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style VFSL fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style NETSTACK fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style DEVMODEL fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
    style ARCH fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
    style HWBED fill:#8B6914,stroke:#6B4F10,color:#f5f0e0
KPTI (Kernel Page Table Isolation): Since Meltdown (2018), the kernel maintains separate page tables for user and kernel space. On syscall entry, the CPU switches from the minimal user page tables to the full kernel page tables, adding a small performance cost but closing the speculative execution side channel.
Section 03

Process Management

Process management is the heartbeat of the kernel. The scheduler decides which task runs on which CPU, namespaces provide isolation for containers, and cgroups enforce resource limits. Linux 6.6 replaced the venerable CFS with the EEVDF scheduler for fairer latency distribution.

EEVDF Scheduler

Earliest Eligible Virtual Deadline First — replaced CFS in kernel 6.6. Provides better latency guarantees by assigning virtual deadlines to tasks based on their weight and requested time slice.

Active (6.6+)

task_struct

The central process descriptor (~8KB). Contains PID, state, scheduling info, memory maps, file descriptors, credentials, signal handlers, and pointers to parent/child/sibling tasks.

Namespaces (8 types)

Isolation primitives for containers: mount, UTS, IPC, net, PID, user, cgroup, time. Each namespace gives a process its own view of a global resource.

cgroups v2

Unified hierarchy for resource control. Controllers: cpu, memory, io, pids, cpuset. A single tree mounted at /sys/fs/cgroup replaces the fragmented v1 design.

sched_ext (6.12)

BPF-extensible scheduler class — allows loading custom scheduling policies as BPF programs at runtime without recompiling the kernel. Enables rapid experimentation with scheduling algorithms.

New (6.12+)
Fig 3.1 — Scheduler Architecture
graph TD
    TS["task_struct"]
    CG["cgroups v2<br/>Resource Limits"]
    NS["Namespaces<br/>Isolation"]
    EEVDF["EEVDF Scheduler"]
    SEXT["sched_ext<br/>(BPF Programs)"]
    RQ["Per-CPU Run Queues"]
    CPU["CPU Cores"]
    CTX["Context Switch<br/>switch_to()"]
    TS --> EEVDF
    TS --> SEXT
    CG --> EEVDF
    NS -.->|isolates| TS
    EEVDF --> RQ
    SEXT --> RQ
    RQ --> CTX
    CTX --> CPU
    style TS fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style CG fill:#2980b9,stroke:#1a5276,color:#f5f0e0
    style NS fill:#6c3483,stroke:#4a3060,color:#f5f0e0
    style EEVDF fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
    style SEXT fill:#d4bc7a,stroke:#8B6914,color:#2c3e50
    style RQ fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
    style CPU fill:#8B6914,stroke:#6B4F10,color:#f5f0e0
    style CTX fill:#bfa462,stroke:#8B6914,color:#2c3e50
Section 04

Memory Management

The memory subsystem maps virtual addresses to physical pages, manages page caches for file I/O, and balances memory pressure across NUMA nodes. It uses a layered allocator stack: the buddy system for physical pages, SLUB for kernel objects, and the page cache to unify file and memory semantics.

Virtual Memory

4-level (or 5-level on modern x86_64) page tables translate virtual addresses to physical frames. Each process gets its own mm_struct and page table hierarchy.

Buddy Allocator

Manages physical page frames in power-of-2 blocks (order 0-10). Splits and coalesces buddies to minimize fragmentation. Operates per-zone (DMA, Normal, HighMem).

SLUB Allocator

The default slab allocator for kernel objects. Caches frequently allocated structures (task_struct, inode, dentry) in per-CPU freelists for fast allocation.

Page Cache

Shared with VFS — caches file data in memory, indexed by a radix tree (the XArray interface since 4.20). Read-ahead prefetching and write-back policies are tunable per-BDI.

OOM Killer & THP

When memory is exhausted, the OOM killer selects and terminates processes based on oom_score. Transparent Huge Pages (THP) automatically promote 4K pages to 2M hugepages to reduce TLB misses.

Fig 4.1 — Memory Allocation Flow
graph LR
    PROC["Process"]
    VA["Virtual Address"]
    PT["Page Tables<br/>4/5-level"]
    BUDDY["Buddy Allocator<br/>Physical Pages"]
    SLUB["SLUB<br/>Kernel Objects"]
    PC["Page Cache"]
    VFSC["VFS"]
    SWAP["Swap Subsystem"]
    NUMA["NUMA Balancing"]
    PROC --> VA
    VA --> PT
    PT --> BUDDY
    BUDDY --> NUMA
    SLUB --> BUDDY
    PC <--> VFSC
    PC <--> BUDDY
    BUDDY <--> SWAP
    style PROC fill:#e8d9a8,stroke:#8B6914,color:#2c3e50
    style VA fill:#d4bc7a,stroke:#8B6914,color:#2c3e50
    style PT fill:#bfa462,stroke:#8B6914,color:#2c3e50
    style BUDDY fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style SLUB fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
    style PC fill:#2980b9,stroke:#1a5276,color:#f5f0e0
    style VFSC fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style SWAP fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
    style NUMA fill:#6c3483,stroke:#4a3060,color:#f5f0e0
NUMA Awareness: On multi-socket systems, the kernel tracks which NUMA node each page belongs to and tries to keep memory allocations local to the CPU accessing them. The numactl tool and /proc/<pid>/numa_maps expose this topology.
Section 05

File Systems & VFS

The Virtual File System provides a uniform interface for all filesystems. Through operation tables on four key objects — superblock, inode, dentry, and file — any filesystem can plug into the kernel without userspace knowing the difference.

VFS Abstraction

Four operation tables: super_operations, inode_operations, dentry_operations, file_operations. Every filesystem implements these to integrate with the kernel.

Disk Filesystems

ext4 (default, journaled), btrfs (CoW, snapshots, RAID), XFS (high-perf, large files), F2FS (flash-optimized), bcachefs (new in 6.7, CoW with caching).

Virtual Filesystems

procfs (/proc — process info), sysfs (/sys — device model), debugfs (kernel debug), tmpfs (RAM-backed), overlayfs (container image layers).

Block Layer & I/O Schedulers

The block layer mediates between filesystems and storage drivers. Schedulers: BFQ (fairness), mq-deadline (latency), Kyber (fast SSDs), none (NVMe direct).

Fig 5.1 — VFS & Filesystem Tree
graph TD
    VFS["VFS Layer<br/>superblock · inode · dentry · file"]
    EXT4["ext4"]
    BTRFS["btrfs"]
    XFS["XFS"]
    BCACHE["bcachefs"]
    PROC["procfs"]
    SYSFS["sysfs"]
    TMPFS["tmpfs"]
    OVL["overlayfs"]
    BLK["Block Layer"]
    IOSCHED["I/O Schedulers<br/>BFQ · mq-deadline · Kyber"]
    STORAGE["Storage Drivers<br/>NVMe · SCSI · virtio-blk"]
    VFS --> EXT4
    VFS --> BTRFS
    VFS --> XFS
    VFS --> BCACHE
    VFS --> PROC
    VFS --> SYSFS
    VFS --> TMPFS
    VFS --> OVL
    EXT4 --> BLK
    BTRFS --> BLK
    XFS --> BLK
    BCACHE --> BLK
    BLK --> IOSCHED
    IOSCHED --> STORAGE
    style VFS fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style EXT4 fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
    style BTRFS fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
    style XFS fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
    style BCACHE fill:#d4bc7a,stroke:#8B6914,color:#2c3e50
    style PROC fill:#2980b9,stroke:#1a5276,color:#f5f0e0
    style SYSFS fill:#2980b9,stroke:#1a5276,color:#f5f0e0
    style TMPFS fill:#5dade2,stroke:#2980b9,color:#2c3e50
    style OVL fill:#6c3483,stroke:#4a3060,color:#f5f0e0
    style BLK fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
    style IOSCHED fill:#bfa462,stroke:#8B6914,color:#2c3e50
    style STORAGE fill:#8B6914,stroke:#6B4F10,color:#f5f0e0
Section 06

Networking Stack

The Linux networking stack implements the full TCP/IP suite with extensive hooks for packet filtering, traffic shaping, and programmable packet processing via eBPF. Its flexibility has made Linux the foundation for modern software-defined networking.

Socket API

BSD-compatible socket interface. Supports AF_INET (IPv4), AF_INET6 (IPv6), AF_UNIX (local), AF_NETLINK (kernel comms), AF_XDP (fast path).

TCP/IP Stack

Full IPv4/IPv6 dual-stack. Congestion control algorithms: CUBIC (default), BBR (Google, bandwidth-based), DCTCP (data centers). Pluggable via setsockopt.

Netfilter / nftables

The packet filtering framework. nftables (successor to iptables) provides a unified rule engine with sets, maps, and concatenations for firewalling and NAT.

eBPF / XDP

Programmable packet processing in the kernel. XDP runs BPF programs at the driver level before sk_buff allocation — enabling line-rate packet filtering, load balancing, and DDoS mitigation.

Rapidly evolving
Fig 6.1 — Packet Flow (Ingress)
graph TD
    NIC["NIC Hardware"]
    DRV["Device Driver"]
    XDP["XDP Hook<br/>(eBPF)"]
    TC["tc ingress<br/>(eBPF)"]
    NFP["Netfilter<br/>PREROUTING"]
    ROUTE["Routing Decision"]
    FWD["Forward Path"]
    NFI["Netfilter INPUT"]
    SOCK["Socket Layer"]
    APP["Application"]
    NFOUT["Netfilter<br/>POSTROUTING"]
    NIC --> DRV
    DRV --> XDP
    XDP --> TC
    TC --> NFP
    NFP --> ROUTE
    ROUTE -->|local| NFI
    ROUTE -->|forward| FWD
    NFI --> SOCK
    SOCK --> APP
    FWD --> NFOUT
    style NIC fill:#8B6914,stroke:#6B4F10,color:#f5f0e0
    style DRV fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
    style XDP fill:#c0392b,stroke:#922b21,color:#f5f0e0
    style TC fill:#c0392b,stroke:#922b21,color:#f5f0e0
    style NFP fill:#6c3483,stroke:#4a3060,color:#f5f0e0
    style ROUTE fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style FWD fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
    style NFI fill:#6c3483,stroke:#4a3060,color:#f5f0e0
    style SOCK fill:#2980b9,stroke:#1a5276,color:#f5f0e0
    style APP fill:#e8d9a8,stroke:#8B6914,color:#2c3e50
    style NFOUT fill:#6c3483,stroke:#4a3060,color:#f5f0e0
eBPF hooks span the entire stack: Beyond XDP and tc, BPF programs attach at socket filters, cgroup hooks, kprobes, tracepoints, and LSM hooks. This makes eBPF the kernel's general-purpose extension mechanism.
Section 07

Device Drivers & Hardware

Driver code constitutes roughly 60% of the kernel source. The unified device model (device, driver, bus) provides a consistent registration and discovery framework. Hardware description comes from Device Tree (ARM/RISC-V) or ACPI (x86).

Unified Device Model

Three core abstractions: struct device, struct device_driver, struct bus_type. The bus matches devices to drivers; probing initializes the hardware.

Bus Types

PCI/PCIe (GPUs, NICs, NVMe), USB (peripherals), I2C/SPI (sensors, embedded), Platform (SoC-integrated devices with no discoverable bus).

Hardware Description

Device Tree (DT) — compiled .dtb blobs for ARM/RISC-V. ACPI — firmware tables for x86 describing topology, power, and devices. Both feed into the device model.

DMA & IOMMU

DMA allows devices to read/write memory directly. The IOMMU (Intel VT-d, ARM SMMU) translates device addresses, enabling isolation for VMs and protecting against DMA attacks.

Interrupt Handling

Top half: Minimal ISR in interrupt context. Bottom half: Deferred work via softirq (network/block), tasklet (legacy), or workqueue (process context, sleepable).

Fig 7.1 — Driver Registration & Discovery
graph LR
    HWE["Hardware Event"]
    BUS["Bus Enumeration<br/>PCI / USB / DT / ACPI"]
    MATCH["bus.match()"]
    PROBE["driver.probe()"]
    SYSFS["sysfs Registration<br/>/sys/devices/"]
    UEVENT["kobject_uevent"]
    UDEV["udev (userspace)<br/>Device node creation"]
    DMA["DMA Engine"]
    IOMMU["IOMMU<br/>VT-d / SMMU"]
    HWE --> BUS
    BUS --> MATCH
    MATCH --> PROBE
    PROBE --> SYSFS
    SYSFS --> UEVENT
    UEVENT --> UDEV
    PROBE --> DMA
    DMA --> IOMMU
    style HWE fill:#8B6914,stroke:#6B4F10,color:#f5f0e0
    style BUS fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
    style MATCH fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style PROBE fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
    style SYSFS fill:#2980b9,stroke:#1a5276,color:#f5f0e0
    style UEVENT fill:#d4bc7a,stroke:#8B6914,color:#2c3e50
    style UDEV fill:#e8d9a8,stroke:#8B6914,color:#2c3e50
    style DMA fill:#6c3483,stroke:#4a3060,color:#f5f0e0
    style IOMMU fill:#c0392b,stroke:#922b21,color:#f5f0e0
Security note: DMA-capable devices have unrestricted memory access without an IOMMU. This is why IOMMU isolation is critical for VM pass-through (VFIO) and why Thunderbolt security levels exist to gate external device DMA.
Section 08

Security Subsystems

Security in the Linux kernel is layered and modular. The Linux Security Modules (LSM) framework provides roughly 250 hooks woven throughout the kernel, allowing multiple security policies to coexist. Alongside LSM, capabilities, seccomp-bpf, and integrity subsystems form a defense-in-depth architecture.

LSM Framework

Roughly 250 hooks at security-critical points (file access, socket ops, task creation). Exclusive LSMs: SELinux (RHEL/Fedora), AppArmor (Ubuntu/Debian), Smack (Tizen), TOMOYO. Stackable: Yama, LoadPin, SafeSetID, IPE, Landlock, BPF LSM.

POSIX Capabilities

Fine-grained privilege splitting instead of all-or-nothing root. Key capabilities: CAP_NET_ADMIN (network config), CAP_SYS_ADMIN (catch-all admin), CAP_DAC_OVERRIDE (bypass file permissions), CAP_NET_RAW (raw sockets).

seccomp-bpf

Syscall filtering via BPF programs. Restricts which syscalls a process can invoke. Used by Chromium, Docker, Flatpak, and systemd services. Actions: ALLOW, KILL, TRAP, ERRNO, LOG, NOTIFY (user-space decision).

Landlock LSM

Unprivileged application sandboxing (since 5.13). Programs restrict their own filesystem and network access without root. Stackable — layers on top of SELinux/AppArmor.

Since 5.13

BPF LSM

Programmable security policies via eBPF (since 5.7). Attach BPF programs to any LSM hook for custom MAC policies, audit logging, or runtime security monitoring without kernel recompilation.

Since 5.7

IMA / EVM

IMA (Integrity Measurement Architecture): measures file hashes, extends TPM PCRs, enforces appraisal policies. EVM (Extended Verification Module): protects security xattrs with HMAC/digital signatures. Together they enable trusted boot and runtime integrity.

Fig 8.1 — Security Hook Chain
graph TD
    SYSCALL["Syscall Entry"]
    SECCOMP["seccomp-bpf<br/>Syscall Filter"]
    CAPS["Capabilities<br/>Check"]
    RAC["Resource Access<br/>Check"]
    LSM["LSM Hook<br/>Framework"]
    SELINUX["SELinux<br/>MAC Policy"]
    APPARMOR["AppArmor<br/>Path-based MAC"]
    SMACK["Smack<br/>Label-based"]
    LANDLOCK["Landlock<br/>Unprivileged Sandbox"]
    BPFLSM["BPF LSM<br/>Programmable"]
    IMA["IMA / EVM<br/>Integrity"]
    ALLOW["Access<br/>Granted"]
    SYSCALL --> SECCOMP
    SECCOMP -->|allowed| RAC
    RAC --> CAPS
    CAPS --> LSM
    LSM --> SELINUX
    LSM --> APPARMOR
    LSM --> SMACK
    LSM --> LANDLOCK
    LSM --> BPFLSM
    LSM --> IMA
    SELINUX --> ALLOW
    APPARMOR --> ALLOW
    LANDLOCK --> ALLOW
    style SYSCALL fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
    style SECCOMP fill:#c0392b,stroke:#922b21,color:#f5f0e0
    style CAPS fill:#bfa462,stroke:#8B6914,color:#2c3e50
    style RAC fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
    style LSM fill:#6c3483,stroke:#4a3060,color:#f5f0e0
    style SELINUX fill:#6c3483,stroke:#4a3060,color:#f5f0e0
    style APPARMOR fill:#6c3483,stroke:#4a3060,color:#f5f0e0
    style SMACK fill:#6c3483,stroke:#4a3060,color:#f5f0e0
    style LANDLOCK fill:#6c3483,stroke:#4a3060,color:#f5f0e0
    style BPFLSM fill:#c0392b,stroke:#922b21,color:#f5f0e0
    style IMA fill:#6c3483,stroke:#4a3060,color:#f5f0e0
    style ALLOW fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
Exclusive vs Stackable: Only one exclusive LSM (SELinux or AppArmor or Smack) can be the primary MAC provider. Stackable LSMs (Yama, Landlock, BPF LSM, LoadPin, SafeSetID, IPE) layer on top, each adding restrictions. The kernel boot parameter lsm= controls the order.
Section 09

Build System & Configuration

The kernel build system transforms a hierarchical configuration into a monolithic image plus optional loadable modules. Kconfig manages ~20,000 options; Kbuild orchestrates per-directory Makefiles. Recent additions include Rust language support and Clang/LLVM as a first-class compiler.

Kconfig

Hierarchical configuration system. Interactive frontends: make menuconfig (ncurses), make xconfig (Qt). Output: .config file with CONFIG_* symbols. Supports dependencies, selects, and choice groups.

Kbuild

Make-based build system with per-directory Makefiles using obj-y (built-in) and obj-m (module) syntax. Handles header dependencies, cross-compilation, and incremental rebuilds.
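A hypothetical drivers/foo/Makefile fragment showing the obj-y / obj-m convention (all names invented for illustration):

```make
# Linked into vmlinux unconditionally:
obj-y              += core.o

# Built in (=y), built as foo.ko (=m), or skipped (=n / unset),
# depending on the CONFIG_FOO symbol Kconfig wrote to .config:
obj-$(CONFIG_FOO)  += foo.o

# A multi-object module: foo.ko is linked from these objects.
foo-objs           := foo_main.o foo_hw.o
```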

Kernel Modules (.ko)

Loadable at runtime via modprobe/insmod. DKMS rebuilds out-of-tree modules on kernel updates. Module signing (CONFIG_MODULE_SIG) ensures only trusted modules load.

Rust Support

Available since 6.1, promoted to core language Dec 2025. Requires LLVM/Clang toolchain. bindgen generates Rust bindings from C headers. First Rust drivers: Nova (GPU, 6.15), PHY, block.

Since 6.1

Compiler Support

GCC (primary, widest arch support), Clang/LLVM (required for Rust, CFI, and some sanitizers). Cross-compilation supported for all 22 architectures via ARCH= and CROSS_COMPILE=.

Fig 9.1 — Build Pipeline
flowchart LR
    KCONFIG["Kconfig<br/>.config"]
    KBUILD["Kbuild<br/>Make"]
    CSRC["C Sources"]
    RSRC["Rust Sources"]
    BINDGEN["bindgen<br/>C→Rust FFI"]
    GCC["GCC / Clang"]
    RUSTC["rustc"]
    VMLINUX["vmlinux"]
    MODULES["Modules<br/>.ko files"]
    BZIMAGE["bzImage<br/>(x86)"]
    IMAGE["Image<br/>(ARM64)"]
    MODPROBE["modprobe"]
    KERNEL["Running Kernel"]
    KCONFIG --> KBUILD
    KBUILD --> GCC
    CSRC --> GCC
    RSRC --> BINDGEN
    BINDGEN --> RUSTC
    RUSTC --> VMLINUX
    GCC --> VMLINUX
    GCC --> MODULES
    VMLINUX --> BZIMAGE
    VMLINUX --> IMAGE
    MODULES --> MODPROBE
    MODPROBE --> KERNEL
    style KCONFIG fill:#bfa462,stroke:#8B6914,color:#2c3e50
    style KBUILD fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
    style CSRC fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style RSRC fill:#c0392b,stroke:#922b21,color:#f5f0e0
    style BINDGEN fill:#c0392b,stroke:#922b21,color:#f5f0e0
    style GCC fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
    style RUSTC fill:#c0392b,stroke:#922b21,color:#f5f0e0
    style VMLINUX fill:#2980b9,stroke:#1a5276,color:#f5f0e0
    style MODULES fill:#6c3483,stroke:#4a3060,color:#f5f0e0
    style BZIMAGE fill:#8B6914,stroke:#6B4F10,color:#f5f0e0
    style IMAGE fill:#8B6914,stroke:#6B4F10,color:#f5f0e0
    style MODPROBE fill:#d4bc7a,stroke:#8B6914,color:#2c3e50
    style KERNEL fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
Module signing: When CONFIG_MODULE_SIG_FORCE is enabled, the kernel refuses to load any module without a valid signature. This is critical for Secure Boot chains and is the default on RHEL, Fedora, and Ubuntu kernels.
Section 10

Kernel Interconnection Map

A full subsystem-to-subsystem view of the Linux kernel, showing how the major components connect from userspace applications down to hardware. This map illustrates the kernel’s layered architecture and the cross-cutting role of eBPF and security hooks.

Fig 10.1 — Full Subsystem Interconnection
graph TD
    subgraph USERSPACE["Userspace"]
        APPS["Applications"]
        LIBS["Libraries<br/>glibc · musl"]
        CRUNTIME["Container Runtimes<br/>runc · crun"]
    end
    SYSCALLS["Syscall Interface<br/>~450 syscalls"]
    subgraph CORE["Core Kernel"]
        SCHED["Scheduler<br/>EEVDF · sched_ext"]
        MM["Memory Manager<br/>Buddy · SLUB · THP"]
        VFS["VFS<br/>inode · dentry"]
        NETSTACK["Network Stack<br/>TCP/IP · sockets"]
    end
    subgraph INFRA["Infrastructure"]
        CGROUPS["cgroups v2"]
        NS["Namespaces"]
        IOURING["io_uring"]
        EBPF["eBPF<br/>Hooks everywhere"]
    end
    subgraph SECURITY["Security"]
        LSMF["LSM Framework"]
        SECCOMPF["seccomp-bpf"]
    end
    PCACHE["Page Cache"]
    BLK["Block Layer"]
    NFILT["Netfilter<br/>nftables"]
    DEVMODEL["Device Model<br/>sysfs · udev"]
    DRIVERS["Drivers<br/>GPU · NIC · Storage"]
    HW["Hardware"]
    APPS --> LIBS
    LIBS --> SYSCALLS
    CRUNTIME --> SYSCALLS
    SYSCALLS --> SECCOMPF
    SECCOMPF --> SCHED
    SYSCALLS --> VFS
    SYSCALLS --> NETSTACK
    SCHED --> CGROUPS
    CGROUPS --> NS
    MM --> PCACHE
    PCACHE --> VFS
    VFS --> BLK
    VFS --> IOURING
    NETSTACK --> NFILT
    NETSTACK --> EBPF
    BLK --> DRIVERS
    NFILT --> EBPF
    LSMF --> VFS
    LSMF --> NETSTACK
    DEVMODEL --> DRIVERS
    DRIVERS --> HW
    style APPS fill:#e8d9a8,stroke:#8B6914,color:#2c3e50
    style LIBS fill:#d4bc7a,stroke:#8B6914,color:#2c3e50
    style CRUNTIME fill:#d4bc7a,stroke:#8B6914,color:#2c3e50
    style SYSCALLS fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
    style SCHED fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style MM fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style VFS fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style NETSTACK fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
    style CGROUPS fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
    style NS fill:#2980b9,stroke:#1a5276,color:#f5f0e0
    style IOURING fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
    style EBPF fill:#c0392b,stroke:#922b21,color:#f5f0e0
    style LSMF fill:#6c3483,stroke:#4a3060,color:#f5f0e0
    style SECCOMPF fill:#6c3483,stroke:#4a3060,color:#f5f0e0
    style PCACHE fill:#5dade2,stroke:#2980b9,color:#2c3e50
    style BLK fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
    style NFILT fill:#6c3483,stroke:#4a3060,color:#f5f0e0
    style DEVMODEL fill:#bfa462,stroke:#8B6914,color:#2c3e50
    style DRIVERS fill:#8B6914,stroke:#6B4F10,color:#f5f0e0
    style HW fill:#2c3e50,stroke:#1a252f,color:#f5f0e0

Modernization Tracker

Feature Version Status Notes
Rust support 6.1+ Active Promoted to core language Dec 2025
EEVDF scheduler 6.6 Stable Replaced CFS as default scheduler
sched_ext (BPF scheduler) 6.12 Active Full BPF scheduling class
PREEMPT_RT 6.12 Merged 20 years of patches finally mainline
io_uring 5.1+ Stable ~60 operation types, mature async I/O
Bcachefs 6.7 Stabilizing New CoW filesystem with caching
Landlock LSM 5.13+ Stable Unprivileged sandboxing
BPF LSM 5.7+ Stable Programmable security hooks
KCFI (kernel CFI) 6.10+ Active Clang/LLVM only, control flow integrity
Nova GPU driver (Rust) 6.15 Active First Rust DRM driver (NVIDIA GSP)
Y2038 fixes 5.6+ Complete Kernel-side done, userspace ongoing

Acronym Reference

ACPI — Advanced Configuration & Power Interface
BFQ — Budget Fair Queueing
BPF — Berkeley Packet Filter
BTF — BPF Type Format
CFI — Control Flow Integrity
CFS — Completely Fair Scheduler
CoW — Copy on Write
CPU — Central Processing Unit
DKMS — Dynamic Kernel Module Support
DMA — Direct Memory Access
DRM — Direct Rendering Manager
DT — Device Tree
EAS — Energy Aware Scheduling
eBPF — extended Berkeley Packet Filter
EEVDF — Earliest Eligible Virtual Deadline First
EVM — Extended Verification Module
ext4 — Fourth Extended Filesystem
F2FS — Flash-Friendly File System
FUSE — Filesystem in Userspace
GCC — GNU Compiler Collection
GPLv2 — GNU General Public License v2
GSP — GPU System Processor
HID — Human Interface Device
HMAC — Hash-based Message Auth Code
I2C — Inter-Integrated Circuit
IMA — Integrity Measurement Architecture
IOMMU — I/O Memory Management Unit
IPC — Inter-Process Communication
IRQ — Interrupt Request
KASLR — Kernel Address Space Layout Randomization
KCFI — Kernel Control Flow Integrity
KPTI — Kernel Page Table Isolation
KVM — Kernel-based Virtual Machine
LKML — Linux Kernel Mailing List
LLVM — Low Level Virtual Machine (historical expansion; now a proper name)
LRU — Least Recently Used
LSM — Linux Security Modules
MAC — Mandatory Access Control
MSI — Message Signaled Interrupts
NAT — Network Address Translation
NIC — Network Interface Card
NUMA — Non-Uniform Memory Access
NVMe — Non-Volatile Memory Express
OOM — Out of Memory
PAC — Pointer Authentication Codes
PCIe — PCI Express
PELT — Per-Entity Load Tracking
PID — Process Identifier
PSI — Pressure Stall Information
RTNL — Routing Netlink
SCSI — Small Computer System Interface
SLUB — Unqueued Slab Allocator
SMMU — System Memory Management Unit
SMP — Symmetric Multiprocessing
SPI — Serial Peripheral Interface
SR-IOV — Single Root I/O Virtualization
THP — Transparent Huge Pages
TLB — Translation Lookaside Buffer
TPM — Trusted Platform Module
UEFI — Unified Extensible Firmware Interface
URB — USB Request Block
USB — Universal Serial Bus
VFS — Virtual File System
VFIO — Virtual Function I/O
XDP — eXpress Data Path
XFS — X File System