The foundation of everything from smartphones to supercomputers — a monolithic kernel with modular capabilities powering the vast majority of the world's servers, cloud infrastructure, and mobile devices.
The Linux kernel is the core of every Linux-based operating system. Since Linus Torvalds' first release in 1991, it has grown into the most widely deployed OS kernel in history, running on everything from embedded IoT devices to the world's fastest supercomputers.
graph TD
UA["Userspace Applications"]
SCI["System Call Interface
(glibc / musl → syscall)"]
PM["Process Management"]
MM["Memory Management"]
VFS["Virtual File System"]
NET["Networking Stack"]
DD["Device Drivers"]
SEC["Security (LSM)"]
HAL["Hardware Abstraction
(arch/)"]
HW["Hardware
CPU · RAM · Storage · Network · Peripherals"]
UA --> SCI
SCI --> PM
SCI --> MM
SCI --> VFS
SCI --> NET
SCI --> SEC
PM --> HAL
MM --> HAL
VFS --> DD
NET --> DD
DD --> HAL
HAL --> HW
style UA fill:#e8d9a8,stroke:#8B6914,color:#2c3e50
style SCI fill:#d4bc7a,stroke:#6B4F10,color:#2c3e50
style PM fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style MM fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style VFS fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style NET fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style DD fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
style SEC fill:#6c3483,stroke:#4a3060,color:#f5f0e0
style HAL fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
style HW fill:#8B6914,stroke:#6B4F10,color:#f5f0e0
This map reads the kernel as topographic contours — from the high-elevation userspace (Ring 3), down through the syscall treeline, into the lowland kernel subsystems (Ring 0), and finally to the hardware bedrock. Understanding these elevation bands is key to navigating the codebase.
graph TD
APPS["User Applications
(Ring 3 — Summit)"]
CLIB["C Library
glibc / musl / bionic"]
SYSCALL["System Call Interface
≈ 450 syscalls (x86_64)"]
SCHED["Process Scheduler
EEVDF (6.6+)"]
MEMMGR["Memory Manager
Buddy + SLUB"]
VFSL["VFS Layer
Unified FS Interface"]
NETSTACK["Network Stack
TCP/IP + Netfilter"]
DEVMODEL["Device Model
Unified bus/driver/device"]
ARCH["Architecture Code
arch/{x86,arm64,riscv,...}"]
HWBED["Hardware (Bedrock)"]
APPS --> CLIB
CLIB --> SYSCALL
SYSCALL --> SCHED
SYSCALL --> MEMMGR
SYSCALL --> VFSL
SYSCALL --> NETSTACK
SCHED --> ARCH
MEMMGR --> ARCH
VFSL --> DEVMODEL
NETSTACK --> DEVMODEL
DEVMODEL --> ARCH
ARCH --> HWBED
style APPS fill:#e8d9a8,stroke:#8B6914,color:#2c3e50
style CLIB fill:#d4bc7a,stroke:#8B6914,color:#2c3e50
style SYSCALL fill:#bfa462,stroke:#8B6914,color:#2c3e50
style SCHED fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style MEMMGR fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style VFSL fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style NETSTACK fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style DEVMODEL fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
style ARCH fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
style HWBED fill:#8B6914,stroke:#6B4F10,color:#f5f0e0
Process management is the heartbeat of the kernel. The scheduler decides which task runs on which CPU, namespaces provide isolation for containers, and cgroups enforce resource limits. Linux 6.6 replaced the venerable CFS with the EEVDF scheduler for fairer latency distribution.
Earliest Eligible Virtual Deadline First — replaced CFS in kernel 6.6. Provides better latency guarantees by assigning virtual deadlines to tasks based on their weight and requested time slice.
Active (6.6+)
The central process descriptor (~8KB). Contains PID, state, scheduling info, memory maps, file descriptors, credentials, signal handlers, and pointers to parent/child/sibling tasks.
Isolation primitives for containers: mount, UTS, IPC, net, PID, user, cgroup, time. Each namespace gives a process its own view of a global resource.
Unified hierarchy for resource control. Controllers: cpu, memory, io, pids, cpuset. A single tree mounted at /sys/fs/cgroup replaces the fragmented v1 design.
BPF-extensible scheduler class — allows loading custom scheduling policies as BPF programs at runtime without recompiling the kernel. Enables rapid experimentation with scheduling algorithms.
New (6.12+)
graph TD
TS["task_struct"]
CG["cgroups v2
Resource Limits"]
NS["Namespaces
Isolation"]
EEVDF["EEVDF Scheduler"]
SEXT["sched_ext
(BPF Programs)"]
RQ["Per-CPU Run Queues"]
CPU["CPU Cores"]
CTX["Context Switch
switch_to()"]
TS --> EEVDF
TS --> SEXT
CG --> EEVDF
NS -.->|isolates| TS
EEVDF --> RQ
SEXT --> RQ
RQ --> CTX
CTX --> CPU
style TS fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style CG fill:#2980b9,stroke:#1a5276,color:#f5f0e0
style NS fill:#6c3483,stroke:#4a3060,color:#f5f0e0
style EEVDF fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
style SEXT fill:#d4bc7a,stroke:#8B6914,color:#2c3e50
style RQ fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
style CPU fill:#8B6914,stroke:#6B4F10,color:#f5f0e0
style CTX fill:#bfa462,stroke:#8B6914,color:#2c3e50
The memory subsystem maps virtual addresses to physical pages, manages page caches for file I/O, and balances memory pressure across NUMA nodes. It uses a layered allocator stack: the buddy system for physical pages, SLUB for kernel objects, and the page cache to unify file and memory semantics.
4-level (or 5-level on modern x86_64) page tables translate virtual addresses to physical frames. Each process gets its own mm_struct and page table hierarchy.
Manages physical page frames in power-of-2 blocks (order 0-10). Splits and coalesces buddies to minimize fragmentation. Operates per-zone (DMA, Normal, HighMem).
The default slab allocator for kernel objects. Caches frequently allocated structures (task_struct, inode, dentry) in per-CPU freelists for fast allocation.
Shared with VFS — caches file data in memory using a radix tree (XArray since 5.x). Read-ahead prefetching and write-back policies are tunable per-BDI.
When memory is exhausted, the OOM killer selects and terminates processes based on oom_score. Transparent Huge Pages (THP) automatically promote 4K pages to 2M hugepages to reduce TLB misses.
graph LR
PROC["Process"]
VA["Virtual Address"]
PT["Page Tables
4/5-level"]
BUDDY["Buddy Allocator
Physical Pages"]
SLUB["SLUB
Kernel Objects"]
PC["Page Cache"]
VFSC["VFS"]
SWAP["Swap Subsystem"]
NUMA["NUMA Balancing"]
PROC --> VA
VA --> PT
PT --> BUDDY
BUDDY --> NUMA
SLUB --> BUDDY
PC <--> VFSC
PC <--> BUDDY
BUDDY <--> SWAP
style PROC fill:#e8d9a8,stroke:#8B6914,color:#2c3e50
style VA fill:#d4bc7a,stroke:#8B6914,color:#2c3e50
style PT fill:#bfa462,stroke:#8B6914,color:#2c3e50
style BUDDY fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style SLUB fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
style PC fill:#2980b9,stroke:#1a5276,color:#f5f0e0
style VFSC fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style SWAP fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
style NUMA fill:#6c3483,stroke:#4a3060,color:#f5f0e0
The numactl tool and /proc/&lt;pid&gt;/numa_maps expose this topology to userspace.
The Virtual File System provides a uniform interface for all filesystems. Through operation tables on four key objects — superblock, inode, dentry, and file — any filesystem can plug into the kernel without userspace knowing the difference.
Four operation tables: super_operations, inode_operations, dentry_operations, file_operations. Every filesystem implements these to integrate with the kernel.
ext4 (default, journaled), btrfs (CoW, snapshots, RAID), XFS (high-perf, large files), F2FS (flash-optimized), bcachefs (new in 6.7, CoW with caching).
procfs (/proc — process info), sysfs (/sys — device model), debugfs (kernel debug), tmpfs (RAM-backed), overlayfs (container image layers).
The block layer mediates between filesystems and storage drivers. Schedulers: BFQ (fairness), mq-deadline (latency), Kyber (fast SSDs), none (NVMe direct).
graph TD
VFS["VFS Layer
superblock · inode · dentry · file"]
EXT4["ext4"]
BTRFS["btrfs"]
XFS["XFS"]
BCACHE["bcachefs"]
PROC["procfs"]
SYSFS["sysfs"]
TMPFS["tmpfs"]
OVL["overlayfs"]
BLK["Block Layer"]
IOSCHED["I/O Schedulers
BFQ · mq-deadline · Kyber"]
STORAGE["Storage Drivers
NVMe · SCSI · virtio-blk"]
VFS --> EXT4
VFS --> BTRFS
VFS --> XFS
VFS --> BCACHE
VFS --> PROC
VFS --> SYSFS
VFS --> TMPFS
VFS --> OVL
EXT4 --> BLK
BTRFS --> BLK
XFS --> BLK
BCACHE --> BLK
BLK --> IOSCHED
IOSCHED --> STORAGE
style VFS fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style EXT4 fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
style BTRFS fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
style XFS fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
style BCACHE fill:#d4bc7a,stroke:#8B6914,color:#2c3e50
style PROC fill:#2980b9,stroke:#1a5276,color:#f5f0e0
style SYSFS fill:#2980b9,stroke:#1a5276,color:#f5f0e0
style TMPFS fill:#5dade2,stroke:#2980b9,color:#2c3e50
style OVL fill:#6c3483,stroke:#4a3060,color:#f5f0e0
style BLK fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
style IOSCHED fill:#bfa462,stroke:#8B6914,color:#2c3e50
style STORAGE fill:#8B6914,stroke:#6B4F10,color:#f5f0e0
The Linux networking stack implements the full TCP/IP suite with extensive hooks for packet filtering, traffic shaping, and programmable packet processing via eBPF. Its flexibility has made Linux the foundation for modern software-defined networking.
BSD-compatible socket interface. Supports AF_INET (IPv4), AF_INET6 (IPv6), AF_UNIX (local), AF_NETLINK (kernel comms), AF_XDP (fast path).
Full IPv4/IPv6 dual-stack. Congestion control algorithms: CUBIC (default), BBR (Google, bandwidth-based), DCTCP (data centers). Pluggable via setsockopt.
The packet filtering framework. nftables (successor to iptables) provides a unified rule engine with sets, maps, and concatenations for firewalling and NAT.
Programmable packet processing in the kernel. XDP runs BPF programs at the driver level before sk_buff allocation — enabling line-rate packet filtering, load balancing, and DDoS mitigation.
graph TD
NIC["NIC Hardware"]
DRV["Device Driver"]
XDP["XDP Hook
(eBPF)"]
TC["tc ingress
(eBPF)"]
NFP["Netfilter
PREROUTING"]
ROUTE["Routing Decision"]
FWD["Forward Path"]
NFI["Netfilter INPUT"]
SOCK["Socket Layer"]
APP["Application"]
NFOUT["Netfilter
POSTROUTING"]
NIC --> DRV
DRV --> XDP
XDP --> TC
TC --> NFP
NFP --> ROUTE
ROUTE -->|local| NFI
ROUTE -->|forward| FWD
NFI --> SOCK
SOCK --> APP
FWD --> NFOUT
style NIC fill:#8B6914,stroke:#6B4F10,color:#f5f0e0
style DRV fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
style XDP fill:#c0392b,stroke:#922b21,color:#f5f0e0
style TC fill:#c0392b,stroke:#922b21,color:#f5f0e0
style NFP fill:#6c3483,stroke:#4a3060,color:#f5f0e0
style ROUTE fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style FWD fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
style NFI fill:#6c3483,stroke:#4a3060,color:#f5f0e0
style SOCK fill:#2980b9,stroke:#1a5276,color:#f5f0e0
style APP fill:#e8d9a8,stroke:#8B6914,color:#2c3e50
style NFOUT fill:#6c3483,stroke:#4a3060,color:#f5f0e0
Driver code constitutes roughly 60% of the kernel source. The unified device model (device, driver, bus) provides a consistent registration and discovery framework. Hardware description comes from Device Tree (ARM/RISC-V) or ACPI (x86).
Three core abstractions: struct device, struct device_driver, struct bus_type. The bus matches devices to drivers; probing initializes the hardware.
PCI/PCIe (GPUs, NICs, NVMe), USB (peripherals), I2C/SPI (sensors, embedded), Platform (SoC-integrated devices with no discoverable bus).
Device Tree (DT) — compiled .dtb blobs for ARM/RISC-V. ACPI — firmware tables for x86 describing topology, power, and devices. Both feed into the device model.
DMA allows devices to read/write memory directly. The IOMMU (Intel VT-d, ARM SMMU) translates device addresses, enabling isolation for VMs and protecting against DMA attacks.
Top half: Minimal ISR in interrupt context. Bottom half: Deferred work via softirq (network/block), tasklet (legacy), or workqueue (process context, sleepable).
graph LR
HWE["Hardware Event"]
BUS["Bus Enumeration
PCI / USB / DT / ACPI"]
MATCH["bus.match()"]
PROBE["driver.probe()"]
SYSFS["sysfs Registration
/sys/devices/"]
UEVENT["kobject_uevent"]
UDEV["udev (userspace)
Device node creation"]
DMA["DMA Engine"]
IOMMU["IOMMU
VT-d / SMMU"]
HWE --> BUS
BUS --> MATCH
MATCH --> PROBE
PROBE --> SYSFS
SYSFS --> UEVENT
UEVENT --> UDEV
PROBE --> DMA
DMA --> IOMMU
style HWE fill:#8B6914,stroke:#6B4F10,color:#f5f0e0
style BUS fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
style MATCH fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style PROBE fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
style SYSFS fill:#2980b9,stroke:#1a5276,color:#f5f0e0
style UEVENT fill:#d4bc7a,stroke:#8B6914,color:#2c3e50
style UDEV fill:#e8d9a8,stroke:#8B6914,color:#2c3e50
style DMA fill:#6c3483,stroke:#4a3060,color:#f5f0e0
style IOMMU fill:#c0392b,stroke:#922b21,color:#f5f0e0
Security in the Linux kernel is layered and modular. The Linux Security Modules (LSM) framework provides roughly 250 hooks woven throughout the kernel, allowing multiple security policies to coexist. Alongside LSM, capabilities, seccomp-bpf, and integrity subsystems form a defense-in-depth architecture.
Roughly 250 hooks at security-critical points (file access, socket ops, task creation). Exclusive LSMs: SELinux (RHEL/Fedora), AppArmor (Ubuntu/Debian), Smack (Tizen), TOMOYO. Stackable: Yama, LoadPin, SafeSetID, IPE, Landlock, BPF LSM.
Fine-grained privilege splitting instead of all-or-nothing root. Key capabilities: CAP_NET_ADMIN (network config), CAP_SYS_ADMIN (catch-all admin), CAP_DAC_OVERRIDE (bypass file permissions), CAP_NET_RAW (raw sockets).
Syscall filtering via BPF programs. Restricts which syscalls a process can invoke. Used by Chromium, Docker, Flatpak, and systemd services. Actions: ALLOW, KILL, TRAP, ERRNO, LOG, NOTIFY (user-space decision).
Unprivileged application sandboxing (since 5.13). Programs restrict their own filesystem and network access without root. Stackable — layers on top of SELinux/AppArmor.
Since 5.13
Programmable security policies via eBPF (since 5.7). Attach BPF programs to any LSM hook for custom MAC policies, audit logging, or runtime security monitoring without kernel recompilation.
Since 5.7
IMA (Integrity Measurement Architecture): measures file hashes, extends TPM PCRs, enforces appraisal policies. EVM (Extended Verification Module): protects security xattrs with HMAC/digital signatures. Together they enable trusted boot and runtime integrity.
graph TD
SYSCALL["Syscall Entry"]
SECCOMP["seccomp-bpf
Syscall Filter"]
CAPS["Capabilities
Check"]
RAC["Resource Access
Check"]
LSM["LSM Hook
Framework"]
SELINUX["SELinux
MAC Policy"]
APPARMOR["AppArmor
Path-based MAC"]
SMACK["Smack
Label-based"]
LANDLOCK["Landlock
Unprivileged Sandbox"]
BPFLSM["BPF LSM
Programmable"]
IMA["IMA / EVM
Integrity"]
ALLOW["Access
Granted"]
SYSCALL --> SECCOMP
SECCOMP -->|allowed| RAC
RAC --> CAPS
CAPS --> LSM
LSM --> SELINUX
LSM --> APPARMOR
LSM --> SMACK
LSM --> LANDLOCK
LSM --> BPFLSM
LSM --> IMA
SELINUX --> ALLOW
APPARMOR --> ALLOW
LANDLOCK --> ALLOW
style SYSCALL fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
style SECCOMP fill:#c0392b,stroke:#922b21,color:#f5f0e0
style CAPS fill:#bfa462,stroke:#8B6914,color:#2c3e50
style RAC fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
style LSM fill:#6c3483,stroke:#4a3060,color:#f5f0e0
style SELINUX fill:#6c3483,stroke:#4a3060,color:#f5f0e0
style APPARMOR fill:#6c3483,stroke:#4a3060,color:#f5f0e0
style SMACK fill:#6c3483,stroke:#4a3060,color:#f5f0e0
style LANDLOCK fill:#6c3483,stroke:#4a3060,color:#f5f0e0
style BPFLSM fill:#c0392b,stroke:#922b21,color:#f5f0e0
style IMA fill:#6c3483,stroke:#4a3060,color:#f5f0e0
style ALLOW fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
The lsm= kernel boot parameter controls which LSMs are enabled and their initialization order.
The kernel build system transforms a hierarchical configuration into a monolithic image plus optional loadable modules. Kconfig manages ~20,000 options; Kbuild orchestrates per-directory Makefiles. Recent additions include Rust language support and Clang/LLVM as a first-class compiler.
Hierarchical configuration system. Interactive frontends: make menuconfig (ncurses), make xconfig (Qt). Output: .config file with CONFIG_* symbols. Supports dependencies, selects, and choice groups.
Make-based build system with per-directory Makefiles using obj-y (built-in) and obj-m (module) syntax. Handles header dependencies, cross-compilation, and incremental rebuilds.
Loadable at runtime via modprobe/insmod. DKMS rebuilds out-of-tree modules on kernel updates. Module signing (CONFIG_MODULE_SIG) ensures only trusted modules load.
Available since 6.1, promoted to core language Dec 2025. Requires LLVM/Clang toolchain. bindgen generates Rust bindings from C headers. First Rust drivers: Nova (GPU, 6.15), PHY, block.
GCC (primary, widest arch support), Clang/LLVM (required for Rust, CFI, and some sanitizers). Cross-compilation supported for all 22 architectures via ARCH= and CROSS_COMPILE=.
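The obj-y / obj-m split is visible in any subdirectory Makefile. A minimal sketch for a hypothetical driver `hello.c` — the file name and CONFIG symbol are illustrative, not from the kernel tree:

```
# Kconfig entry (hypothetical): a prompt, dependencies, and a
# tristate value — y (built into vmlinux), m (module), n (off).
#
#   config HELLO_DRIVER
#           tristate "Example hello driver"
#           depends on SYSFS
#
# Makefile fragment: Kbuild expands the tristate into the right list,
# so the same line yields obj-y (built-in) or obj-m (.ko module).
obj-$(CONFIG_HELLO_DRIVER) += hello.o

# Out-of-tree build against the running kernel's headers:
#   make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
```

This indirection through `$(CONFIG_...)` is what lets one source tree produce radically different kernels from different .config files.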
flowchart LR
KCONFIG["Kconfig
.config"]
KBUILD["Kbuild
Make"]
CSRC["C Sources"]
RSRC["Rust Sources"]
BINDGEN["bindgen
C→Rust FFI"]
GCC["GCC / Clang"]
RUSTC["rustc"]
VMLINUX["vmlinux"]
MODULES["Modules
.ko files"]
BZIMAGE["bzImage
(x86)"]
IMAGE["Image
(ARM64)"]
MODPROBE["modprobe"]
KERNEL["Running Kernel"]
KCONFIG --> KBUILD
KBUILD --> GCC
CSRC --> GCC
RSRC --> BINDGEN
BINDGEN --> RUSTC
RUSTC --> VMLINUX
GCC --> VMLINUX
GCC --> MODULES
VMLINUX --> BZIMAGE
VMLINUX --> IMAGE
MODULES --> MODPROBE
MODPROBE --> KERNEL
style KCONFIG fill:#bfa462,stroke:#8B6914,color:#2c3e50
style KBUILD fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
style CSRC fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style RSRC fill:#c0392b,stroke:#922b21,color:#f5f0e0
style BINDGEN fill:#c0392b,stroke:#922b21,color:#f5f0e0
style GCC fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
style RUSTC fill:#c0392b,stroke:#922b21,color:#f5f0e0
style VMLINUX fill:#2980b9,stroke:#1a5276,color:#f5f0e0
style MODULES fill:#6c3483,stroke:#4a3060,color:#f5f0e0
style BZIMAGE fill:#8B6914,stroke:#6B4F10,color:#f5f0e0
style IMAGE fill:#8B6914,stroke:#6B4F10,color:#f5f0e0
style MODPROBE fill:#d4bc7a,stroke:#8B6914,color:#2c3e50
style KERNEL fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
When CONFIG_MODULE_SIG_FORCE is enabled, the kernel refuses to load any module without a valid signature. This is critical for Secure Boot chains; RHEL, Fedora, and Ubuntu kernels enforce module signatures when Secure Boot is active.
A full subsystem-to-subsystem view of the Linux kernel, showing how the major components connect from userspace applications down to hardware. This map illustrates the kernel’s layered architecture and the cross-cutting role of eBPF and security hooks.
graph TD
subgraph USERSPACE["Userspace"]
APPS["Applications"]
LIBS["Libraries
glibc · musl"]
CRUNTIME["Container Runtimes
runc · crun"]
end
SYSCALLS["Syscall Interface
~450 syscalls"]
subgraph CORE["Core Kernel"]
SCHED["Scheduler
EEVDF · sched_ext"]
MM["Memory Manager
Buddy · SLUB · THP"]
VFS["VFS
inode · dentry"]
NETSTACK["Network Stack
TCP/IP · sockets"]
end
subgraph INFRA["Infrastructure"]
CGROUPS["cgroups v2"]
NS["Namespaces"]
IOURING["io_uring"]
EBPF["eBPF
Hooks everywhere"]
end
subgraph SECURITY["Security"]
LSMF["LSM Framework"]
SECCOMPF["seccomp-bpf"]
end
PCACHE["Page Cache"]
BLK["Block Layer"]
NFILT["Netfilter
nftables"]
DEVMODEL["Device Model
sysfs · udev"]
DRIVERS["Drivers
GPU · NIC · Storage"]
HW["Hardware"]
APPS --> LIBS
LIBS --> SYSCALLS
CRUNTIME --> SYSCALLS
SYSCALLS --> SECCOMPF
SECCOMPF --> SCHED
SYSCALLS --> VFS
SYSCALLS --> NETSTACK
SCHED --> CGROUPS
CGROUPS --> NS
MM --> PCACHE
PCACHE --> VFS
VFS --> BLK
VFS --> IOURING
NETSTACK --> NFILT
NETSTACK --> EBPF
BLK --> DRIVERS
NFILT --> EBPF
LSMF --> VFS
LSMF --> NETSTACK
DEVMODEL --> DRIVERS
DRIVERS --> HW
style APPS fill:#e8d9a8,stroke:#8B6914,color:#2c3e50
style LIBS fill:#d4bc7a,stroke:#8B6914,color:#2c3e50
style CRUNTIME fill:#d4bc7a,stroke:#8B6914,color:#2c3e50
style SYSCALLS fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
style SCHED fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style MM fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style VFS fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style NETSTACK fill:#4a7c28,stroke:#2d5016,color:#f5f0e0
style CGROUPS fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
style NS fill:#2980b9,stroke:#1a5276,color:#f5f0e0
style IOURING fill:#6b9b3a,stroke:#4a7c28,color:#f5f0e0
style EBPF fill:#c0392b,stroke:#922b21,color:#f5f0e0
style LSMF fill:#6c3483,stroke:#4a3060,color:#f5f0e0
style SECCOMPF fill:#6c3483,stroke:#4a3060,color:#f5f0e0
style PCACHE fill:#5dade2,stroke:#2980b9,color:#2c3e50
style BLK fill:#8b7d3c,stroke:#6B4F10,color:#f5f0e0
style NFILT fill:#6c3483,stroke:#4a3060,color:#f5f0e0
style DEVMODEL fill:#bfa462,stroke:#8B6914,color:#2c3e50
style DRIVERS fill:#8B6914,stroke:#6B4F10,color:#f5f0e0
style HW fill:#2c3e50,stroke:#1a252f,color:#f5f0e0
| Feature | Version | Status | Notes |
|---|---|---|---|
| Rust support | 6.1+ | Active | Promoted to core language Dec 2025 |
| EEVDF scheduler | 6.6 | Stable | Replaced CFS as default scheduler |
| sched_ext (BPF scheduler) | 6.12 | Active | Full BPF scheduling class |
| PREEMPT_RT | 6.12 | Merged | 20 years of patches finally mainline |
| io_uring | 5.1+ | Stable | ~60 operation types, mature async I/O |
| Bcachefs | 6.7 | Stabilizing | New CoW filesystem with caching |
| Landlock LSM | 5.13+ | Stable | Unprivileged sandboxing |
| BPF LSM | 5.7+ | Stable | Programmable security hooks |
| KCFI (kernel CFI) | 6.10+ | Active | Clang/LLVM only, control flow integrity |
| Nova GPU driver (Rust) | 6.15 | Active | First Rust DRM driver (NVIDIA GSP) |
| Y2038 fixes | 5.6+ | Complete | Kernel-side done, userspace ongoing |