Tech Guides
01. Quick Reference

Essential CLI commands and dot-commands for the sqlite3 shell. These are not SQL statements but meta-commands that control the interactive environment.

Opening & Creating Databases

# Open (or create) a database file
sqlite3 myapp.db

# Open an in-memory database
sqlite3 :memory:

# Open read-only
sqlite3 -readonly myapp.db

# Execute SQL from the command line
sqlite3 myapp.db "SELECT count(*) FROM users;"

# Run a SQL script
sqlite3 myapp.db < schema.sql

Essential Dot-Commands

.mode

Set output format

.mode column
.mode csv
.mode json
.mode markdown
.mode table
.headers

Toggle column headers in output

.headers on
.headers off
.schema

Show CREATE statements

.schema
.schema users
.schema --indent
.tables

List tables matching a pattern

.tables
.tables %user%
.import

Import data from a file

.mode csv
.import data.csv users
.output / .once

Redirect output to a file

.output results.txt
.once report.csv

Inspection Commands

-- Show all databases (main + attached)
.databases

-- Show all indexes on a table
.indexes users

-- Show column info
PRAGMA table_info(users);

-- Detailed column info (includes hidden columns)
PRAGMA table_xinfo(users);

-- Show the SQL used to create everything
.dump

-- Dump a specific table
.dump users

-- Show current settings
.show

02. SQL Dialect Differences

SQLite implements most of SQL-92 and many SQL:2003 features, but has notable differences from PostgreSQL and MySQL. Understanding these prevents surprises when porting queries.

Key Differences from PostgreSQL & MySQL

Feature                  | SQLite                                                        | PostgreSQL / MySQL
Type enforcement         | Flexible (type affinity)                                      | Strict types
ALTER TABLE              | RENAME TO, RENAME COLUMN, ADD COLUMN, DROP COLUMN (3.35+)     | Full ALTER support
RIGHT / FULL OUTER JOIN  | Supported since 3.39.0                                        | Always supported
Stored procedures        | Not supported                                                 | Supported
Concurrent writes        | Single writer (WAL allows concurrent reads)                   | MVCC, many concurrent writers
GRANT / REVOKE           | Not supported (file-level permissions)                        | Full role-based access
Date/time types          | Stored as TEXT, REAL, or INTEGER                              | Native DATE, TIMESTAMP types
RETURNING clause         | Supported since 3.35.0                                        | PostgreSQL: yes; MySQL: no (MariaDB 10.5+: yes)
Boolean type             | Integers 0 and 1                                              | Native BOOLEAN
IF NOT EXISTS            | Supported for CREATE TABLE, INDEX, VIEW, TRIGGER              | Supported

SQLite-Specific Syntax

-- UPSERT with ON CONFLICT (3.24+)
INSERT INTO users (id, name, email)
VALUES (1, 'Alice', 'alice@example.com')
ON CONFLICT(id) DO UPDATE SET
    name = excluded.name,
    email = excluded.email;
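
The same upsert can be driven from Python's stdlib sqlite3 module. A minimal sketch, assuming an in-memory database; the table and values are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

upsert = """
    INSERT INTO users (id, name, email) VALUES (?, ?, ?)
    ON CONFLICT(id) DO UPDATE SET
        name = excluded.name,
        email = excluded.email
"""
conn.execute(upsert, (1, "Alice", "alice@example.com"))
conn.execute(upsert, (1, "Alicia", "alicia@example.com"))  # same id: updates in place

row = conn.execute("SELECT name, email FROM users WHERE id = 1").fetchone()
count = conn.execute("SELECT count(*) FROM users").fetchone()[0]
```

The second execute hits the conflict path, so the table ends up with a single row carrying the newer values.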

-- RETURNING clause (3.35+)
INSERT INTO users (name, email)
VALUES ('Bob', 'bob@example.com')
RETURNING id, name;

-- UPDATE with LIMIT (requires compile-time option)
UPDATE logs SET archived = 1
WHERE created_at < '2024-01-01'
LIMIT 1000;

-- STRICT tables (3.37+) — enforce type checking
CREATE TABLE measurements (
    id INTEGER PRIMARY KEY,
    value REAL NOT NULL,
    label TEXT NOT NULL
) STRICT;

Date & Time Functions

-- Current date/time (UTC)
SELECT date('now');              -- 2026-03-04
SELECT time('now');              -- 14:30:00
SELECT datetime('now');          -- 2026-03-04 14:30:00
SELECT unixepoch('now');         -- 1772816200

-- Date arithmetic
SELECT date('now', '+7 days');
SELECT date('now', '-1 month');
SELECT date('now', 'start of month');
SELECT date('now', 'weekday 1');  -- next Monday (or today, if already Monday)

-- Format timestamps
SELECT strftime('%Y-%m-%d %H:%M', 'now');
SELECT strftime('%s', 'now');  -- Unix timestamp
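
The modifiers above compose predictably. A sketch anchored to a fixed date (rather than 'now') so the results are deterministic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

plus_week   = conn.execute("SELECT date('2026-03-04', '+7 days')").fetchone()[0]
minus_month = conn.execute("SELECT date('2026-03-04', '-1 month')").fetchone()[0]
month_start = conn.execute("SELECT date('2026-03-04', 'start of month')").fetchone()[0]
formatted   = conn.execute("SELECT strftime('%Y-%m-%d', '2026-03-04 14:30:00')").fetchone()[0]
```

Here plus_week is '2026-03-11', minus_month is '2026-02-04', and month_start is '2026-03-01'.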

03. Data Types & Type Affinity

SQLite uses a dynamic type system. Any column can hold any type of value. The declared type is a hint (affinity) that influences how values are stored and compared, not a constraint.

The Five Storage Classes

Storage Class | Description                            | Size
NULL          | Null value                             | 0 bytes
INTEGER       | Signed integer                         | 1, 2, 3, 4, 6, or 8 bytes
REAL          | IEEE 754 floating-point                | 8 bytes
TEXT          | UTF-8, UTF-16BE, or UTF-16LE string    | Variable
BLOB          | Binary data, stored exactly as input   | Variable

Type Affinity Rules

SQLite determines column affinity from the declared type name using these rules (checked in order):

Affinity | Rule                                        | Example Declarations
INTEGER  | Type contains "INT"                         | INTEGER, INT, BIGINT, SMALLINT, TINYINT
TEXT     | Type contains "CHAR", "CLOB", or "TEXT"     | TEXT, VARCHAR(255), CLOB, CHARACTER(20)
BLOB     | Type contains "BLOB", or no type specified  | BLOB, (empty declaration)
REAL     | Type contains "REAL", "FLOA", or "DOUB"     | REAL, FLOAT, DOUBLE, DOUBLE PRECISION
NUMERIC  | Everything else                             | NUMERIC, DECIMAL(10,2), BOOLEAN, DATE
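
Affinity is easy to observe with typeof(). A sketch, assuming a throwaway in-memory table: a numeric-looking string lands in an INTEGER-affinity column as an integer, a number lands in a TEXT-affinity column as text, and an undeclared (BLOB-affinity) column stores the value as given:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER, s TEXT, anything)")  # no type = BLOB affinity
conn.execute("INSERT INTO t VALUES ('42', 42, 42)")

n_type, s_type, a_type = conn.execute(
    "SELECT typeof(n), typeof(s), typeof(anything) FROM t"
).fetchone()
```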

INTEGER PRIMARY KEY & rowid

-- INTEGER PRIMARY KEY is an alias for rowid (auto-incrementing)
CREATE TABLE users (
    id INTEGER PRIMARY KEY,  -- alias for rowid
    name TEXT
);

-- AUTOINCREMENT prevents rowid reuse (slightly slower)
CREATE TABLE events (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    type TEXT
);

-- Check the typeof() a value
SELECT typeof(id), typeof(name) FROM users;
Note

Only INTEGER PRIMARY KEY (not INT PRIMARY KEY) becomes a rowid alias. The keyword must be exactly INTEGER.
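
A quick sketch of the alias in action: with INTEGER PRIMARY KEY the id column is the rowid, so omitting it on insert auto-assigns the next rowid, and selecting both shows they are the same value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('Alice')")
conn.execute("INSERT INTO users (name) VALUES ('Bob')")

rows = conn.execute("SELECT id, rowid, name FROM users ORDER BY id").fetchall()
```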

STRICT Tables (3.37+)

-- STRICT mode enforces type checking like traditional databases
CREATE TABLE measurements (
    id INTEGER PRIMARY KEY,
    sensor_id INTEGER NOT NULL,
    value REAL NOT NULL,
    label TEXT,
    raw_data BLOB,
    metadata ANY          -- ANY allows dynamic typing in STRICT tables
) STRICT;

-- This would fail in a STRICT table:
-- INSERT INTO measurements (id, sensor_id, value) VALUES (1, 'abc', 42.0);
-- Error: cannot store TEXT value in INTEGER column
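
The failure above can be demonstrated from Python, guarded on the runtime's SQLite version since STRICT needs 3.37+. In a STRICT table a mistyped value raises a constraint error instead of being coerced:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
strict_error = None
if sqlite3.sqlite_version_info >= (3, 37, 0):
    conn.execute("CREATE TABLE m (id INTEGER PRIMARY KEY, value REAL NOT NULL) STRICT")
    try:
        conn.execute("INSERT INTO m (id, value) VALUES (1, 'not a number')")
    except sqlite3.IntegrityError as exc:
        strict_error = str(exc)  # e.g. cannot store TEXT value in REAL column
```
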

04. PRAGMAs

PRAGMAs are SQLite-specific commands that configure the database engine, query internal state, and control runtime behavior. Many must be set per-connection.

Essential PRAGMAs

-- WAL mode: allows concurrent readers during writes
PRAGMA journal_mode = WAL;

-- Enable foreign key enforcement (OFF by default!)
PRAGMA foreign_keys = ON;

-- Increase cache size (negative = KB, positive = pages)
PRAGMA cache_size = -64000;  -- 64MB cache

-- Memory-mapped I/O (can dramatically speed up reads)
PRAGMA mmap_size = 268435456;  -- 256MB

-- Synchronous mode: NORMAL is safe with WAL
PRAGMA synchronous = NORMAL;

-- Busy timeout: wait N ms if database is locked
PRAGMA busy_timeout = 5000;  -- 5 seconds

-- Temp storage in memory (faster)
PRAGMA temp_store = MEMORY;

Recommended Connection Setup

-- Run these PRAGMAs at connection open for best performance
PRAGMA journal_mode = WAL;
PRAGMA synchronous = NORMAL;
PRAGMA foreign_keys = ON;
PRAGMA busy_timeout = 5000;
PRAGMA cache_size = -20000;      -- 20MB
PRAGMA temp_store = MEMORY;
PRAGMA mmap_size = 268435456;    -- 256MB
Warning

journal_mode = WAL is persistent (stored in the database file). All other PRAGMAs must be re-applied on every new connection.
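
A sketch of that connection setup from Python, on a throwaway file database (WAL does not apply to :memory: databases, so a real file is used). PRAGMA journal_mode returns the resulting mode, which makes the setup easy to verify:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "tuned.db")
conn = sqlite3.connect(path)

journal = conn.execute("PRAGMA journal_mode = WAL").fetchone()[0]   # -> 'wal'
conn.execute("PRAGMA synchronous = NORMAL")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("PRAGMA busy_timeout = 5000")
fk = conn.execute("PRAGMA foreign_keys").fetchone()[0]              # -> 1
```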

Inspection PRAGMAs

table_info

Column names, types, nullability, defaults

PRAGMA table_info(users);
index_list

All indexes on a table

PRAGMA index_list(users);
index_info

Columns in an index

PRAGMA index_info(idx_email);
foreign_key_list

Foreign keys on a table

PRAGMA foreign_key_list(orders);
compile_options

Build-time options of this SQLite binary

PRAGMA compile_options;
integrity_check

Verify database file integrity

PRAGMA integrity_check;

Journal Modes Compared

Mode   | Behavior                                     | Concurrency                           | Use Case
DELETE | Default. Rollback journal deleted after commit. | Single reader OR single writer     | Legacy, simple deployments
WAL    | Write-Ahead Logging. Appends to WAL file.    | Many readers + one writer concurrently | Web apps, most server uses
MEMORY | Journal stored in memory.                    | Same as DELETE                        | Ephemeral data, testing
OFF    | No rollback. Corruption risk on crash.       | N/A                                   | Bulk imports only

05. Full-Text Search (FTS5)

FTS5 is SQLite's built-in full-text search engine. It creates a virtual table with an inverted index, enabling fast text queries with ranking, snippets, and boolean operators.

Creating FTS Tables

-- Basic FTS5 table
CREATE VIRTUAL TABLE articles_fts USING fts5(
    title,
    body,
    content='articles',       -- external content table
    content_rowid='id'          -- maps to articles.id
);

-- Populate from existing table
INSERT INTO articles_fts(rowid, title, body)
SELECT id, title, body FROM articles;

-- Keep in sync with triggers
CREATE TRIGGER articles_ai AFTER INSERT ON articles BEGIN
    INSERT INTO articles_fts(rowid, title, body)
    VALUES (new.id, new.title, new.body);
END;

CREATE TRIGGER articles_ad AFTER DELETE ON articles BEGIN
    INSERT INTO articles_fts(articles_fts, rowid, title, body)
    VALUES ('delete', old.id, old.title, old.body);
END;

CREATE TRIGGER articles_au AFTER UPDATE ON articles BEGIN
    INSERT INTO articles_fts(articles_fts, rowid, title, body)
    VALUES ('delete', old.id, old.title, old.body);
    INSERT INTO articles_fts(rowid, title, body)
    VALUES (new.id, new.title, new.body);
END;

Querying FTS

-- Simple term search
SELECT * FROM articles_fts WHERE articles_fts MATCH 'database';

-- Search specific column
SELECT * FROM articles_fts WHERE title MATCH 'sqlite';

-- Boolean operators
SELECT * FROM articles_fts
WHERE articles_fts MATCH 'database AND NOT nosql';

-- Phrase search
SELECT * FROM articles_fts
WHERE articles_fts MATCH '"full text search"';

-- Prefix search
SELECT * FROM articles_fts
WHERE articles_fts MATCH 'data*';

-- NEAR operator (default distance: 10 tokens)
SELECT * FROM articles_fts
WHERE articles_fts MATCH 'NEAR(sqlite database, 10)';
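
A minimal end-to-end sketch in Python. Most bundled SQLite builds include FTS5, but the table creation is guarded in case this one does not; the content is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
try:
    conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
    have_fts5 = True
except sqlite3.OperationalError:
    have_fts5 = False  # build without FTS5

hits = []
if have_fts5:
    conn.executemany(
        "INSERT INTO docs (title, body) VALUES (?, ?)",
        [
            ("Intro", "sqlite is an embedded database"),
            ("Other", "postgres is a client server database"),
        ],
    )
    # bareword MATCH searches every column of the FTS table
    hits = [r[0] for r in conn.execute(
        "SELECT title FROM docs WHERE docs MATCH 'sqlite'"
    )]
```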

Ranking & Snippets

-- BM25 ranking (lower = more relevant)
SELECT *, rank
FROM articles_fts
WHERE articles_fts MATCH 'database engine'
ORDER BY rank;

-- Custom BM25 weights (title weighted 10x, body 1x)
SELECT *, bm25(articles_fts, 10.0, 1.0) AS score
FROM articles_fts
WHERE articles_fts MATCH 'database'
ORDER BY score;

-- Snippets with highlighted matches
SELECT snippet(articles_fts, 1, '<b>', '</b>', '...', 32)
FROM articles_fts
WHERE articles_fts MATCH 'sqlite';

-- Highlighted full text
SELECT highlight(articles_fts, 0, '<mark>', '</mark>') AS title
FROM articles_fts
WHERE articles_fts MATCH 'sqlite';

06. JSON Support

SQLite has extensive JSON support via built-in functions. JSON is stored as TEXT but manipulated with SQL functions. Since 3.38.0, the -> and ->> operators provide shorthand extraction.

Extraction Functions

-- json_extract: returns the SQL type of the value
SELECT json_extract(data, '$.name') FROM users;
SELECT json_extract(data, '$.address.city') FROM users;
SELECT json_extract(data, '$.tags[0]') FROM users;

-- Arrow operators (3.38+)
SELECT data -> '$.name' FROM users;     -- returns JSON value
SELECT data ->> '$.name' FROM users;    -- returns SQL value (unquoted)

-- json_type: get the JSON type of a value
SELECT json_type(data, '$.age') FROM users;  -- 'integer'
SELECT json_type(data, '$.tags') FROM users; -- 'array'
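
The extraction functions work the same way on a literal JSON string, which makes a deterministic sketch easy (the JSON1 functions are compiled into virtually all modern builds; the document is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
doc = '{"name": "Alice", "address": {"city": "Oslo"}, "tags": ["admin", "dev"]}'

name = conn.execute("SELECT json_extract(?, '$.name')", (doc,)).fetchone()[0]
city = conn.execute("SELECT json_extract(?, '$.address.city')", (doc,)).fetchone()[0]
tag0 = conn.execute("SELECT json_extract(?, '$.tags[0]')", (doc,)).fetchone()[0]
kind = conn.execute("SELECT json_type(?, '$.tags')", (doc,)).fetchone()[0]
```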

json_each & json_tree

-- Iterate over JSON array elements
SELECT u.name, j.value AS tag
FROM users u, json_each(u.tags) j;

-- Filter rows by array contents
SELECT * FROM users
WHERE id IN (
    SELECT u.id
    FROM users u, json_each(u.tags) j
    WHERE j.value = 'admin'
);

-- json_tree: recursively walk a JSON structure
SELECT key, value, type, path
FROM json_tree('{"a":1,"b":{"c":[2,3]}}');

-- Count elements in a JSON array
SELECT json_array_length(tags) FROM users;
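
The array-filtering pattern above, sketched against a toy users table where tags is a JSON array stored as TEXT:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, tags TEXT)")
conn.executemany(
    "INSERT INTO users (name, tags) VALUES (?, ?)",
    [("Alice", '["admin","dev"]'), ("Bob", '["dev"]')],
)

# json_each expands each user's tags array into one row per element
admins = [r[0] for r in conn.execute("""
    SELECT u.name FROM users u, json_each(u.tags) j
    WHERE j.value = 'admin'
""")]
n_tags = conn.execute(
    "SELECT json_array_length(tags) FROM users WHERE name = 'Alice'"
).fetchone()[0]
```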

Building & Modifying JSON

-- Build JSON objects
SELECT json_object('name', name, 'email', email) FROM users;

-- Build JSON arrays
SELECT json_array(1, 2, 'three');  -- [1,2,"three"]

-- Aggregate into JSON array
SELECT json_group_array(name) FROM users;

-- Aggregate into JSON object
SELECT json_group_object(name, email) FROM users;

-- Modify JSON: set a key
UPDATE users
SET data = json_set(data, '$.verified', 1)
WHERE id = 1;

-- Insert a new key (no-op if the key already exists)
UPDATE users
SET data = json_insert(data, '$.created_at', '2026-03-04');

-- Replace an existing key (no-op if the key is missing)
UPDATE users
SET data = json_replace(data, '$.name', 'Alice Smith');

-- Remove a key
UPDATE users
SET data = json_remove(data, '$.temp_field');

-- Patch: merge two JSON objects
SELECT json_patch(
    '{"a":1,"b":2}',
    '{"b":99,"c":3}'
);  -- {"a":1,"b":99,"c":3}

07. Window Functions & CTEs

SQLite supports window functions (since 3.25.0) and Common Table Expressions including recursive CTEs. These enable complex analytics and hierarchical queries without subquery nesting.

Window Functions

-- ROW_NUMBER: assign sequential row numbers
SELECT
    name,
    department,
    salary,
    row_number() OVER (
        PARTITION BY department
        ORDER BY salary DESC
    ) AS rank_in_dept
FROM employees;

-- RANK and DENSE_RANK
SELECT
    name,
    score,
    rank() OVER (ORDER BY score DESC) AS rank,
    dense_rank() OVER (ORDER BY score DESC) AS dense_rank
FROM players;

-- Running total with SUM
SELECT
    date,
    amount,
    sum(amount) OVER (ORDER BY date) AS running_total
FROM transactions;

-- LAG and LEAD: access previous/next rows
SELECT
    date,
    price,
    lag(price, 1) OVER (ORDER BY date) AS prev_price,
    price - lag(price, 1) OVER (ORDER BY date) AS change
FROM stock_prices;

-- Named window definition
SELECT
    name,
    salary,
    avg(salary) OVER dept_window AS dept_avg,
    max(salary) OVER dept_window AS dept_max
FROM employees
WINDOW dept_window AS (PARTITION BY department);
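
The running-total pattern above, sketched over a toy transactions table; any build since 3.25 supports it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (date TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?)",
    [("2026-01-01", 100), ("2026-01-02", -30), ("2026-01-03", 50)],
)

totals = conn.execute("""
    SELECT date, sum(amount) OVER (ORDER BY date) AS running_total
    FROM transactions ORDER BY date
""").fetchall()
```

The running_total column accumulates row by row: 100, 70, 120.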

Common Table Expressions (CTEs)

-- Basic CTE
WITH active_users AS (
    SELECT * FROM users WHERE active = 1
)
SELECT name, email FROM active_users;

-- Multiple CTEs
WITH
    dept_totals AS (
        SELECT department, sum(salary) AS total
        FROM employees
        GROUP BY department
    ),
    company_avg AS (
        SELECT avg(total) AS avg_total FROM dept_totals
    )
SELECT d.department, d.total, c.avg_total
FROM dept_totals d, company_avg c
WHERE d.total > c.avg_total;

Recursive CTEs

-- Generate a series of numbers (SQLite has no generate_series)
WITH RECURSIVE cnt(x) AS (
    VALUES(1)
    UNION ALL
    SELECT x + 1 FROM cnt WHERE x < 100
)
SELECT x FROM cnt;
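
A sketch of the same number generator run from Python, summed in SQL so the result is a single checkable value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
total = conn.execute("""
    WITH RECURSIVE cnt(x) AS (
        VALUES(1)
        UNION ALL
        SELECT x + 1 FROM cnt WHERE x < 100
    )
    SELECT sum(x) FROM cnt
""").fetchone()[0]  # 1 + 2 + ... + 100
```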

-- Traverse a tree / org chart
WITH RECURSIVE org_tree AS (
    -- Base case: top-level managers
    SELECT id, name, manager_id, 0 AS depth
    FROM employees
    WHERE manager_id IS NULL
    UNION ALL
    -- Recursive step: reports
    SELECT e.id, e.name, e.manager_id, t.depth + 1
    FROM employees e
    JOIN org_tree t ON e.manager_id = t.id
)
SELECT printf('%.*c%s', depth * 2, ' ', name) AS org_chart
FROM org_tree
ORDER BY depth, name;

-- Find all ancestors of a node (bill of materials)
WITH RECURSIVE ancestors AS (
    SELECT id, parent_id, name FROM categories WHERE id = 42
    UNION ALL
    SELECT c.id, c.parent_id, c.name
    FROM categories c
    JOIN ancestors a ON c.id = a.parent_id
)
SELECT * FROM ancestors;

08. Embedding Patterns

SQLite is an embedded database -- it runs in-process, not as a separate server. Here are the standard patterns for using it from popular programming languages.

Python (stdlib sqlite3)

import sqlite3

# Connect (creates file if missing)
conn = sqlite3.connect("app.db")
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA foreign_keys=ON")

# Use context manager for auto-commit/rollback
with conn:
    conn.execute("""
        CREATE TABLE IF NOT EXISTS users (
            id INTEGER PRIMARY KEY,
            name TEXT NOT NULL,
            email TEXT UNIQUE
        )
    """)
    conn.execute(
        "INSERT INTO users (name, email) VALUES (?, ?)",
        ("Alice", "alice@example.com")
    )

# Query with row factory
conn.row_factory = sqlite3.Row
cursor = conn.execute("SELECT * FROM users WHERE name = ?", ("Alice",))
row = cursor.fetchone()
print(row["email"])  # alice@example.com

conn.close()

Node.js (better-sqlite3)

const Database = require('better-sqlite3');

const db = new Database('app.db');
db.pragma('journal_mode = WAL');
db.pragma('foreign_keys = ON');

// Create table
db.exec(`
    CREATE TABLE IF NOT EXISTS users (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        email TEXT UNIQUE
    )
`);

// Prepared statements (synchronous, fast)
const insert = db.prepare(
    'INSERT INTO users (name, email) VALUES (?, ?)'
);
insert.run('Alice', 'alice@example.com');

// Transaction helper
const insertMany = db.transaction((users) => {
    for (const u of users) insert.run(u.name, u.email);
});
insertMany([
    { name: 'Bob', email: 'bob@example.com' },
    { name: 'Carol', email: 'carol@example.com' },
]);

// Query
const user = db.prepare('SELECT * FROM users WHERE name = ?').get('Alice');
console.log(user.email);

Go (modernc.org/sqlite or mattn/go-sqlite3)

import (
    "database/sql"
    _ "modernc.org/sqlite"  // pure Go, no CGO
)

db, err := sql.Open("sqlite", "app.db")
if err != nil {
    log.Fatal(err)
}
defer db.Close()

// Connection PRAGMAs
db.Exec("PRAGMA journal_mode=WAL")
db.Exec("PRAGMA foreign_keys=ON")

// Single writer: limit connection pool
db.SetMaxOpenConns(1)

// Prepared statement
stmt, _ := db.Prepare("INSERT INTO users (name, email) VALUES (?, ?)")
stmt.Exec("Alice", "alice@example.com")

// Transaction
tx, _ := db.Begin()
tx.Exec("INSERT INTO users (name, email) VALUES (?, ?)", "Bob", "bob@ex.com")
tx.Commit()

// Query
var name, email string
row := db.QueryRow("SELECT name, email FROM users WHERE id = ?", 1)
row.Scan(&name, &email)

Rust (rusqlite)

use rusqlite::{Connection, params};

fn main() -> rusqlite::Result<()> {
    let conn = Connection::open("app.db")?;

    conn.execute_batch("
        PRAGMA journal_mode=WAL;
        PRAGMA foreign_keys=ON;
    ")?;

    conn.execute(
        "CREATE TABLE IF NOT EXISTS users (
            id INTEGER PRIMARY KEY,
            name TEXT NOT NULL,
            email TEXT UNIQUE
        )",
        [],
    )?;

    // Insert
    conn.execute(
        "INSERT INTO users (name, email) VALUES (?1, ?2)",
        params!["Alice", "alice@example.com"],
    )?;

    // Query with mapping
    let mut stmt = conn.prepare(
        "SELECT id, name, email FROM users WHERE name = ?1"
    )?;

    let users = stmt.query_map(params!["Alice"], |row| {
        Ok((
            row.get::<_, i64>(0)?,
            row.get::<_, String>(1)?,
            row.get::<_, String>(2)?,
        ))
    })?;

    for user in users {
        println!("{:?}", user?);
    }

    Ok(())
}
Tip: Connection Pooling

SQLite supports one writer at a time. For web servers, use a single write connection and a pool of read connections. Many ORMs (like Drizzle, Prisma, SQLAlchemy) handle this automatically.

09. Backup & Recovery

SQLite databases are single files, making backup straightforward. But you must use the right method to avoid corruption -- never copy a database file while it might be written to.

The .backup Command

# Online backup (safe even during writes)
sqlite3 myapp.db ".backup backup.db"

# Backup from within the shell
.backup main backup.db

# Backup to a different location
sqlite3 myapp.db ".backup /mnt/backups/myapp-$(date +%F).db"
Best Practice

The .backup command uses the SQLite Online Backup API. It creates a consistent snapshot even if the database is being written to concurrently. Always prefer this over file copy.
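
Python exposes the same Online Backup API as Connection.backup() (Python 3.7+). A sketch snapshotting an in-memory database to disk; the path is a throwaway temp file:

```python
import os
import sqlite3
import tempfile

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO users (name) VALUES ('Alice')")
src.commit()

dest_path = os.path.join(tempfile.mkdtemp(), "backup.db")
dest = sqlite3.connect(dest_path)
src.backup(dest)  # consistent snapshot, safe during concurrent writes

copied = dest.execute("SELECT count(*) FROM users").fetchone()[0]
```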

VACUUM

-- Rebuild the entire database (reclaims space, defragments)
VACUUM;

-- VACUUM INTO: creates a compacted copy (3.27+)
VACUUM INTO 'backup.db';

-- Auto-vacuum: reclaim pages on DELETE (must be set before tables exist)
PRAGMA auto_vacuum = INCREMENTAL;
PRAGMA incremental_vacuum(100);  -- free up to 100 pages

Integrity & Repair

-- Quick integrity check (checks B-tree structure)
PRAGMA quick_check;

-- Full integrity check (also verifies content)
PRAGMA integrity_check;

-- Check foreign key constraints
PRAGMA foreign_key_check;

# Recover data from a corrupt database
sqlite3 corrupt.db ".recover" | sqlite3 recovered.db

# Export as SQL, then reimport (last-resort repair)
sqlite3 corrupt.db ".dump" > dump.sql
sqlite3 new.db < dump.sql
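
The integrity PRAGMAs return result rows rather than raising, so a healthy database answers with a single 'ok'. A minimal sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x)")

quick = conn.execute("PRAGMA quick_check").fetchone()[0]
full = conn.execute("PRAGMA integrity_check").fetchone()[0]
```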

WAL Checkpointing

-- Manual checkpoint: flush WAL to main database
PRAGMA wal_checkpoint(TRUNCATE);

-- Checkpoint modes:
-- PASSIVE  - checkpoint without blocking
-- FULL     - wait for readers, then checkpoint
-- RESTART  - like FULL, also restart WAL from beginning
-- TRUNCATE - like RESTART, also truncate WAL file to zero

-- Auto-checkpoint threshold (default 1000 pages)
PRAGMA wal_autocheckpoint = 2000;

10. Performance Tuning

SQLite is fast by default, but understanding indexes, query plans, and the ANALYZE command can make it orders of magnitude faster for large datasets.

Index Design

-- Single-column index
CREATE INDEX idx_users_email ON users(email);

-- Composite index (column order matters!)
CREATE INDEX idx_orders_user_date
ON orders(user_id, created_at DESC);

-- Unique index
CREATE UNIQUE INDEX idx_users_email_unique ON users(email);

-- Partial index (only index active users)
CREATE INDEX idx_active_users ON users(name)
WHERE active = 1;

-- Expression index
CREATE INDEX idx_users_lower_email ON users(lower(email));

-- Covering index (avoids table lookup)
CREATE INDEX idx_orders_covering
ON orders(user_id, status, total);

EXPLAIN QUERY PLAN

-- See how SQLite will execute a query
EXPLAIN QUERY PLAN
SELECT * FROM orders WHERE user_id = 42 AND status = 'shipped';
-- SEARCH orders USING INDEX idx_orders_user_date (user_id=?)

-- Full EXPLAIN shows bytecodes (rarely needed)
EXPLAIN
SELECT * FROM users WHERE email = 'alice@example.com';

-- What to look for in query plans:
-- SEARCH ... USING INDEX     = good (index lookup)
-- SCAN                       = bad (full table scan)
-- USING COVERING INDEX       = best (no table access)
-- USE TEMP B-TREE FOR ORDER  = needs sort (consider index)
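
The SCAN-versus-SEARCH distinction is easy to see programmatically. A sketch that reads the plan for the same query before and after adding an index; exact plan wording varies slightly across versions, so only the SCAN/SEARCH keywords are checked (the idx_orders_user name is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, status TEXT)")

def plan(sql):
    # the detail text is the last column of each EXPLAIN QUERY PLAN row
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT * FROM orders WHERE user_id = 42")   # SCAN (full table)
conn.execute("CREATE INDEX idx_orders_user ON orders(user_id)")
after = plan("SELECT * FROM orders WHERE user_id = 42")    # SEARCH ... USING INDEX
```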

ANALYZE

-- Gather statistics for the query planner
ANALYZE;

-- Analyze a specific table
ANALYZE users;

-- View statistics
SELECT * FROM sqlite_stat1;

-- Re-run ANALYZE after bulk inserts/deletes
-- for the optimizer to make better index choices

Bulk Insert Optimization

-- Wrap bulk inserts in a single transaction (100x faster)
BEGIN;
INSERT INTO logs VALUES (1, 'event_a', '2026-01-01');
INSERT INTO logs VALUES (2, 'event_b', '2026-01-02');
-- ... thousands more ...
COMMIT;

-- Disable synchronous writes for initial bulk load
PRAGMA synchronous = OFF;    -- DANGER: corruption risk on crash
PRAGMA journal_mode = OFF;
BEGIN;
-- ... bulk inserts ...
COMMIT;
PRAGMA synchronous = NORMAL;
PRAGMA journal_mode = WAL;
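
In Python, the single-transaction batching above maps onto executemany inside a `with conn:` block, which wraps the whole batch in one BEGIN/COMMIT. A sketch with illustrative data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (id INTEGER PRIMARY KEY, event TEXT, day TEXT)")

rows = [(i, f"event_{i}", "2026-01-01") for i in range(1, 10_001)]
with conn:  # one transaction around the entire batch
    conn.executemany("INSERT INTO logs VALUES (?, ?, ?)", rows)

n = conn.execute("SELECT count(*) FROM logs").fetchone()[0]
```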

Performance Checklist

Use WAL Mode

Enables concurrent reads during writes. Almost always the right choice for server workloads.

PRAGMA journal_mode = WAL;
Increase Cache

Default 2MB cache is small. 20-64MB covers most working sets.

PRAGMA cache_size = -20000;
Enable mmap

Memory-mapped I/O bypasses the page cache for reads.

PRAGMA mmap_size = 268435456;
Index Your WHERE Clauses

Check EXPLAIN QUERY PLAN for SCAN operations and add indexes.

Batch Writes in Transactions

Each standalone INSERT is a full transaction with fsync. Batching is 10-100x faster.

Run ANALYZE

Helps the query planner choose the right index. Re-run after data changes.

ANALYZE;