
Redis

The molten engine of in-memory data — keys, structures, streams & clusters

10 sections · v7.x reference · CLI, persistence, clustering
01

Quick Reference

The most common commands at a glance — your daily toolkit.

Connecting & Server

# Connect to local Redis
redis-cli

# Connect to remote with auth
redis-cli -h myhost.example.com -p 6379 -a password

# Connect with TLS
redis-cli --tls --cert ./client.crt --key ./client.key

# Server info
INFO                     # Full server info
INFO memory              # Memory usage
INFO replication         # Replication status
DBSIZE                   # Key count in current DB
CONFIG GET maxmemory     # Read config at runtime
CONFIG SET maxmemory 2gb # Set config at runtime

Core GET / SET

SET / GET

SET user:1:name "Ada Lovelace"
GET user:1:name
"Ada Lovelace"

Store and retrieve a string value by key.

SET with Options

# Set with 300s TTL
SET session:abc "data" EX 300

# Set only if key does NOT exist
SET lock:order 1 NX EX 30

EX (seconds), PX (ms), NX (if-not-exists), XX (if-exists).

MSET / MGET

MSET k1 "v1" k2 "v2"
MGET k1 k2
1) "v1"
2) "v2"

Batch get/set — atomic and faster than N round-trips.

Counters

INCR page:views
(integer) 1
INCRBY page:views 10
(integer) 11
DECR page:views
INCRBYFLOAT price 3.14

Atomic increment/decrement — no race conditions.

Key Management

EXISTS mykey            # 1 if exists, 0 if not
DEL mykey              # Delete (blocking)
UNLINK mykey           # Delete (async, non-blocking)
TYPE mykey             # Returns: string, list, set, zset, hash, stream
RENAME old new         # Rename a key
KEYS user:*            # Pattern match (NEVER in production!)
SCAN 0 MATCH user:* COUNT 100  # Cursor-based iteration (safe)
OBJECT ENCODING mykey  # Internal encoding (listpack, hashtable, etc.)
Never use KEYS in production. It blocks the server while scanning all keys. Use SCAN with cursor-based iteration instead.
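To see why cursor-based iteration is safe, here is a small Python sketch of paging through a keyspace in bounded chunks. This is a simulation, not Redis's real algorithm (Redis walks hash-table buckets with a reversed-bit cursor); the point it illustrates is that each step touches at most COUNT keys, so no single call blocks.

```python
from fnmatch import fnmatch

def scan(store: dict, cursor: int, match: str = "*", count: int = 100):
    """One SCAN step over a snapshot of the keyspace: return the next
    cursor (0 when iteration is complete, as in Redis) plus the keys
    in this page that match the glob pattern."""
    keys = sorted(store)
    page = keys[cursor:cursor + count]
    next_cursor = cursor + count
    if next_cursor >= len(keys):
        next_cursor = 0
    return next_cursor, [k for k in page if fnmatch(k, match)]

store = {f"user:{i}": None for i in range(250)}
store["order:1"] = None

cursor, found = 0, []
while True:
    cursor, page = scan(store, cursor, match="user:*", count=100)
    found.extend(page)
    if cursor == 0:
        break
# Each step touched at most 100 keys, unlike KEYS which scans them all.
```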

Data Types at a Glance

Type · Use Case · Max Size · Key Commands
String · Cache, counters, flags · 512 MB · GET SET INCR
Hash · Objects, user profiles · 4B fields · HGET HSET HGETALL
List · Queues, feeds, stacks · 4B elements · LPUSH RPOP LRANGE
Set · Tags, unique items · 4B members · SADD SMEMBERS SINTER
Sorted Set · Leaderboards, ranges · 4B members · ZADD ZRANGE ZRANK
Stream · Event log, messaging · Configurable · XADD XREAD XACK
02

Data Structures

Six core structures, each with dedicated commands and internal optimizations.

Strings

The simplest type. Strings store text, numbers, or binary data up to 512 MB. They are the foundation of caching and counters.

SET greeting "Hello, World"
APPEND greeting "!"
STRLEN greeting              (integer) 13
GETRANGE greeting 0 4       "Hello"
SETRANGE greeting 7 "Redis"

# Bit operations
SETBIT flags 7 1
GETBIT flags 7
BITCOUNT flags               # Population count

Hashes

Field-value maps ideal for representing objects. More memory-efficient than serialized JSON strings for partial updates.

HSET user:100 name "Ada" email "ada@example.com" age 36
HGET user:100 name           "Ada"
HGETALL user:100
1) "name"
2) "Ada"
3) "email"
4) "ada@example.com"
5) "age"
6) "36"

HINCRBY user:100 age 1     # Increment a field
HDEL user:100 email          # Remove a field
HEXISTS user:100 name        # Check field existence
HLEN user:100                # Number of fields
Memory optimization: Hashes with fewer than 128 fields and values under 64 bytes use a compact listpack encoding (was ziplist before Redis 7.0).

Lists

Doubly-linked lists supporting push/pop from both ends. Perfect for queues, stacks, and recent-activity feeds.

LPUSH queue:jobs "job1" "job2" "job3"
RPOP queue:jobs               "job1"
LRANGE queue:jobs 0 -1      # All elements
LLEN queue:jobs               # Length
LINDEX queue:jobs 0          # Element at index
LTRIM queue:jobs 0 99       # Keep only first 100
LPOS queue:jobs "job2"      # Find index of value

# Blocking pop (for worker queues)
BRPOP queue:jobs 30         # Block up to 30 seconds

Sets

Unordered collections of unique strings. Support set-theoretic operations like intersection, union, and difference.

SADD tags:post:1 "redis" "database" "nosql"
SADD tags:post:2 "redis" "cache" "performance"
SMEMBERS tags:post:1        # All members
SISMEMBER tags:post:1 "redis"  (integer) 1
SCARD tags:post:1            # Count

# Set operations
SINTER tags:post:1 tags:post:2     "redis"
SUNION tags:post:1 tags:post:2     # All tags
SDIFF tags:post:1 tags:post:2      # In 1 but not 2
SRANDMEMBER tags:post:1 2          # 2 random members
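The same set algebra maps one-to-one onto Python's built-in set operators, which is a handy way to reason about what each command returns:

```python
post1 = {"redis", "database", "nosql"}        # tags:post:1
post2 = {"redis", "cache", "performance"}     # tags:post:2

inter = post1 & post2    # SINTER: members in both sets
union = post1 | post2    # SUNION: members in either set
diff  = post1 - post2    # SDIFF:  in post1 but not in post2
```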

Sorted Sets (ZSets)

Like sets, but every member has a floating-point score. Members are ordered by score, enabling leaderboards and range queries.

ZADD leaderboard 100 "alice" 250 "bob" 175 "carol"
ZRANGE leaderboard 0 -1 WITHSCORES
1) "alice"   2) "100"
3) "carol"   4) "175"
5) "bob"     6) "250"

ZREVRANGE leaderboard 0 2   # Top 3 (or ZRANGE leaderboard 0 2 REV, 6.2+)
ZRANK leaderboard "bob"       (integer) 2
ZSCORE leaderboard "bob"      "250"
ZINCRBY leaderboard 50 "alice"
ZRANGEBYSCORE leaderboard 100 200  # Score range
ZCOUNT leaderboard 100 200       # Count in range

Streams

Append-only log structures inspired by Apache Kafka. Each entry has an auto-generated ID (timestamp-sequence) and field-value pairs.

# Append entries
XADD events * sensor "temp" value 22.5
XADD events * sensor "temp" value 23.1

# Read last 10 entries
XREVRANGE events + - COUNT 10

# Stream length
XLEN events

# Trim to max 1000 entries
XTRIM events MAXLEN ~ 1000

# Blocking read for new entries
XREAD BLOCK 5000 STREAMS events $
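Entry IDs like 1709123456789-0 are just a millisecond Unix timestamp plus a per-millisecond sequence number, so they can be decomposed and compared without any Redis call. A small Python helper to illustrate:

```python
def parse_stream_id(entry_id: str) -> tuple[int, int]:
    """Split a stream entry ID into (unix-ms timestamp, sequence).
    IDs order by (timestamp, sequence), which is why ranged reads
    like XRANGE work on them."""
    ms, _, seq = entry_id.partition("-")
    return int(ms), int(seq)

ts, seq = parse_stream_id("1709123456789-2")
# Two entries added in the same millisecond differ only in sequence:
assert parse_stream_id("1709123456789-0") < parse_stream_id("1709123456789-1")
```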
03

Key Expiration & Eviction

Controlling the lifecycle of keys — TTLs, lazy expiry, and memory policies.

Setting TTL

SET session:xyz "data" EX 3600   # Expire in 1 hour
SET session:xyz "data" PX 5000   # Expire in 5000 ms
SET key "val" EXAT 1735689600      # Expire at Unix timestamp

# Set TTL on existing key
EXPIRE mykey 600               # 600 seconds
PEXPIRE mykey 5000             # 5000 milliseconds
EXPIREAT mykey 1735689600     # Unix timestamp

# Check or remove TTL
TTL mykey                      # Remaining seconds (-1 = no TTL, -2 = key does not exist)
PTTL mykey                     # Remaining milliseconds
PERSIST mykey                  # Remove TTL (make permanent)

How Expiration Works

Redis uses two strategies together:

  • Lazy expiry: When a key is accessed and found expired, it is deleted on the spot.
  • Active expiry: Redis periodically samples 20 random keys with TTLs and deletes those that are expired. If more than 25% are expired, it samples again immediately.
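A toy model of the active-expiry cycle, in Python. The real implementation samples from a per-database dictionary of keys carrying TTLs and also bounds CPU time per cycle, which this sketch omits:

```python
import random

def active_expire_cycle(ttl_keys: dict, now: float,
                        sample_size: int = 20, threshold: float = 0.25) -> bool:
    """One pass: sample up to 20 keys that carry a TTL, delete the
    expired ones, and report whether another pass should run now."""
    sample = random.sample(list(ttl_keys), min(sample_size, len(ttl_keys)))
    expired = [k for k in sample if ttl_keys[k] <= now]
    for k in expired:
        del ttl_keys[k]
    # Redis repeats immediately while >25% of the sample was expired
    return len(expired) > threshold * len(sample)

ttl_keys = {f"session:{i}": 100 for i in range(60)}   # all expire at t=100
while active_expire_cycle(ttl_keys, now=101):
    pass   # keeps cycling until the expired fraction drops to <= 25%
```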

Eviction Policies

When maxmemory is reached, Redis evicts keys according to the configured policy.

Policy · Behavior · Best For
noeviction · Return errors on writes · Primary database (no data loss)
allkeys-lru · Evict least recently used from all keys · General-purpose cache
volatile-lru · LRU only among keys with TTL · Cache + persistent data mix
allkeys-lfu · Evict least frequently used · Frequency-biased cache
volatile-lfu · LFU only among keys with TTL · Frequency cache with persistent keys
allkeys-random · Random eviction · Uniform access patterns
volatile-ttl · Evict keys with shortest TTL first · Short-lived session cache
# Set eviction policy
CONFIG SET maxmemory 2gb
CONFIG SET maxmemory-policy allkeys-lru

# Check current policy
CONFIG GET maxmemory-policy
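The allkeys-lru behaviour can be modeled with an ordered map. Note the hedge in the docstring: Redis approximates LRU by sampling a few keys per eviction (the maxmemory-samples setting) rather than tracking exact recency, so this is an idealised sketch:

```python
from collections import OrderedDict

class LRUCache:
    """Idealised allkeys-lru: on overflow, evict the least recently
    used key. (Redis itself approximates LRU by sampling candidate
    keys rather than keeping a perfect recency list.)"""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # touched: now most recent
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict the oldest entry

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")          # "a" is now more recent than "b"
cache.set("c", 3)       # over capacity: "b" is evicted
```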
04

Pub/Sub Messaging

Fire-and-forget message broadcasting — publishers and subscribers decoupled in real time.

Basic Pub/Sub

Subscriber (Terminal 1)

# Subscribe to channels
SUBSCRIBE news alerts

# Pattern subscribe
PSUBSCRIBE user:*

Publisher (Terminal 2)

# Publish a message
PUBLISH news "Breaking: Redis 8"
(integer) 1

PUBLISH user:42 "profile updated"

Pub/Sub Characteristics

  • Fire and forget: Messages are not persisted. If no subscriber is listening, the message is lost.
  • No ack: Publishers do not know if subscribers processed the message.
  • Fan-out: One message is delivered to all matching subscribers.
  • No replay: New subscribers miss all previously published messages.
Need durability? Use Redis Streams (Section 08) instead of Pub/Sub when you need message persistence, acknowledgment, and consumer groups.

Management Commands

PUBSUB CHANNELS               # List active channels
PUBSUB NUMSUB news alerts    # Subscriber count per channel
PUBSUB NUMPAT                 # Count of pattern subscriptions
05

Transactions & Lua Scripting

Atomic command batching with MULTI/EXEC and server-side scripting with Lua.

MULTI / EXEC Transactions

Commands between MULTI and EXEC are queued and executed atomically. No other client can interleave commands.

MULTI
SET account:1:balance 500
SET account:2:balance 300
EXEC
1) OK
2) OK

# Discard a transaction
MULTI
SET key1 "oops"
DISCARD                 # Aborts the transaction
No rollback. If a command fails at runtime (e.g., a wrong-type operation), the other queued commands still execute; Redis transactions are atomic but not rollback-safe like SQL. Only a command rejected while queueing (a syntax error) causes EXEC to abort the whole transaction.

Optimistic Locking with WATCH

WATCH account:1:balance
# Read current value
GET account:1:balance
"500"

MULTI
SET account:1:balance 450    # Deduct 50
EXEC
# Returns nil if account:1:balance was modified by another client
# Returns results if the key was unchanged (optimistic lock succeeded)
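The check-then-write dance can be modeled with a version counter. This is a hypothetical in-memory sketch of the guarantee WATCH/EXEC provides; Redis tracks modifications to watched keys internally rather than exposing versions, but the effect is the same:

```python
class VersionedStore:
    """Toy WATCH/MULTI/EXEC: a write commits only if the watched
    key has not changed since it was read."""
    def __init__(self):
        self.data, self.version = {}, {}

    def watch_get(self, key):
        """Read a value together with its current version (the WATCH)."""
        return self.data.get(key), self.version.get(key, 0)

    def exec_if_unchanged(self, key, seen_version, new_value):
        """The MULTI/EXEC: returns None (like a nil EXEC reply) if
        another writer touched the key after our WATCH."""
        if self.version.get(key, 0) != seen_version:
            return None
        self.data[key] = new_value
        self.version[key] = seen_version + 1
        return "OK"

store = VersionedStore()
store.exec_if_unchanged("balance", 0, 500)

balance, v = store.watch_get("balance")           # our WATCH + GET
store.exec_if_unchanged("balance", v, 999)        # a rival write sneaks in
result = store.exec_if_unchanged("balance", v, balance - 50)
# result is None: our transaction was aborted, so we would re-WATCH and retry
```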

Lua Scripting

Lua scripts execute atomically on the server, avoiding round-trips and race conditions. They can read/write multiple keys in one call.

# Inline script: atomic compare-and-set
EVAL "
  local current = redis.call('GET', KEYS[1])
  if current == ARGV[1] then
    redis.call('SET', KEYS[1], ARGV[2])
    return 1
  end
  return 0
" 1 mykey "old_value" "new_value"

# Rate limiter in Lua
EVAL "
  local key = KEYS[1]
  local limit = tonumber(ARGV[1])
  local window = tonumber(ARGV[2])
  local current = tonumber(redis.call('GET', key) or 0)
  if current >= limit then
    return 0
  end
  current = redis.call('INCR', key)
  if current == 1 then
    redis.call('EXPIRE', key, window)
  end
  return 1
" 1 rate:user:42 100 60
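The same fixed-window logic, modeled in plain Python with an in-memory dict standing in for Redis. A sketch like this is useful for unit-testing the policy before deploying the Lua version:

```python
import time

class FixedWindowLimiter:
    """Mirror of the Lua limiter: INCR a per-key counter, start the
    window TTL on the first hit, reject once the limit is reached."""
    def __init__(self, limit: int, window: float):
        self.limit, self.window = limit, window
        self.counts, self.expires = {}, {}

    def allow(self, key: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        if key in self.expires and now >= self.expires[key]:
            self.counts.pop(key, None)       # window elapsed: reset
            self.expires.pop(key, None)
        current = self.counts.get(key, 0)
        if current >= self.limit:
            return False                     # over limit: reject
        self.counts[key] = current + 1
        if current == 0:
            self.expires[key] = now + self.window  # first hit sets the TTL
        return True

limiter = FixedWindowLimiter(limit=3, window=60)
hits = [limiter.allow("user:42", now=0) for _ in range(4)]
# hits == [True, True, True, False]; a call at now=61 starts a fresh window
```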

Redis Functions (7.0+)

Redis 7.0 introduced Functions as a managed alternative to EVAL scripts. Function libraries are stored server-side, persisted and replicated with the data, and invoked by name.

# Load a function library
FUNCTION LOAD "#!lua name=mylib
redis.register_function('myfunc', function(keys, args)
  return redis.call('GET', keys[1])
end)"

# Call the function
FCALL myfunc 1 mykey

# List loaded libraries
FUNCTION LIST
06

Persistence

Two mechanisms to survive restarts: RDB snapshots and the Append-Only File.

RDB Snapshots

Point-in-time snapshots of the entire dataset written to a compact binary file (dump.rdb). Fast to load, but you lose changes since the last snapshot.

# Trigger manual snapshot (blocking)
SAVE

# Trigger background snapshot (non-blocking, forks child process)
BGSAVE

# Check last save time
LASTSAVE
# redis.conf -- automatic snapshots
save 900 1        # Snapshot if 1+ keys changed in 900s
save 300 10       # Snapshot if 10+ keys changed in 300s
save 60 10000     # Snapshot if 10000+ keys changed in 60s

dbfilename dump.rdb
dir /var/lib/redis

AOF (Append-Only File)

Logs every write operation. Higher durability than RDB, but larger files. Redis can rewrite the AOF in the background to compact it.

# redis.conf -- enable AOF
appendonly yes
appendfilename "appendonly.aof"

# Fsync policy
appendfsync everysec   # Recommended (1s data loss max)
# appendfsync always    # Safest (slowest)
# appendfsync no        # OS decides (fastest, riskiest)
# Trigger AOF rewrite (compaction)
BGREWRITEAOF

# Auto-rewrite thresholds
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

RDB vs AOF Comparison

Aspect · RDB · AOF
Data loss risk · Minutes (between snapshots) · 1 second (everysec) or none (always)
File size · Compact binary · Larger (log of all writes)
Restart speed · Very fast (load binary) · Slower (replay commands)
Fork overhead · Yes (BGSAVE forks) · Yes (BGREWRITEAOF forks)
Recommended for · Backups, disaster recovery · Primary persistence
Best practice: Enable both RDB and AOF. Use AOF for durability and RDB for fast backups and disaster recovery.
07

Redis as Cache

Caching patterns, invalidation strategies, and TTL management for high-throughput applications.

Cache-Aside (Lazy Loading)

The most common pattern. The application checks Redis first, falls back to the database on miss, and populates the cache.

# Pseudocode: cache-aside pattern
def get_user(user_id):
    # 1. Check cache
    cached = redis.GET(f"user:{user_id}")
    if cached:
        return deserialize(cached)

    # 2. Cache miss -- query database
    user = db.query("SELECT * FROM users WHERE id = ?", user_id)

    # 3. Populate cache with TTL
    redis.SET(f"user:{user_id}", serialize(user), EX=3600)
    return user

Write-Through

Every write goes to both the cache and the database. Ensures cache is always current, but adds write latency.

# Pseudocode: write-through
def update_user(user_id, data):
    # 1. Update database
    db.execute("UPDATE users SET ... WHERE id = ?", data, user_id)

    # 2. Update cache
    redis.SET(f"user:{user_id}", serialize(data), EX=3600)

Write-Behind (Write-Back)

Writes go to cache immediately and are asynchronously flushed to the database. Low write latency, but risk of data loss if Redis crashes before flush.

TTL Strategies

Strategy · TTL · Trade-off
Fixed TTL · EX 3600 · Simple; stale data possible up to TTL
Sliding TTL · Reset on each read · Hot keys stay cached; cold keys expire
Jittered TTL · EX (3600 + rand(0,600)) · Prevents thundering herd on mass expiry
Event-driven invalidation · DEL on write events · Freshest data; requires pub/sub or hooks
Cache stampede: When a popular key expires, many clients hit the database simultaneously. Mitigate with jittered TTLs, SET ... NX locking, or probabilistic early expiration.
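Two of those mitigations in sketch form: jittered TTLs, plus probabilistic early refresh in the spirit of the "XFetch" approach. The beta parameter and helper names here are illustrative, not a standard API:

```python
import math
import random

def jittered_ttl(base: int = 3600, jitter: int = 600) -> int:
    """Spread expirations so a cohort of keys set together does not
    all expire in the same instant."""
    return base + random.randint(0, jitter)

def refresh_early(remaining_ttl: float, recompute_cost: float,
                  beta: float = 1.0) -> bool:
    """XFetch-style check: the closer a key is to expiry (and the more
    expensive it is to rebuild), the more likely this request volunteers
    to recompute the value before the TTL actually fires."""
    return -recompute_cost * beta * math.log(random.random()) >= remaining_ttl
```

Raising beta makes refreshes happen earlier; a key nowhere near expiry is almost never refreshed, so only a single client tends to recompute each hot key.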
08

Redis as Queue

From simple list-based queues to durable streams with consumer groups.

List-Based Queue (LPUSH / BRPOP)

The simplest reliable queue pattern. Producers push to the left, consumers block-pop from the right.

# Producer
LPUSH queue:emails '{"to":"ada@ex.com","subject":"Hello"}'

# Consumer (blocks until message available)
BRPOP queue:emails 0    # 0 = block indefinitely

# Reliable queue with RPOPLPUSH (processing list)
# (since Redis 6.2, prefer: LMOVE queue:emails queue:emails:processing RIGHT LEFT)
RPOPLPUSH queue:emails queue:emails:processing
# After successful processing, remove from processing list
LREM queue:emails:processing 1 "message"

Streams with Consumer Groups

The production-grade solution. Consumer groups let multiple workers share the load with at-least-once delivery: an entry stays pending until a consumer acknowledges it with XACK, so messages from crashed workers can be reclaimed.

# Create consumer group ($ = only new messages, 0 = all existing)
XGROUP CREATE events workers $ MKSTREAM

# Producer: add entries
XADD events * type "order" id 42 total 99.50

# Consumer: read from group
XREADGROUP GROUP workers worker-1 COUNT 10 BLOCK 5000 STREAMS events >

# Acknowledge successful processing
XACK events workers 1709123456789-0

# Check pending (unacked) messages
XPENDING events workers

# Claim a stale message from a dead consumer
XCLAIM events workers worker-2 60000 1709123456789-0

# Auto-claim stale messages (Redis 6.2+)
XAUTOCLAIM events workers worker-2 60000 0

Lists vs Streams

Feature · Lists (BRPOP) · Streams (XREADGROUP)
Consumer groups · Manual · Built-in
Message ACK · No · Yes (XACK)
Message replay · No (consumed = gone) · Yes (by ID range)
Dead letter handling · Manual · XCLAIM / XAUTOCLAIM
Complexity · Simple · Moderate
Best for · Simple job queues · Event sourcing, reliable processing
09

Cluster & Sentinel

High availability and horizontal scaling — from automatic failover to hash-slot sharding.

Redis Sentinel

Sentinel monitors Redis instances and performs automatic failover. It does not shard data — it provides HA for a single dataset.

# sentinel.conf
sentinel monitor mymaster 127.0.0.1 6379 2   # quorum of 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1

# Start sentinel
redis-sentinel /etc/redis/sentinel.conf
  • Monitoring: Sentinels ping master and replicas every second.
  • Failover: If the master is unreachable for down-after-milliseconds, sentinels vote and promote a replica.
  • Quorum: Minimum sentinels that must agree the master is down. Use at least 3 sentinels.
  • Client discovery: Clients connect to Sentinel to discover the current master address.

Redis Cluster

Cluster distributes data across multiple nodes using 16,384 hash slots. Each key maps to a slot via CRC16(key) % 16384.

# Create a cluster (3 masters + 3 replicas)
redis-cli --cluster create \
  127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 \
  127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
  --cluster-replicas 1

# Check cluster status
redis-cli -c -p 7000 CLUSTER INFO
redis-cli -c -p 7000 CLUSTER NODES

# Add a new node
redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000

# Reshard slots to the new node
redis-cli --cluster reshard 127.0.0.1:7000

Hash Tags (Multi-Key Operations)

Cluster does not support multi-key commands across different slots. Use hash tags to force keys onto the same slot.

# These keys hash to different slots (fails in cluster)
MGET user:1 user:2          # CROSSSLOT error

# Hash tags: only the {tag} part determines the slot
SET {user:1}:name "Ada"
SET {user:1}:email "ada@ex.com"
MGET {user:1}:name {user:1}:email   # Same slot, works!
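The slot math is simple enough to verify by hand. Here is a Python sketch of the key-to-slot mapping, including the hash-tag rule. Redis Cluster uses the CRC16-CCITT (XMODEM) polynomial; production code would use a table-driven CRC, but the bitwise form is clearer:

```python
def crc16(data: bytes) -> int:
    """Bitwise CRC16-CCITT (XMODEM), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to its hash slot. Only the substring inside the first
    non-empty {...} is hashed, which is exactly the hash-tag rule."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:                 # an empty "{}" is ignored
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# Hash tags force related keys into the same slot:
assert key_slot("{user:1}:name") == key_slot("{user:1}:email")
```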

Sentinel vs Cluster

Aspect · Sentinel · Cluster
Sharding · No (single dataset) · Yes (16,384 hash slots)
HA / failover · Yes · Yes (built-in)
Max data size · Single node memory · Sum of all nodes
Multi-key ops · Full support · Same slot only (hash tags)
Complexity · Low · Higher
Best for · Dataset fits in one node · Large datasets, horizontal scale
10

Client Libraries

Production-ready Redis clients for Node.js, Python, and Go.

Node.js — ioredis

// npm install ioredis
import Redis from 'ioredis';

// Single instance
const redis = new Redis({
  host: 'localhost',
  port: 6379,
  password: 'secret',
  db: 0,
  retryStrategy: (times) => Math.min(times * 50, 2000),
});

// Basic operations
await redis.set('key', 'value', 'EX', 3600);
const val = await redis.get('key');

// Pipeline (batch commands, one round-trip)
const pipeline = redis.pipeline();
pipeline.set('k1', 'v1');
pipeline.set('k2', 'v2');
pipeline.get('k1');
const results = await pipeline.exec();

// Cluster mode
const cluster = new Redis.Cluster([
  { host: 'node1', port: 7000 },
  { host: 'node2', port: 7001 },
]);

Python — redis-py

# pip install redis
import redis

# Connection pool (recommended for production)
pool = redis.ConnectionPool(
    host='localhost', port=6379,
    password='secret', db=0,
    max_connections=20,
    decode_responses=True,
)
r = redis.Redis(connection_pool=pool)

# Basic operations
r.set('key', 'value', ex=3600)
val = r.get('key')

# Pipeline
with r.pipeline() as pipe:
    pipe.set('k1', 'v1')
    pipe.set('k2', 'v2')
    pipe.get('k1')
    results = pipe.execute()

# Pub/Sub
pubsub = r.pubsub()
pubsub.subscribe('channel')
for message in pubsub.listen():
    print(message)

Go — go-redis

// go get github.com/redis/go-redis/v9
import (
    "context"
    "time"

    "github.com/redis/go-redis/v9"
)

// Single instance
rdb := redis.NewClient(&redis.Options{
    Addr:     "localhost:6379",
    Password: "secret",
    DB:       0,
    PoolSize: 20,
})

ctx := context.Background()

// Basic operations
err := rdb.Set(ctx, "key", "value", time.Hour).Err()
val, err := rdb.Get(ctx, "key").Result()

// Pipeline
pipe := rdb.Pipeline()
pipe.Set(ctx, "k1", "v1", 0)
pipe.Set(ctx, "k2", "v2", 0)
pipe.Get(ctx, "k1")
cmds, err := pipe.Exec(ctx)

// Cluster client
cluster := redis.NewClusterClient(&redis.ClusterOptions{
    Addrs: []string{"node1:7000", "node2:7001"},
})

Connection Best Practices

  • Connection pools: Always use connection pools in production. A single connection blocks on each command.
  • Pipelines: Batch multiple commands into a single round-trip. Reduces latency by 5-10x for bulk operations.
  • Timeouts: Set connect, read, and write timeouts. Default infinite timeout is dangerous.
  • Retry logic: Use exponential backoff with jitter on transient failures.
  • Cluster-aware clients: Use the cluster mode of your client library — it handles MOVED/ASK redirections automatically.
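The ioredis retryStrategy shown earlier is capped linear; a common alternative is exponential backoff with full jitter, sketched here in Python (the parameter values are illustrative defaults, not a library API):

```python
import random

def backoff_delay(attempt: int, base: float = 0.05, cap: float = 2.0) -> float:
    """Exponential backoff with full jitter: sleep a random amount
    between 0 and min(cap, base * 2^attempt) seconds before retrying.
    The jitter spreads reconnect storms out over time."""
    return random.uniform(0.0, min(cap, base * 2 ** attempt))

delays = [backoff_delay(n) for n in range(6)]
# the ceiling doubles each attempt (0.05s, 0.1s, ...) until the 2s cap
```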