High-Performance Web Server & Reverse Proxy
Essential commands and common patterns for rapid deployment.
nginx -t                      # Test configuration syntax
nginx -s reload               # Reload configuration (graceful)
sudo systemctl reload nginx   # Reload via systemd
nginx                         # Start nginx
sudo systemctl start nginx    # Start via systemd
nginx -s quit                 # Stop nginx (graceful)
nginx -s stop                 # Stop nginx (fast)
sudo systemctl stop nginx     # Stop via systemd
sudo systemctl restart nginx  # Restart nginx
nginx -v                      # Check version
nginx -V                      # Check version with compile options
nginx -T                      # View complete config dump
server {
listen 80;
server_name example.com;
root /var/www/example;
index index.html;
location / {
try_files $uri $uri/ =404;
}
}
location /api/ {
proxy_pass http://localhost:3000/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
server {
listen 443 ssl http2;
server_name example.com;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
ssl_protocols TLSv1.2 TLSv1.3;
}
upstream backend {
least_conn;
server backend1.example.com;
server backend2.example.com;
server backend3.example.com backup;
}
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
location /api/ {
limit_req zone=api_limit burst=20 nodelay;
proxy_pass http://backend;
}
Deploy and manage nginx processes on your infrastructure.
# Ubuntu/Debian
sudo apt update
sudo apt install nginx

# CentOS/RHEL
sudo yum install epel-release
sudo yum install nginx

# Verify installation
nginx -v
# Enable nginx to start on boot
sudo systemctl enable nginx

# Check status
sudo systemctl status nginx

# View logs via journalctl
sudo journalctl -u nginx --since today

# Reload after config changes (graceful, zero downtime)
sudo systemctl reload nginx

# Restart (brief downtime)
sudo systemctl restart nginx

# Stop
sudo systemctl stop nginx
NGINX responds to Unix signals for process control:
# Get master process PID
cat /var/run/nginx.pid

# Send signals directly
kill -s SIGNAL PID

# Common signals:
# TERM, INT - fast shutdown
# QUIT     - graceful shutdown
# HUP      - reload configuration
# USR1     - reopen log files (for log rotation)
# USR2     - upgrade executable on the fly
# WINCH    - graceful shutdown of worker processes
nginx -s reload
# OR
kill -HUP $(cat /var/run/nginx.pid)
# After rotating log files
nginx -s reopen

# OR
kill -USR1 $(cat /var/run/nginx.pid)
Always test the configuration before reloading. A reload is safer than a restart: old worker processes keep serving requests while the new configuration is loaded, and if the new configuration has syntax errors, the reload is aborted with no downtime.
# Test and show errors
sudo nginx -t

# Test and show full parsed config
sudo nginx -T

# If test passes, then reload
sudo nginx -t && sudo systemctl reload nginx
Understanding nginx's hierarchical configuration architecture.
NGINX configuration uses a hierarchical structure with contexts (blocks):
main (global)
├── events
└── http
├── upstream
├── server
│ └── location
│ └── location (nested)
└── server
└── location
Top-level directives affecting entire NGINX process:
# /etc/nginx/nginx.conf
user www-data;
worker_processes auto; # Set to CPU core count
pid /run/nginx.pid;
error_log /var/log/nginx/error.log warn;

# Load dynamic modules
load_module modules/ngx_http_brotli_filter_module.so;

events {
    worker_connections 1024;
    use epoll; # Linux: efficient event model
}

http {
    # HTTP-specific configuration
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Include virtual host configs
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Connection processing configuration:
events {
# Max simultaneous connections per worker
worker_connections 2048;
# Connection processing method (Linux: epoll, BSD: kqueue)
use epoll;
# Accept as many connections as possible
multi_accept on;
}
Contains all HTTP/HTTPS configuration:
http {
# MIME types
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Logging
log_format main '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log /var/log/nginx/access.log main;
# Performance
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
# Compression
gzip on;
gzip_vary on;
gzip_types text/plain text/css application/json application/javascript;
# Virtual hosts
include /etc/nginx/conf.d/*.conf;
}
Organize configuration across multiple files:
http {
# Include all .conf files in conf.d/
include /etc/nginx/conf.d/*.conf;
# Include MIME types
include /etc/nginx/mime.types;
# Include custom configs
include /etc/nginx/custom/*.conf;
}
server {
# Include common security headers
include /etc/nginx/snippets/security-headers.conf;
# Include SSL config
include /etc/nginx/snippets/ssl-params.conf;
}
/etc/nginx/
├── nginx.conf # Main config
├── conf.d/
│ └── default.conf # Default server
├── sites-available/
│ ├── example.com.conf
│ └── api.example.com.conf
├── sites-enabled/ # Symlinks to sites-available
│ └── example.com.conf -> ../sites-available/example.com.conf
└── snippets/
├── ssl-params.conf
└── security-headers.conf
Directives inherit from parent contexts and can be overridden in child contexts:
http {
# Applies to all servers
gzip on;
server {
# Inherits gzip on
listen 80;
location / {
# Inherits gzip on
# Can override: gzip off;
}
location /api/ {
# Override for this location
gzip off;
}
}
}
Virtual hosts defining request handling for different domains.
server {
listen 80; # Port
listen [::]:80; # IPv6
server_name example.com www.example.com;
root /var/www/example.com;
index index.html index.htm;
location / {
try_files $uri $uri/ =404;
}
}
Specifies IP address and port to listen on:
# Listen on all interfaces, port 80
listen 80;

# Listen on specific IP
listen 192.168.1.10:80;

# IPv6
listen [::]:80;

# HTTPS
listen 443 ssl;
listen 443 ssl http2; # With HTTP/2

# Unix socket
listen unix:/var/run/nginx.sock;

# Default server for this port
listen 80 default_server;

# Set socket options
listen 80 reuseport; # SO_REUSEPORT for load balancing
Defines which server block handles which requests:
# Exact match (highest priority)
server_name example.com;

# Multiple names
server_name example.com www.example.com;

# Wildcard at start
server_name *.example.com;
server_name .example.com; # Matches example.com and *.example.com

# Wildcard at end
server_name mail.*;

# Regular expression (must start with ~)
server_name ~^www\d+\.example\.com$;

# Catch-all / default (lowest priority)
server_name _;
# Explicitly set default server
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _; # Catch-all name

    # Return 444 (close connection) for undefined hosts
    return 444;
}

# OR return a generic page
server {
    listen 80 default_server;
    server_name _;
    root /var/www/default;
    location / {
        return 403 "Access denied\n";
    }
}
server {
server_name example.com;
# Document root
root /var/www/example.com/public;
# Index files (checked in order)
index index.html index.htm index.php;
# Override in location
location /docs/ {
root /var/www/documentation; # Serves /var/www/documentation/docs/
index README.html;
}
# Use alias instead of root
location /images/ {
alias /data/images/; # Serves /data/images/ (not /data/images/images/)
}
}
root appends the location path to the filesystem path.
alias replaces the location path with the filesystem path.
Use alias when you want to serve files from a different directory structure than your URL structure.
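The two mappings can be sketched in Python (illustrative only, not nginx source; the helper names are made up):

```python
# Sketch of how `root` vs `alias` map a request URI to a filesystem path,
# using the /images/ prefix location from the example above.

def map_with_root(location: str, root: str, uri: str) -> str:
    # root: the FULL request URI (including the location prefix) is
    # appended to the root path
    return root.rstrip("/") + uri

def map_with_alias(location: str, alias: str, uri: str) -> str:
    # alias: the location prefix is stripped and REPLACED by the alias path
    return alias.rstrip("/") + "/" + uri[len(location):].lstrip("/")

print(map_with_root("/images/", "/var/www", "/images/photo.jpg"))
# /var/www/images/photo.jpg
print(map_with_alias("/images/", "/data/images/", "/images/photo.jpg"))
# /data/images/photo.jpg
```

This is why `root /var/www/images;` inside `location /images/` yields the doubled path `/var/www/images/images/photo.jpg`.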
# Site 1
server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/example.com;
    index index.html;
}

# Site 2
server {
    listen 80;
    server_name api.example.com;
    location / {
        proxy_pass http://localhost:3000;
    }
}

# Site 3
server {
    listen 80;
    server_name blog.example.com;
    root /var/www/blog;
    index index.php;
    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
        include fastcgi_params;
    }
}
server {
listen 80;
server_name example.com www.example.com;
return 301 https://$host$request_uri; # $host preserves the requested name; $server_name would force the first name
}
server {
listen 443 ssl http2;
server_name example.com www.example.com;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
root /var/www/example.com;
index index.html;
}
Precise URI matching and request routing configuration.
location [modifier] pattern {
# Configuration
}
=  - Exact match (stops searching immediately)
^~ - Preferential prefix (skips regex if matched)
~  - Case-sensitive regex
~* - Case-insensitive regex

# Matches ONLY /favicon.ico
location = /favicon.ico {
    access_log off;
    log_not_found off;
    expires 30d;
}

# Matches ONLY / (not /index.html or /about)
location = / {
    try_files /index.html =404;
}
Specific endpoints like /favicon.ico, /robots.txt, /health, /api/v1/status
# If URI starts with /images/, use this block (skip regex checks)
location ^~ /images/ {
    root /data;
    expires 30d;
}

# Skip regex for static files directory
location ^~ /static/ {
    alias /var/www/static/;
    add_header Cache-Control "public, max-age=31536000";
}
Static file directories where you want to avoid regex overhead
# Match .php files (case-sensitive)
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php/php-fpm.sock;
    include fastcgi_params;
}

# Match files with specific extensions
location ~ \.(jpg|jpeg|png|gif|ico|svg)$ {
    expires 30d;
    add_header Cache-Control "public, immutable";
}

# Match paths with version numbers
location ~ ^/api/v[0-9]+/ {
    proxy_pass http://api_backend;
}
# Match image files (any case)
location ~* \.(jpg|jpeg|png|gif|ico|svg|webp)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
}

# Match CSS and JS files
location ~* \.(css|js)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
}

# Block access to hidden files (. prefix)
location ~* /\. {
    deny all;
    access_log off;
}
# Matches anything starting with /api/
location /api/ {
    proxy_pass http://backend;
}

# Matches anything starting with /downloads/
location /downloads/ {
    root /data;
    autoindex on;
}

# Root location (matches everything, lowest priority)
location / {
    try_files $uri $uri/ =404;
}
server {
listen 80;
server_name example.com;
root /var/www/example;
# Priority 1: Exact match for /
location = / {
try_files /index.html =404;
}
# Priority 2: Exact match for /health
location = /health {
access_log off;
return 200 "OK\n";
}
# Priority 3: Preferential prefix (skips regex)
location ^~ /static/ {
alias /var/cache/static/;
expires max;
}
# Priority 4: Regex (checked in order)
location ~ \.php$ {
fastcgi_pass php_backend;
}
location ~* \.(jpg|png|gif)$ {
expires 30d;
}
# Priority 5: Prefix match (longest wins)
location /api/v2/ {
proxy_pass http://api_v2;
}
location /api/ {
proxy_pass http://api_v1;
}
# Fallback (matches everything)
location / {
try_files $uri $uri/ =404;
}
}
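The selection order the example relies on can be sketched in Python (a simplified model for illustration, not nginx's actual implementation; `select_location` is a made-up helper):

```python
import re

def select_location(locations, uri):
    """locations: list of (modifier, pattern) in config order.
    Simplified nginx location selection: exact match first, then the
    longest matching prefix (a ^~ winner skips regex), then regexes in
    config order, then the longest plain prefix."""
    # 1. Exact match wins immediately
    for mod, pat in locations:
        if mod == "=" and uri == pat:
            return (mod, pat)
    # 2. Longest matching prefix (covers both ^~ and plain prefixes)
    prefixes = [(m, p) for m, p in locations if m in ("^~", "")]
    best = max((p for m, p in prefixes if uri.startswith(p)),
               key=len, default=None)
    best_mod = next((m for m, p in prefixes if p == best), None)
    # 3. ^~ on the longest prefix skips the regex checks
    if best is not None and best_mod == "^~":
        return (best_mod, best)
    # 4. First matching regex (in config order) wins
    for mod, pat in locations:
        if mod == "~" and re.search(pat, uri):
            return (mod, pat)
        if mod == "~*" and re.search(pat, uri, re.IGNORECASE):
            return (mod, pat)
    # 5. Otherwise fall back to the longest prefix match
    return ("", best) if best is not None else None

# The locations from the example server block above
locs = [("=", "/"), ("^~", "/static/"), ("~", r"\.php$"),
        ("~*", r"\.(jpg|png|gif)$"), ("", "/api/v2/"), ("", "/api/"), ("", "/")]
print(select_location(locs, "/api/v2/users"))   # ('', '/api/v2/')
```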
Request /api/v2/users matches:
= /                  - no (not an exact match)
^~ /static/          - no (doesn't start with /static/)
~ \.php$             - no (no .php extension)
~* \.(jpg|png|gif)$  - no (no image extension)
/api/v2/             - yes (longest prefix match wins over /api/)

try_files attempts to serve files in order, with a fallback:
# Try file, then directory, then 404
location / {
    try_files $uri $uri/ =404;
}

# Try file, then pass to PHP
location / {
    try_files $uri $uri/ /index.php?$query_string;
}

# Try file, then named location
location / {
    try_files $uri $uri/ @proxy;
}
location @proxy {
    proxy_pass http://backend;
}

# SPA (Single Page Application) pattern
location / {
    try_files $uri $uri/ /index.html;
}

# Static files with backend fallback
location /media/ {
    try_files $uri @backend;
}
location @backend {
    proxy_pass http://app_server;
}
Load balancing and backend request forwarding infrastructure.
server {
listen 80;
server_name api.example.com;
location / {
# Forward to backend
proxy_pass http://localhost:3000;
# Preserve host header
proxy_set_header Host $host;
# Forward client IP
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Forward protocol (http/https)
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
Trailing slash behavior differs significantly:
# WITHOUT trailing slash - the full request URI is passed through
location /api/ {
    proxy_pass http://backend;
    # Request: /api/users
    # Proxied to: http://backend/api/users
}

# WITH trailing slash - the location prefix is replaced
location /api/ {
    proxy_pass http://backend/;
    # Request: /api/users
    # Proxied to: http://backend/users (note: /api/ removed)
}

# With path in proxy_pass
location /api/ {
    proxy_pass http://backend/v2/;
    # Request: /api/users
    # Proxied to: http://backend/v2/users
}
Define groups of backend servers for load balancing:
upstream backend {
# Default: round-robin load balancing
server backend1.example.com;
server backend2.example.com;
server backend3.example.com;
}
server {
location / {
proxy_pass http://backend;
}
}
upstream backend {
server backend1.example.com;
server backend2.example.com;
server backend3.example.com;
}
# Distributes requests evenly across servers
upstream backend {
least_conn;
server backend1.example.com;
server backend2.example.com;
server backend3.example.com;
}
# Sends to server with fewest active connections
upstream backend {
ip_hash;
server backend1.example.com;
server backend2.example.com;
server backend3.example.com;
}
# Same client IP always goes to same server
upstream backend {
random;
server backend1.example.com;
server backend2.example.com;
server backend3.example.com;
}
# Randomly selects server
upstream backend {
random two least_conn;
server backend1.example.com;
server backend2.example.com;
server backend3.example.com;
}
# Picks two random servers, chooses one with fewer connections
upstream backend {
server backend1.example.com weight=5; # Receives 5/8 of traffic
server backend2.example.com weight=2; # Receives 2/8 of traffic
server backend3.example.com weight=1; # Receives 1/8 of traffic
}
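The long-run effect of those weights can be checked with a short Python sketch (a naive weight expansion for illustration; nginx interleaves its picks more smoothly, but the per-server share is the same):

```python
import itertools

# The 5/2/1 weights from the upstream block above: one "cycle" of
# 5 + 2 + 1 = 8 slots, each server appearing `weight` times.
weights = {"backend1": 5, "backend2": 2, "backend3": 1}
cycle = [name for name, w in weights.items() for _ in range(w)]

# Simulate 800 requests (exactly 100 full cycles)
counts = {name: 0 for name in weights}
for _, slot in zip(range(800), itertools.cycle(cycle)):
    counts[slot] += 1

print(counts)   # backend1: 500 (5/8), backend2: 200 (2/8), backend3: 100 (1/8)
```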
upstream backend {
server backend1.example.com max_fails=3 fail_timeout=30s;
server backend2.example.com max_fails=3 fail_timeout=30s;
server backend3.example.com backup; # Only used if others fail
server backend4.example.com down; # Temporarily disabled
}
# max_fails=3: Mark as down after 3 failed attempts
# fail_timeout=30s: Try again after 30 seconds
# backup: Only receives traffic if primary servers are down
# down: Permanently disabled
upstream backend {
least_conn;
# Server parameters
server backend1.example.com:8080 weight=5 max_fails=3 fail_timeout=30s;
server backend2.example.com:8080 weight=3;
server 192.168.1.100:8080;
server unix:/tmp/backend.sock;
server backend3.example.com backup;
# Connection pooling
keepalive 32;
keepalive_requests 100;
keepalive_timeout 60s;
}
server {
location / {
proxy_pass http://backend;
# Required for keepalive
proxy_http_version 1.1;
proxy_set_header Connection "";
}
}
location / {
proxy_pass http://backend;
# Standard proxy headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
# WebSocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Preserve original request URI
proxy_set_header X-Original-URI $request_uri;
}
location / {
proxy_pass http://backend;
# Timeouts
proxy_connect_timeout 60s; # Time to establish connection
proxy_send_timeout 60s; # Time between successive writes
proxy_read_timeout 60s; # Time between successive reads
# Buffering (enabled by default)
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
proxy_busy_buffers_size 8k;
# Disable buffering for streaming
# proxy_buffering off;
}
Secure communications with HTTPS and modern cryptography.
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name example.com www.example.com;
# Certificate files
ssl_certificate /etc/ssl/certs/example.com.crt;
ssl_certificate_key /etc/ssl/private/example.com.key;
# Root and locations
root /var/www/example.com;
index index.html;
}
# HTTP to HTTPS redirect
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
return 301 https://$host$request_uri; # $host preserves the requested name; $server_name would force the first name
}
# Protocols
ssl_protocols TLSv1.2 TLSv1.3;

# Cipher suites (Mozilla Intermediate profile)
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';

# Prefer server ciphers
ssl_prefer_server_ciphers on;

# DH parameters (generate with: openssl dhparam -out /etc/ssl/dhparam.pem 2048)
ssl_dhparam /etc/ssl/dhparam.pem;
ssl_protocols TLSv1.3;
# SSL session cache (shared across workers)
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;

# Session tickets (for resumption)
ssl_session_tickets on;
Improves SSL handshake performance by including certificate status in TLS handshake:
# Enable OCSP stapling
ssl_stapling on;
ssl_stapling_verify on;

# Intermediate certificate chain (required for verification)
ssl_trusted_certificate /etc/ssl/certs/chain.pem;

# DNS resolver for OCSP responder lookup
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
openssl s_client -connect example.com:443 -status -tlsextdebug < /dev/null 2>&1 | grep -A 17 "OCSP response"
Browsers support HTTP/2 only over HTTPS:
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
ssl_protocols TLSv1.2 TLSv1.3;
# HTTP/2 specific settings
http2_max_field_size 16k;
http2_max_header_size 32k;
}
# Install certbot
sudo apt install certbot python3-certbot-nginx

# Obtain certificate
sudo certbot --nginx -d example.com -d www.example.com

# Test automatic renewal
sudo certbot renew --dry-run

# View certificates
sudo certbot certificates
server {
listen 80;
server_name example.com www.example.com;
# ACME challenge for Let's Encrypt
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
# Redirect other requests to HTTPS
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl http2;
server_name example.com www.example.com;
# Let's Encrypt certificates
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
}
Create /etc/nginx/snippets/ssl-params.conf:
# Modern SSL configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
ssl_prefer_server_ciphers off;

# Session optimization
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_session_tickets off;

# DH parameters
ssl_dhparam /etc/ssl/dhparam.pem;

# OCSP stapling
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
server {
listen 443 ssl http2;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
include snippets/ssl-params.conf;
}
# Test with OpenSSL
openssl s_client -connect example.com:443 -tls1_2

# Check protocols and ciphers
nmap --script ssl-enum-ciphers -p 443 example.com

# Online test (recommended)
# Visit: https://www.ssllabs.com/ssltest/
Hardening infrastructure with security headers and access control.
Create /etc/nginx/snippets/security-headers.conf:
# Strict Transport Security (HSTS)
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

# Prevent clickjacking
add_header X-Frame-Options "SAMEORIGIN" always;

# Prevent MIME type sniffing
add_header X-Content-Type-Options "nosniff" always;

# XSS protection (legacy, but still useful)
add_header X-XSS-Protection "1; mode=block" always;

# Referrer policy
add_header Referrer-Policy "strict-origin-when-cross-origin" always;

# Permissions policy (formerly Feature-Policy)
add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
server {
listen 443 ssl http2;
server_name example.com;
include snippets/security-headers.conf;
location / {
# ...
}
}
add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self' data: https:; font-src 'self'; connect-src 'self'; frame-ancestors 'none'" always;
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'nonce-$request_id'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self' https://fonts.gstatic.com; connect-src 'self' https://api.example.com" always;
add_header Content-Security-Policy-Report-Only "default-src 'self'; report-uri /csp-report" always;
location /api/ {
# Simple CORS
add_header Access-Control-Allow-Origin "https://app.example.com" always;
add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
add_header Access-Control-Allow-Headers "Authorization, Content-Type" always;
add_header Access-Control-Allow-Credentials "true" always;
# Handle preflight
if ($request_method = OPTIONS) {
return 204;
}
proxy_pass http://backend;
}
map $http_origin $cors_origin {
default "";
"https://app.example.com" $http_origin;
"https://admin.example.com" $http_origin;
"https://mobile.example.com" $http_origin;
}
server {
location /api/ {
add_header Access-Control-Allow-Origin $cors_origin always;
add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
add_header Access-Control-Allow-Headers "Authorization, Content-Type" always;
add_header Access-Control-Allow-Credentials "true" always;
if ($request_method = OPTIONS) {
return 204;
}
proxy_pass http://backend;
}
}
# Define rate limit zone (10 req/sec per IP)
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;

server {
    location / {
        # Apply rate limit
        limit_req zone=general burst=20 nodelay;

        # Custom error response
        limit_req_status 429;

        proxy_pass http://backend;
    }
}
http {
# Different limits for different endpoints
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;
limit_req_zone $binary_remote_addr zone=api:10m rate=100r/s;
limit_req_zone $binary_remote_addr zone=search:10m rate=10r/s;
server {
# Strict limit on login
location /login {
limit_req zone=login burst=5 nodelay;
limit_req_status 429;
proxy_pass http://backend;
}
# Higher limit for API
location /api/ {
limit_req zone=api burst=200 nodelay;
proxy_pass http://backend;
}
# Moderate limit for search
location /search {
limit_req zone=search burst=20 nodelay;
proxy_pass http://backend;
}
}
}
# Limit concurrent connections per IP
limit_conn_zone $binary_remote_addr zone=addr:10m;

server {
    # Max 10 concurrent connections per IP
    limit_conn addr 10;

    location /downloads/ {
        # Max 1 connection per IP for downloads
        limit_conn addr 1;

        # Limit bandwidth after the first 1 MB
        limit_rate_after 1m;
        limit_rate 500k; # 500 KB/s
    }
}
# Combine multiple conditions
map $http_user_agent $limit_bots {
    default "";
    ~*(bot|crawler|spider) $binary_remote_addr;
}
limit_req_zone $limit_bots zone=bots:10m rate=1r/s;

# Whitelist IPs
geo $limit_ip {
    default 1;
    192.168.1.0/24 0; # Internal network
    10.0.0.0/8 0;     # Office network
}
map $limit_ip $limit_key {
    0 "";
    1 $binary_remote_addr;
}
limit_req_zone $limit_key zone=general:10m rate=10r/s;

server {
    location / {
        limit_req zone=general burst=20 nodelay;
        limit_req zone=bots burst=5;
        proxy_pass http://backend;
    }
}
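Conceptually, limit_req is a leaky bucket: requests drain at the configured rate, up to `burst` requests may exceed it, and the rest are rejected (503 by default, or whatever limit_req_status sets). A rough Python model of rate=10r/s with burst=20 and nodelay (illustrative only; the function and its internals are made up, not nginx source):

```python
# Leaky-bucket sketch of limit_req with nodelay, using millisecond timestamps.

def make_limiter(rate_per_sec: float, burst: int):
    state = {"excess": 0.0, "last_ms": 0}

    def allow(now_ms: int) -> bool:
        # Drain the bucket for the time elapsed since the last request
        elapsed = now_ms - state["last_ms"]
        state["excess"] = max(0.0, state["excess"] - rate_per_sec * elapsed / 1000.0)
        state["last_ms"] = now_ms
        if state["excess"] + 1 > burst + 1:
            return False          # over rate + burst: rejected
        state["excess"] += 1      # with nodelay: forwarded immediately
        return True

    return allow

allow = make_limiter(10, 20)
# 30 requests arriving in the same millisecond: 1 at-rate + 20 burst pass
results = [allow(0) for _ in range(30)]
print(results.count(True))   # 21
```

Without `nodelay`, the over-rate requests inside the burst would be queued and released at the configured rate instead of being forwarded immediately.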
Maximize throughput with intelligent caching and optimization.
# Define cache path and settings
proxy_cache_path /var/cache/nginx/proxy levels=1:2 keys_zone=my_cache:10m max_size=1g inactive=60m use_temp_path=off;

# Cache key
proxy_cache_key "$scheme$request_method$host$request_uri";

server {
    location / {
        proxy_cache my_cache;

        # Cache for different status codes
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404 1m;

        # Add cache status to response header
        add_header X-Cache-Status $upstream_cache_status;

        proxy_pass http://backend;
    }
}
proxy_cache_path /var/cache/nginx/proxy
levels=1:2
keys_zone=proxy_cache:100m
max_size=10g
inactive=1h
use_temp_path=off;
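On disk, nginx names each cached response after the MD5 of the proxy_cache_key, and with levels=1:2 it builds two directory levels from the end of that hex digest. A Python sketch of the path derivation (the key string is a hypothetical example):

```python
import hashlib

# Sketch of nginx's cache file layout for levels=1:2, as configured above:
# a 1-character directory from the last hex char of the MD5 digest, then a
# 2-character directory from the two chars before it.

def cache_file_path(base: str, key: str) -> str:
    digest = hashlib.md5(key.encode()).hexdigest()
    level1 = digest[-1]       # levels=1:2 -> 1-char dir from the last hex char
    level2 = digest[-3:-1]    # then a 2-char dir from the preceding two chars
    return f"{base}/{level1}/{level2}/{digest}"

# Hypothetical key built as $scheme$request_method$host$request_uri
key = "httpGETexample.com/index.html"
print(cache_file_path("/var/cache/nginx/proxy", key))
```

This is handy when you need to purge a single entry by deleting its file directly (for example, when the commercial purge module is not available).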
# Cache bypass conditions
# (two map blocks must not define the same variable - that would be a
# "duplicate variable" error - so use one variable per condition)
map $request_method $skip_cache_method {
default 0;
POST 1;
PUT 1;
DELETE 1;
}
map $http_cookie $skip_cache_cookie {
default 0;
~*session 1;
}
server {
location / {
proxy_cache proxy_cache;
proxy_cache_valid 200 302 10m;
proxy_cache_valid 404 1m;
proxy_cache_valid any 1m;
# Bypass cache if any listed variable is non-empty and non-zero
proxy_cache_bypass $skip_cache_method $skip_cache_cookie;
proxy_no_cache $skip_cache_method $skip_cache_cookie;
# Cache lock (prevent stampede)
proxy_cache_lock on;
proxy_cache_lock_timeout 5s;
# Use stale cache during updates
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
proxy_cache_background_update on;
# Revalidate stale content
proxy_cache_revalidate on;
# Headers
add_header X-Cache-Status $upstream_cache_status;
proxy_ignore_headers Cache-Control Expires;
proxy_pass http://backend;
}
}
Cache dynamic content for very short periods (1-5 seconds) to absorb traffic spikes:
proxy_cache_path /var/cache/nginx/micro
levels=1:2
keys_zone=microcache:10m
max_size=1g
inactive=1h;
server {
location / {
proxy_cache microcache;
proxy_cache_valid 200 1s; # Cache for 1 second
proxy_cache_use_stale updating;
proxy_cache_background_update on;
proxy_cache_lock on;
add_header X-Cache-Status $upstream_cache_status;
proxy_pass http://backend;
}
}
location ~* \.(jpg|jpeg|png|gif|ico|svg|webp)$ {
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
}
location ~* \.(css|js)$ {
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
}
location ~* \.(woff|woff2|ttf|otf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
}
location ~* \.(pdf|doc|docx|xls|xlsx)$ {
expires 30d;
add_header Cache-Control "public";
}
# Enable gzip
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;   # 1-9, higher = more CPU, smaller files
gzip_min_length 256; # Don't compress small files
gzip_types text/plain text/css text/xml text/javascript application/json application/javascript application/xml+rss application/rss+xml application/atom+xml image/svg+xml text/x-component text/x-cross-domain-policy;

# Skip gzip for ancient MSIE 6 clients
gzip_disable "msie6";
Requires ngx_brotli module:
# Enable brotli (better compression than gzip)
brotli on;
brotli_comp_level 6; # 1-11
brotli_types text/plain text/css text/xml text/javascript application/json application/javascript application/xml+rss application/rss+xml application/atom+xml image/svg+xml;

# Static pre-compressed files
brotli_static on; # Serve .br files if available
gzip_static on;   # Serve .gz files if available
# Compress with gzip
find /var/www -type f \( -name "*.css" -o -name "*.js" \) -exec gzip -k9 {} \;

# Compress with brotli
find /var/www -type f \( -name "*.css" -o -name "*.js" \) -exec brotli -k {} \;
# Client buffers
client_body_buffer_size 128k;
client_max_body_size 10m;
client_header_buffer_size 1k;
large_client_header_buffers 4 8k;

# Proxy buffers
proxy_buffer_size 4k;
proxy_buffers 8 4k;
proxy_busy_buffers_size 8k;
proxy_max_temp_file_size 1024m;
proxy_temp_file_write_size 8k;
# HTTP keepalive
keepalive_timeout 65s;
keepalive_requests 100;

# For HTTPS
keepalive_timeout 75s;
upstream backend {
server backend1.example.com:8080;
server backend2.example.com:8080;
# Keepalive pool (connections per worker)
keepalive 32;
keepalive_requests 100;
keepalive_timeout 60s;
}
server {
location / {
proxy_pass http://backend;
# Required for upstream keepalive
proxy_http_version 1.1;
proxy_set_header Connection "";
}
}
# Main context
user www-data;
worker_processes auto;      # One per CPU core
worker_rlimit_nofile 65535; # Max open files

events {
    worker_connections 4096; # Max connections per worker
    use epoll;               # Linux: efficient event model
    multi_accept on;
}

http {
    # Optimize file serving
    sendfile on;
    tcp_nopush on;  # Send headers in one packet
    tcp_nodelay on; # Don't buffer small packets

    # Timeouts
    keepalive_timeout 65;
    send_timeout 30;
    client_body_timeout 12;
    client_header_timeout 12;

    # Other optimizations
    reset_timedout_connection on;
    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
}
Comprehensive monitoring and diagnostics infrastructure.
access_log /var/log/nginx/access.log;
log_format main '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log /var/log/nginx/access.log main;
log_format detailed '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'rt=$request_time uct="$upstream_connect_time" '
'uht="$upstream_header_time" urt="$upstream_response_time"';
access_log /var/log/nginx/access.log detailed;
log_format json escape=json '{'
'"time":"$time_iso8601",'
'"remote_addr":"$remote_addr",'
'"remote_user":"$remote_user",'
'"request":"$request",'
'"status":$status,'
'"body_bytes_sent":$body_bytes_sent,'
'"request_time":$request_time,'
'"http_referer":"$http_referer",'
'"http_user_agent":"$http_user_agent",'
'"http_x_forwarded_for":"$http_x_forwarded_for",'
'"upstream_addr":"$upstream_addr",'
'"upstream_status":"$upstream_status",'
'"upstream_response_time":"$upstream_response_time"'
'}';
access_log /var/log/nginx/access.log json;
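One advantage of the JSON format is that the log becomes trivially machine-readable. A quick analysis can be sketched in Python (the sample lines below are made up for illustration):

```python
import json

# Summarize a JSON access log in the format defined above,
# assuming one JSON object per line.
sample_lines = [
    '{"time":"2024-01-01T00:00:00+00:00","status":200,"request_time":0.012,"request":"GET / HTTP/1.1"}',
    '{"time":"2024-01-01T00:00:01+00:00","status":502,"request_time":1.503,"request":"GET /api/users HTTP/1.1"}',
]

entries = [json.loads(line) for line in sample_lines]
server_errors = [e for e in entries if e["status"] >= 500]
slow_requests = [e for e in entries if e["request_time"] > 1.0]

print(len(server_errors), len(slow_requests))   # 1 1
```

On a real log file you would iterate over `open("/var/log/nginx/access.log")` line by line instead of a sample list.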
# Main context
error_log /var/log/nginx/error.log warn;

# Log levels: debug, info, notice, warn, error, crit, alert, emerg
error_log /var/log/nginx/error.log error;

# Server-specific error log
server {
    error_log /var/log/nginx/example.com-error.log;
}

# Disable error logging
error_log /dev/null;
# Map to determine if request should be logged
map $request_uri $loggable {
default 1;
~^/health$ 0;
~^/ping$ 0;
~^/favicon\.ico$ 0;
}
access_log /var/log/nginx/access.log combined if=$loggable;
map $status $loggable {
~^[23] 0; # Don't log 2xx and 3xx
default 1;
}
access_log /var/log/nginx/access.log combined if=$loggable;
map $request_id $loggable {
default 0;
~[01]$ 1; # Hex request id ends in 0 or 1 (~12.5% sample)
}
access_log /var/log/nginx/access.log combined if=$loggable;
server {
# Multiple access logs
access_log /var/log/nginx/access.log combined;
access_log /var/log/nginx/json.log json;
# Send to syslog
access_log syslog:server=logserver.example.com:514,tag=nginx combined;
# Different logs for different locations
location /api/ {
access_log /var/log/nginx/api-access.log;
proxy_pass http://backend;
}
}
# Buffer logs to improve I/O performance
access_log /var/log/nginx/access.log combined buffer=32k flush=5s;
# Disable access log for static files
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    access_log off;
    log_not_found off;
}

# Disable for specific location
location /health {
    access_log off;
    return 200 "OK\n";
}
Monitor NGINX in real-time:
# Enable stub_status module
server {
listen 127.0.0.1:80;
server_name localhost;
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
deny all;
}
}
curl http://127.0.0.1/nginx_status
Output:
Active connections: 2
server accepts handled requests
 10 10 20
Reading: 0 Writing: 1 Waiting: 1
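The counters can also be scraped programmatically. A Python sketch of a parser for this output (illustrative; the field layout follows the stub_status format shown above):

```python
# Parse nginx stub_status output into a dict of counters.
def parse_stub_status(text: str) -> dict:
    lines = text.strip().splitlines()
    stats = {"active": int(lines[0].split(":")[1])}
    # Line 3 holds the three cumulative counters in order
    accepts, handled, requests = (int(n) for n in lines[2].split())
    stats.update(accepts=accepts, handled=handled, requests=requests)
    # Line 4: "Reading: N Writing: N Waiting: N"
    parts = lines[3].split()
    stats.update(reading=int(parts[1]), writing=int(parts[3]), waiting=int(parts[5]))
    return stats

sample = """Active connections: 2
server accepts handled requests
 10 10 20
Reading: 0 Writing: 1 Waiting: 1"""
print(parse_stub_status(sample))
```

In practice you would feed it the body of `curl http://127.0.0.1/nginx_status`, e.g. from a cron job or a monitoring agent.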
Create /etc/logrotate.d/nginx:
/var/log/nginx/*.log {
daily
missingok
rotate 14
compress
delaycompress
notifempty
create 0640 www-data adm
sharedscripts
postrotate
[ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
endscript
}
# Move logs
mv /var/log/nginx/access.log /var/log/nginx/access.log.old

# Reopen log files
nginx -s reopen
Best practices and advanced techniques from the field.
# WRONG - missing semicolon
server {
    listen 80
    server_name example.com;
}

# CORRECT
server {
    listen 80;
    server_name example.com;
}
# WRONG - leads to /var/www/images/images/photo.jpg
location /images/ {
    root /var/www/images;
}

# CORRECT - use alias
location /images/ {
    alias /var/www/images/; # Note trailing slash
}
# WRONG - backend doesn't know real client IP
location / {
    proxy_pass http://backend;
}

# CORRECT - forward client info
location / {
    proxy_pass http://backend;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
# WRONG - might break production
systemctl reload nginx

# CORRECT - always test first
nginx -t && systemctl reload nginx
# WRONG - without 'always', the header is dropped on error responses
location / {
    add_header X-Frame-Options DENY;
    proxy_pass http://backend;
}

# CORRECT - use 'always' to include the header in all responses
location / {
    add_header X-Frame-Options DENY always;
    proxy_pass http://backend;
}
# Test and show errors
nginx -t

# Test and show full configuration
nginx -T

# Check specific config file
nginx -t -c /etc/nginx/nginx.conf
# Requires nginx built with --with-debug
error_log /var/log/nginx/debug.log debug;
location / {
add_header X-Debug-Backend $upstream_addr;
add_header X-Debug-Cache $upstream_cache_status;
add_header X-Debug-Time $request_time;
proxy_pass http://backend;
}
tail -f /var/log/nginx/error.log
nginx -V 2>&1 | grep -o '\-\-conf-path=\S*'
# Verbose output
curl -v http://example.com/test

# Check response headers
curl -I http://example.com/test

# Follow redirects
curl -L http://example.com/test
Set worker_processes to the number of CPU cores
Raise worker_rlimit_nofile and worker_connections
Enable sendfile, tcp_nopush and tcp_nodelay
Use open_file_cache for frequently accessed files
Set expires headers on static assets
Use HSTS with includeSubDomains and preload
Hide the version string with server_tokens off
Block access to hidden files (location ~ /\.)

| Error Message | Cause & Solution |
|---|---|
| invalid number of arguments | Missing semicolon at end of directive |
| unexpected end of file | Missing closing brace } |
| host not found in upstream | DNS lookup failed for upstream server. Use IP address or ensure DNS works. |
| bind() failed (98: Address already in use) | Another process is using the port. Check with netstat -tlnp \| grep :80 |
| conflicting server name | Multiple server blocks have the same server_name. One should be default_server. |
| proxy_pass cannot have URI part | Using variables with a URI in proxy_pass. Remove the URI path when using variables. |