
CURL & WGET

Data Transfer Command Line Tools - Complete Reference

System Information

curl - Client URL. Transfers data to or from a server using URLs. Supports 20+ protocols including HTTP, HTTPS, FTP, SFTP, SCP, SMTP, and LDAP. An API-testing powerhouse with full control over headers, methods, and authentication.

wget - Network downloader. Recursive website mirroring, background downloads, resume capability. Built for downloading files and entire websites with depth control and filtering.

Both tools are essential for command-line data transfer, each with distinct strengths.
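The split in defaults is easy to see from the shell: curl streams the response body to stdout, while wget's first instinct is to save a file. A minimal sketch of curl's side, using a file:// URL so it runs without any network (wget is omitted here because it does not speak file://):

```shell
# curl's default: the response body goes straight to stdout, nothing is saved
echo "hi from curl" > /tmp/curl-demo.txt
curl -s file:///tmp/curl-demo.txt        # prints: hi from curl

# The same fetch saved to a file instead (what wget would do by default)
curl -s -o /tmp/curl-demo-copy.txt file:///tmp/curl-demo.txt
```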

1. QUICK REFERENCE - Essential Commands

The most commonly used commands for curl and wget. Start here for quick lookups.

Essential curl Commands

# Basic GET request
curl https://api.example.com

# POST JSON data
curl -X POST -H "Content-Type: application/json" \
     -d '{"key":"value"}' https://api.example.com

# Download file
curl -O https://example.com/file.zip

# Follow redirects
curl -L https://example.com

# Save with custom name
curl -o myfile.zip https://example.com/file.zip

# Include headers in output
curl -i https://api.example.com

# Send with authentication
curl -u username:password https://api.example.com

# Bearer token auth
curl -H "Authorization: Bearer TOKEN" https://api.example.com

Essential wget Commands

# Download file
wget https://example.com/file.zip

# Continue interrupted download
wget -c https://example.com/largefile.iso

# Download in background
wget -b https://example.com/file.zip

# Mirror website
wget -m https://example.com

# Recursive download with depth limit
wget -r -l 2 https://example.com

# Download from file list
wget -i urls.txt

# Limit download speed
wget --limit-rate=200k https://example.com/file.zip

2. CURL BASICS - HTTP Methods & URLs

Fundamental curl usage patterns. HTTP methods, URL syntax, and protocol support.

HTTP Methods

# GET (default)
curl https://api.example.com/users

# Explicit GET
curl -X GET https://api.example.com/users

# POST
curl -X POST https://api.example.com/users

# PUT
curl -X PUT https://api.example.com/users/123

# DELETE
curl -X DELETE https://api.example.com/users/123

# PATCH
curl -X PATCH https://api.example.com/users/123

# HEAD (headers only)
curl -I https://api.example.com/users
curl --head https://api.example.com/users

# OPTIONS
curl -X OPTIONS https://api.example.com/users

URL Syntax & Multiple URLs

# Query parameters
curl "https://api.example.com/search?q=query&limit=10"

# Multiple URLs (sequential)
curl https://example.com/file1.zip https://example.com/file2.zip

# Parallel downloads (curl 7.66.0+)
curl -Z https://example.com/file1.zip https://example.com/file2.zip

# URL with authentication in URL
curl https://username:password@example.com/api

# URL ranges
curl https://example.com/file[1-10].jpg
curl https://example.com/page[001-100].html

# Multiple URLs with custom output names
curl https://example.com/file1.zip -o first.zip \
     https://example.com/file2.zip -o second.zip
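
Ranges pair naturally with output templates: `#1` inside a `-o` value expands to whatever the first bracket expression matched. A small sketch using file:// URLs so it runs offline (the same expansion works with any protocol):

```shell
# Create three numbered source files to fetch
mkdir -p /tmp/range-demo
for i in 1 2 3; do echo "payload $i" > "/tmp/range-demo/file$i.txt"; done

# [1-3] is expanded by curl itself; '#1' in -o names each output
# after the value that matched
curl -s "file:///tmp/range-demo/file[1-3].txt" -o "/tmp/range-demo/copy_#1.txt"

cat /tmp/range-demo/copy_2.txt    # prints: payload 2
```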

Protocol Support

curl supports 20+ protocols beyond HTTP/HTTPS:

HTTP/HTTPS      - Web requests and RESTful APIs
FTP/FTPS        - File transfer protocol
SCP/SFTP        - Secure file transfer over SSH
SMTP/POP3/IMAP  - Email protocols
LDAP            - Directory services
SMB             - Windows file sharing

# FTP download
curl ftp://ftp.example.com/file.zip

# FTP upload
curl -T localfile.txt ftp://ftp.example.com/

# SFTP with key authentication
curl -u user: --key ~/.ssh/id_rsa sftp://example.com/file.txt

3. HEADERS & AUTHENTICATION - curl Authentication Methods

Custom headers, cookies, and authentication patterns for API access.

Custom Headers (-H)

# Single header
curl -H "Content-Type: application/json" https://api.example.com

# Multiple headers
curl -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -H "X-Custom-Header: value" \
     https://api.example.com

# User-Agent
curl -H "User-Agent: MyApp/1.0" https://api.example.com
curl -A "MyApp/1.0" https://api.example.com  # Shorthand

# Referer
curl -H "Referer: https://example.com" https://api.example.com
curl -e "https://example.com" https://api.example.com  # Shorthand

# Remove default header
curl -H "Accept:" https://api.example.com

Cookies (-b/-c)

# Send cookie (inline)
curl -b "session=abc123" https://example.com

# Send multiple cookies (inline)
curl -b "session=abc123; user=john" https://example.com
curl -b "session=abc123" -b "user=john" https://example.com

# Load cookies from file
curl -b cookies.txt https://example.com

# Save cookies to file
curl -c cookies.txt https://example.com

# Both save and send cookies (session management)
curl -b cookies.txt -c cookies.txt https://example.com/login

# Cookie jar example workflow
# 1. Login and save session cookie
curl -X POST -d "username=user&password=pass" \
     -c cookies.txt https://example.com/login

# 2. Use saved cookie for authenticated request
curl -b cookies.txt https://example.com/dashboard

# 3. Logout and update cookie
curl -b cookies.txt -c cookies.txt https://example.com/logout

Basic Authentication (-u)

# Basic auth (prompts for password)
curl -u username https://api.example.com

# Basic auth (password in command)
curl -u username:password https://api.example.com

# Basic auth (from .netrc file)
curl -n https://api.example.com

# Basic auth with POST
curl -u username:password \
     -X POST \
     -H "Content-Type: application/json" \
     -d '{"data":"value"}' \
     https://api.example.com

Bearer Token Authentication

# Bearer token
curl -H "Authorization: Bearer YOUR_TOKEN_HERE" https://api.example.com

# Bearer token with POST
curl -X POST \
     -H "Authorization: Bearer YOUR_TOKEN_HERE" \
     -H "Content-Type: application/json" \
     -d '{"key":"value"}' \
     https://api.example.com

# Token from file
TOKEN=$(cat token.txt)
curl -H "Authorization: Bearer $TOKEN" https://api.example.com

# Token from environment variable
curl -H "Authorization: Bearer $API_TOKEN" https://api.example.com

OAuth & Advanced Authentication

# API Key in header
curl -H "X-API-Key: YOUR_API_KEY" https://api.example.com

# API Key in query parameter
curl "https://api.example.com/endpoint?api_key=YOUR_API_KEY"

# Digest authentication
curl --digest -u username:password https://api.example.com

# NTLM authentication (Windows)
curl --ntlm -u username:password https://api.example.com

# Certificate-based authentication
curl --cert client.pem --key client-key.pem https://api.example.com

# Certificate with passphrase
curl --cert client.pem:passphrase --key client-key.pem https://api.example.com

4. DATA & FORMS - POST Data & File Uploads

Sending data with curl - URL-encoded, JSON, multipart forms, and file uploads.

POST Data (-d)

# URL-encoded POST data
curl -d "name=John&email=john@example.com" https://api.example.com

# Multiple -d flags (automatically concatenated with &)
curl -d "name=John" -d "email=john@example.com" https://api.example.com

# POST data from file
curl -d @data.txt https://api.example.com

# POST with custom Content-Type
curl -d "param1=value1¶m2=value2" \
     -H "Content-Type: application/x-www-form-urlencoded" \
     https://api.example.com

# Empty POST
curl -d "" https://api.example.com

# POST without data (sends Content-Length: 0)
curl -X POST https://api.example.com

JSON Data (--json, -d)

# JSON with --json flag (curl 7.82.0+, auto-sets headers)
curl --json '{"name":"John","email":"john@example.com"}' \
     https://api.example.com

# JSON with -d flag (manual header)
curl -X POST \
     -H "Content-Type: application/json" \
     -d '{"name":"John","email":"john@example.com"}' \
     https://api.example.com

# JSON from file
curl -X POST \
     -H "Content-Type: application/json" \
     -d @payload.json \
     https://api.example.com

# JSON with heredoc (better for complex data)
curl -X POST \
     -H "Content-Type: application/json" \
     -d @- https://api.example.com <<'EOF'
{
  "user": {
    "name": "John Doe",
    "email": "john@example.com",
    "settings": {
      "notifications": true,
      "theme": "dark"
    }
  }
}
EOF

# Nested JSON
curl -X POST \
     -H "Content-Type: application/json" \
     -d '{
       "order": {
         "items": [
           {"id": 1, "quantity": 2},
           {"id": 2, "quantity": 1}
         ],
         "total": 59.99
       }
     }' \
     https://api.example.com

# JSON with variables
NAME="John"
EMAIL="john@example.com"
curl -X POST \
     -H "Content-Type: application/json" \
     -d "{\"name\":\"$NAME\",\"email\":\"$EMAIL\"}" \
     https://api.example.com
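
Hand-escaping quotes inside a -d string breaks as soon as a variable contains one. If jq is available (it is used elsewhere in this guide), letting it build the payload is safer. A sketch:

```shell
# jq --arg escapes the values correctly, whatever they contain
NAME='John "Johnny" Doe'
EMAIL='john@example.com'
PAYLOAD=$(jq -cn --arg name "$NAME" --arg email "$EMAIL" \
             '{name: $name, email: $email}')
echo "$PAYLOAD"
# {"name":"John \"Johnny\" Doe","email":"john@example.com"}

# Then post it:
# curl -X POST -H "Content-Type: application/json" \
#      -d "$PAYLOAD" https://api.example.com
```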

Form Data (-F, Multipart)

# Simple form field
curl -F "name=John" https://api.example.com

# Multiple form fields
curl -F "name=John" -F "email=john@example.com" https://api.example.com

# File upload
curl -F "file=@/path/to/document.pdf" https://api.example.com/upload

# File upload with custom filename
curl -F "file=@local.jpg;filename=remote.jpg" https://api.example.com/upload

# File upload with custom content type
curl -F "file=@data.json;type=application/json" https://api.example.com/upload

# Multiple files
curl -F "file1=@image1.jpg" \
     -F "file2=@image2.jpg" \
     https://api.example.com/upload

# Form fields with file
curl -F "name=John" \
     -F "email=john@example.com" \
     -F "avatar=@profile.jpg" \
     https://api.example.com/profile

# Upload file from stdin
cat document.txt | curl -F "file=@-" https://api.example.com/upload

# Binary file upload
curl -F "binary=@file.bin;type=application/octet-stream" \
     https://api.example.com/upload

File Uploads (Various Methods)

# PUT method for file upload
curl -T localfile.txt https://example.com/upload/remotefile.txt

# PUT with authentication
curl -T file.zip -u username:password ftp://ftp.example.com/

# Upload with progress bar
curl -# -T largefile.iso https://example.com/upload/

# Binary data upload
curl -X POST \
     --data-binary @image.jpg \
     -H "Content-Type: image/jpeg" \
     https://api.example.com/upload

# Upload with custom headers
curl -T file.pdf \
     -H "X-File-Name: document.pdf" \
     -H "X-File-Size: 1024000" \
     https://api.example.com/upload
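
-T is protocol-agnostic: anywhere curl can write, it can "upload". That includes file:// URLs, which makes for a self-contained sketch of the mechanics:

```shell
# PUT-style upload with -T; with file:// the "server" is the local filesystem
echo "upload me" > /tmp/upload-src.txt
curl -s -T /tmp/upload-src.txt file:///tmp/upload-dest.txt

cat /tmp/upload-dest.txt    # prints: upload me
```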

5. CURL ADVANCED - Redirects, SSL, Timeouts, Proxies

Advanced curl features for production use: SSL/TLS, timeouts, retries, proxies, and network control.

Follow Redirects (-L)

# Follow redirects
curl -L https://example.com

# Follow redirects with maximum redirect count
curl -L --max-redirs 5 https://example.com

# Follow redirects and show redirect chain
curl -Lv https://example.com

# Follow POST through redirects
curl -L -X POST -d "data=value" https://example.com

SSL/TLS Options

# Ignore SSL certificate errors (insecure)
curl -k https://self-signed.example.com
curl --insecure https://self-signed.example.com

# Specify CA certificate
curl --cacert /path/to/ca-cert.pem https://api.example.com

# Client certificate authentication
curl --cert client.pem --key client-key.pem https://api.example.com

# Client certificate with password
curl --cert client.pem:password https://api.example.com

# Specify TLS version
curl --tlsv1.2 https://api.example.com
curl --tlsv1.3 https://api.example.com

# Show TLS handshake details
curl -v https://api.example.com 2>&1 | grep -i tls

# Certificate pinning
curl --pinnedpubkey sha256//BASE64_HASH https://api.example.com

Timeouts & Retries

# Connection timeout (seconds)
curl --connect-timeout 10 https://api.example.com

# Maximum time for entire operation (seconds)
curl --max-time 60 https://api.example.com
curl -m 60 https://api.example.com  # Short form

# Combine timeouts
curl --connect-timeout 10 --max-time 60 https://api.example.com

# Retry transient failures (backs off exponentially by default: 1s, 2s, 4s, ...)
curl --retry 5 https://api.example.com

# Retry with delay between attempts
curl --retry 5 --retry-delay 3 https://api.example.com

# Retry only on specific errors
curl --retry 3 --retry-connrefused https://api.example.com

# Maximum retry time
curl --retry 5 --retry-max-time 120 https://api.example.com

# Note: --retry alone backs off exponentially; setting --retry-delay
# replaces the backoff with a fixed delay between attempts
curl --retry 5 --retry-delay 1 --retry-max-time 60 https://api.example.com

Rate Limiting & Bandwidth Control

# Limit download speed (bytes per second)
curl --limit-rate 100k https://example.com/largefile.zip

# Limit speed in KB/s
curl --limit-rate 100k https://example.com/file.zip

# Limit speed in MB/s
curl --limit-rate 1m https://example.com/file.zip

# Speed limit (abort if slower than rate for duration)
curl --speed-limit 1000 --speed-time 10 https://example.com/file.zip
# Aborts if speed drops below 1000 bytes/sec for 10 seconds

# Combine rate limiting with retry
curl --limit-rate 500k --retry 3 https://example.com/file.zip

Connection & Network Options

# IPv4 only
curl -4 https://api.example.com

# IPv6 only
curl -6 https://api.example.com

# Use specific network interface
curl --interface eth0 https://api.example.com

# Bind to specific IP address
curl --interface 192.168.1.100 https://api.example.com

# Resolve hostname to specific IP
curl --resolve example.com:443:93.184.216.34 https://example.com

# Use HTTP/2
curl --http2 https://api.example.com

# Use HTTP/3 (QUIC)
curl --http3 https://api.example.com

# Keep-alive
curl --keepalive-time 60 https://api.example.com

Proxy Configuration

# HTTP proxy
curl -x http://proxy.example.com:8080 https://api.example.com

# SOCKS5 proxy
curl -x socks5://proxy.example.com:1080 https://api.example.com

# Proxy with authentication
curl -x http://proxy.example.com:8080 \
     -U proxyuser:proxypass https://api.example.com

# No proxy for specific hosts
curl --noproxy localhost,127.0.0.1 \
     -x http://proxy:8080 https://api.example.com

# curl reads http_proxy/https_proxy/no_proxy env vars by default;
# -x "" overrides them and disables the proxy for this request
curl -x "" https://api.example.com

6. CURL OUTPUT - Saving Files & Response Info

Control curl output: save to files, show headers, format responses, extract metrics.

Save to File (-o/-O)

# Save with remote filename
curl -O https://example.com/file.zip

# Save with custom filename
curl -o myfile.zip https://example.com/file.zip

# Save multiple files with remote names
curl -O https://example.com/file1.zip -O https://example.com/file2.zip

# Save multiple files with custom names
curl https://example.com/file1.zip -o first.zip \
     https://example.com/file2.zip -o second.zip

# Append to existing file
curl https://api.example.com >> output.txt

# Save to directory
curl -o ~/downloads/file.zip https://example.com/file.zip

# Create missing directories in the -o output path
curl --create-dirs -o path/to/file.zip https://example.com/file.zip

Silent & Verbose Modes

# Silent mode (no progress bar or errors)
curl -s https://api.example.com

# Silent with errors shown
curl -sS https://api.example.com

# Verbose mode (detailed debug info)
curl -v https://api.example.com

# Note: repeating -v adds no extra detail; use --trace-ascii for more
curl -v https://api.example.com

# Trace (complete packet trace)
curl --trace trace.txt https://api.example.com

# Trace ASCII (readable trace)
curl --trace-ascii trace.txt https://api.example.com

# Hide only the progress meter, keep normal output (curl 7.67.0+)
curl --no-progress-meter https://api.example.com

Progress Bar & Display

# Progress bar instead of meter
curl -# https://example.com/largefile.zip

# No progress indicator at all
curl -s https://example.com/file.zip

# Progress bar with output to file
curl -# -o file.zip https://example.com/file.zip

# Show transfer stats at end
curl -w "\nTotal time: %{time_total}s\n" https://api.example.com

Write-Out Format (-w)

# Show only the HTTP status code (suppress body and progress)
curl -s -o /dev/null -w "%{http_code}\n" https://api.example.com

# Show multiple stats
curl -w "Status: %{http_code}\nTime: %{time_total}s\n" \
     https://api.example.com

# Complete response stats
curl -w @- https://api.example.com <<'EOF'
HTTP Status: %{http_code}
Content Type: %{content_type}
Total Time: %{time_total}s
Download Size: %{size_download} bytes
Download Speed: %{speed_download} bytes/sec
EOF

# Common write-out variables
curl -w "
    http_code:      %{http_code}
    time_total:     %{time_total}
    time_connect:   %{time_connect}
    time_starttransfer: %{time_starttransfer}
    size_download:  %{size_download}
    speed_download: %{speed_download}
    url_effective:  %{url_effective}
\n" https://api.example.com

# JSON format output (curl 7.70.0+)
curl -w "%{json}" https://api.example.com

Response Codes & Headers

# Include response headers in output
curl -i https://api.example.com

# Show only headers (HEAD request)
curl -I https://api.example.com

# Save headers to separate file
curl -D headers.txt https://api.example.com

# Save headers to stdout, body to file
curl -D - -o body.txt https://api.example.com

# Show specific header
curl -s -I https://api.example.com | grep -i content-type

# Multiple header formats
curl -v https://api.example.com 2>&1 | grep "^< "  # Response headers
curl -v https://api.example.com 2>&1 | grep "^> "  # Request headers

# Extract status code with write-out
curl -w "%{http_code}" -o /dev/null -s https://api.example.com

# Check if URL is accessible
curl -f https://api.example.com && echo "Success" || echo "Failed"
# -f fails silently on HTTP errors (returns exit code 22 for 400+)

# Show redirect chain
curl -Lsv https://example.com 2>&1 | grep "^< HTTP"

7. WGET BASICS - Downloads & Mirroring

wget fundamentals: downloading files, recursive downloads, and website mirroring.

Download Files

# Download single file
wget https://example.com/file.zip

# Download with custom filename
wget -O myfile.zip https://example.com/file.zip

# Download multiple files
wget https://example.com/file1.zip https://example.com/file2.zip

# Download from file list
wget -i urls.txt

# Download to specific directory
wget -P ~/downloads/ https://example.com/file.zip

# Download in background
wget -b https://example.com/largefile.iso
# Output written to wget-log

# Background with custom log file
wget -b -o download.log https://example.com/file.zip

# Continue interrupted download
wget -c https://example.com/largefile.iso

# Try count (default is 20)
wget --tries=5 https://example.com/file.zip

# Infinite retries
wget --tries=inf https://example.com/file.zip

# Wait between retries
wget --waitretry=5 https://example.com/file.zip

Recursive Downloads (-r)

# Basic recursive download
wget -r https://example.com

# Recursive with depth limit
wget -r -l 2 https://example.com
# -l 0 or --level=0 means infinite recursion

# Don't ascend to parent directory
wget -r --no-parent https://example.com/docs/

# Recursive download of specific types
wget -r -A pdf,zip https://example.com

# Exclude file types
wget -r -R jpg,png https://example.com

# Follow only relative links
wget -r --relative https://example.com

# Convert links for offline browsing
wget -r -k https://example.com
# -k converts links to relative paths

# Recursive with page requisites (images, CSS, JS)
wget -r -p https://example.com

# Complete website download for offline viewing
wget -r -p -k -E https://example.com
# -E adds .html extension to HTML files

Mirroring (-m)

# Mirror website
wget -m https://example.com

# Mirror is equivalent to:
wget -r -N -l inf --no-remove-listing https://example.com
# -N enables timestamping
# -l inf sets infinite recursion depth

# Mirror with conversion for offline viewing
wget -m -k -p -E https://example.com

# Mirror specific section
wget -m --no-parent https://example.com/documentation/

# Mirror with bandwidth limit
wget -m --limit-rate=200k https://example.com

# Mirror with wait time (polite crawling)
wget -m --wait=2 https://example.com

# Mirror excluding domains
wget -m --exclude-domains ads.example.com,tracker.example.com \
     https://example.com

# Mirror with custom user agent
wget -m --user-agent="Mozilla/5.0" https://example.com

8. WGET ADVANCED - Spider Mode, Filtering, Auth

Advanced wget features: link checking, filtering, authentication, and bandwidth control.

Spider Mode (Check Links)

# Basic spider mode (don't download)
wget --spider https://example.com

# Check single URL
wget --spider https://example.com/file.zip
# Exit code 0 if exists, 8 if error

# Spider mode with recursive checking
wget --spider -r https://example.com

# Find broken links
wget --spider -r -nd -nv -o spider.log https://example.com
grep -B 2 "broken link" spider.log

# Spider with detailed logging
wget --spider -r -nd -nv -o links.log https://example.com

# Check links without following robots.txt
wget --spider -e robots=off -r https://example.com

# Show only HTTP responses
wget --spider --server-response https://example.com 2>&1 | grep "HTTP/"

# Check if file exists (return code)
wget --spider -q https://example.com/file.zip
if [ $? -eq 0 ]; then
    echo "File exists"
else
    echo "File not found"
fi

Depth Control & Filtering

# Limit recursion depth
wget -r -l 3 https://example.com
# Default is 5 levels

# No depth limit
wget -r -l 0 https://example.com

# Span hosts (follow links to other domains)
wget -r -H https://example.com

# Limit to specific domains
wget -r -H -D example.com,cdn.example.com https://example.com

# Exclude directories
wget -r -X /cgi-bin/,/temp/ https://example.com

# Include only specific directories
wget -r -I /docs/,/images/ https://example.com

# Follow FTP links from HTTP page
wget -r --follow-ftp https://example.com

Accept/Reject Patterns

# Accept only specific file types
wget -r -A pdf,zip,tar.gz https://example.com

# Reject file types
wget -r -R jpg,png,gif,css,js https://example.com

# Accept patterns (regex)
wget -r --accept-regex ".*\.(pdf|zip)$" https://example.com

# Reject patterns (regex)
wget -r --reject-regex ".*\.(jpg|png|gif)$" https://example.com

# Case-insensitive matching
wget -r -A pdf --ignore-case https://example.com

# Download only HTML files
wget -r -A html,htm https://example.com

# Download everything except multimedia
wget -r -R mp3,mp4,avi,mkv,flv https://example.com

# Combine accept and reject
wget -r -A pdf -R "*sample*" https://example.com

Bandwidth Limiting & Throttling

# Limit download rate (bytes per second)
wget --limit-rate=100k https://example.com/file.zip

# Rate in MB/s
wget --limit-rate=1m https://example.com/largefile.iso

# Wait between downloads (seconds)
wget --wait=5 -r https://example.com

# Random wait (0.5 to 1.5 times --wait value)
wget --wait=5 --random-wait -r https://example.com

# Wait between retries
wget --waitretry=10 https://example.com/file.zip

# Quota limit (stop after downloading X bytes)
wget -Q 100m -r https://example.com
# Stops after downloading 100MB

# Polite crawling (recommended settings)
wget -r --wait=2 --random-wait --limit-rate=200k https://example.com

Background Downloads & Logging

# Background download
wget -b https://example.com/largefile.iso
# Logs to wget-log by default

# Background with custom log
wget -b -o download.log https://example.com/file.zip

# Quiet mode (no output)
wget -q https://example.com/file.zip

# Verbose output
wget -v https://example.com/file.zip
# This is the default

# No verbose (minimal output)
wget -nv https://example.com/file.zip

# Debug output
wget -d https://example.com/file.zip

# Append to log file
wget -a download.log https://example.com/file.zip

# Monitor background download
tail -f wget-log

Authentication & Headers

# HTTP basic authentication
wget --user=username --password=password https://example.com

# Read password from file
wget --user=username --password=$(cat password.txt) https://example.com

# Custom headers
wget --header="Authorization: Bearer TOKEN" https://api.example.com

# Multiple headers
wget --header="Accept: application/json" \
     --header="X-API-Key: key123" \
     https://api.example.com

# Custom User-Agent
wget --user-agent="Mozilla/5.0" https://example.com

# Referer
wget --referer="https://google.com" https://example.com

# Load cookies from file
wget --load-cookies=cookies.txt https://example.com

# Save cookies to file
wget --save-cookies=cookies.txt --keep-session-cookies https://example.com

# POST request with wget
wget --post-data="user=name&pass=secret" https://example.com/login

# POST data from file
wget --post-file=data.txt https://api.example.com

9. API TESTING PATTERNS - REST, GraphQL, Load Testing

Using curl for API testing: REST endpoints, GraphQL queries, and load testing patterns.

REST API Testing with curl

# GET request with JSON response
curl -H "Accept: application/json" https://api.example.com/users

# Pretty-print JSON response
curl -s https://api.example.com/users | jq '.'

# GET with query parameters
curl "https://api.example.com/users?page=1&limit=10"

# POST - Create resource
curl -X POST \
     -H "Content-Type: application/json" \
     -H "Authorization: Bearer TOKEN" \
     -d '{"name":"John","email":"john@example.com"}' \
     https://api.example.com/users

# PUT - Update resource
curl -X PUT \
     -H "Content-Type: application/json" \
     -d '{"name":"John Doe","email":"john.doe@example.com"}' \
     https://api.example.com/users/123

# PATCH - Partial update
curl -X PATCH \
     -H "Content-Type: application/json" \
     -d '{"email":"newemail@example.com"}' \
     https://api.example.com/users/123

# DELETE - Remove resource
curl -X DELETE \
     -H "Authorization: Bearer TOKEN" \
     https://api.example.com/users/123

# Error handling and status checking
HTTP_CODE=$(curl -s -o response.json -w "%{http_code}" \
            https://api.example.com/users)
if [ $HTTP_CODE -eq 200 ]; then
    echo "Success"
    cat response.json | jq '.'
else
    echo "Error: HTTP $HTTP_CODE"
    cat response.json
fi

# Test API with different methods
for METHOD in GET POST PUT DELETE; do
    echo "Testing $METHOD"
    curl -X $METHOD -w "\nStatus: %{http_code}\n" \
         https://api.example.com/resource
done

GraphQL API Testing

# Basic GraphQL query
curl -X POST \
     -H "Content-Type: application/json" \
     -d '{"query": "{ users { id name email } }"}' \
     https://api.example.com/graphql

# GraphQL query with variables
curl -X POST \
     -H "Content-Type: application/json" \
     -d '{
       "query": "query GetUser($id: ID!) { user(id: $id) { name email } }",
       "variables": {"id": "123"}
     }' \
     https://api.example.com/graphql

# GraphQL mutation (the query must be a single-line JSON string;
# literal newlines inside a JSON string are invalid)
curl -X POST \
     -H "Content-Type: application/json" \
     -d '{
       "query": "mutation CreateUser($name: String!, $email: String!) { createUser(name: $name, email: $email) { id name email } }",
       "variables": {"name": "John", "email": "john@example.com"}
     }' \
     https://api.example.com/graphql

# GraphQL with authentication
curl -X POST \
     -H "Content-Type: application/json" \
     -H "Authorization: Bearer TOKEN" \
     -d '{"query": "{ me { name email } }"}' \
     https://api.example.com/graphql

# Pretty-print GraphQL response
curl -X POST \
     -H "Content-Type: application/json" \
     -d '{"query": "{ users { id name } }"}' \
     https://api.example.com/graphql | jq '.data'

# GraphQL query from file
curl -X POST \
     -H "Content-Type: application/json" \
     -d @query.json \
     https://api.example.com/graphql

API Testing Scripts

#!/bin/bash
# Complete API test script
API_URL="https://api.example.com"
TOKEN="your_token_here"

# Test GET
echo "Testing GET..."
curl -s -H "Authorization: Bearer $TOKEN" "$API_URL/users" | jq '.'

# Test POST
echo "Testing POST..."
RESPONSE=$(curl -s -X POST \
     -H "Content-Type: application/json" \
     -H "Authorization: Bearer $TOKEN" \
     -d '{"name":"Test User","email":"test@example.com"}' \
     "$API_URL/users")
USER_ID=$(echo "$RESPONSE" | jq -r '.id')
echo "Created user with ID: $USER_ID"

# Test PUT
echo "Testing PUT..."
curl -s -X PUT \
     -H "Content-Type: application/json" \
     -H "Authorization: Bearer $TOKEN" \
     -d '{"name":"Updated Name"}' \
     "$API_URL/users/$USER_ID" | jq '.'

# Test DELETE
echo "Testing DELETE..."
curl -s -X DELETE \
     -H "Authorization: Bearer $TOKEN" \
     -w "Status: %{http_code}\n" \
     "$API_URL/users/$USER_ID"

Rate Limiting & Load Testing

# Simple load test (10 requests)
for i in {1..10}; do
    curl -s -w "%{http_code} - %{time_total}s\n" \
         https://api.example.com/health
done

# Parallel load test
seq 1 100 | xargs -P 10 -I {} curl -s -w "%{http_code}\n" \
     https://api.example.com/health

# Test rate limiting
for i in {1..50}; do
    HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" \
                https://api.example.com/endpoint)
    echo "Request $i: HTTP $HTTP_CODE"
    if [ $HTTP_CODE -eq 429 ]; then
        echo "Rate limit hit at request $i"
        break
    fi
    sleep 0.1
done

# Response time testing
curl -w "
    time_namelookup:  %{time_namelookup}
    time_connect:     %{time_connect}
    time_appconnect:  %{time_appconnect}
    time_pretransfer: %{time_pretransfer}
    time_redirect:    %{time_redirect}
    time_starttransfer: %{time_starttransfer}
    time_total:       %{time_total}
\n" -o /dev/null https://api.example.com

10. PRO TIPS - Best Practices & Comparisons

When to use curl vs wget, configuration files, parallel downloads, HTTP/2 support, and debugging.

curl vs wget: When to Use Each

USE CURL
  • Interacting with REST APIs
  • Fine control over HTTP headers and methods
  • Uploading data or files
  • Testing APIs (POST, PUT, PATCH, DELETE)
  • Using protocols beyond HTTP (FTP, SMTP, LDAP)
  • Response piped to stdout for processing
  • Bearer tokens or OAuth authentication
USE WGET
  • Downloading files or entire websites
  • Recursive downloads with depth control
  • Mirroring websites for offline viewing
  • Resuming interrupted downloads
  • Background downloads
  • Simple file downloads without complex auth
  • Downloading from file lists
  • Fire-and-forget download tool
Feature               curl                  wget
Protocols             20+ protocols         HTTP, HTTPS, FTP
HTTP Methods          All methods           GET, POST
Default Behavior      Output to stdout      Save to file
Recursive Download    No                    Yes
Resume Downloads      Yes (-C -)            Yes (-c)
Background Downloads  No                    Yes (-b)
Parallel Downloads    Yes (-Z, 7.66.0+)     No
API Testing           Excellent             Limited
Cookie Management     Yes                   Yes
Authentication        Extensive             Basic
Customization         Highly customizable   Moderate

Configuration Files (.curlrc / .wgetrc)

# ~/.curlrc - Default options for all curl requests
# Follow redirects by default
location

# Show error messages
show-error

# Timeout settings
connect-timeout = 10
max-time = 300

# User agent
user-agent = "Mozilla/5.0 (curl)"

# Progress bar
progress-bar

# Retry settings
retry = 3
retry-delay = 2

# Disable .curlrc for one request (-q must be the first argument)
curl -q https://api.example.com

# Use specific config file
curl -K custom-config.txt https://api.example.com

# ~/.wgetrc - Default options for all wget requests
# Wait between requests (polite crawling)
wait = 2
random_wait = on

# Retry settings
tries = 3
retry_connrefused = on
waitretry = 10

# Timeout
timeout = 60
dns_timeout = 30
connect_timeout = 30
read_timeout = 60

# User agent
user_agent = Mozilla/5.0 (wget)

# Continue downloads
continue = on

# No parent directory traversal
no_parent = on

# Timestamping
timestamping = on

# Disable .wgetrc
wget --no-config https://example.com/file.zip

# Use specific config file
WGETRC=custom-wgetrc wget https://example.com/file.zip
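
The same -K mechanism works with throwaway config files, which keeps long option sets out of one-liners. A self-contained sketch (file:// URL just so it runs offline):

```shell
# Write a one-off config file and point curl at it with -K
cat > /tmp/demo.curlrc <<'EOF'
silent
output = "/dev/null"
write-out = "%{url_effective}\n"
EOF

curl -K /tmp/demo.curlrc file:///dev/null
# prints the effective URL: file:///dev/null
```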

Parallel Downloads

# curl parallel downloads (7.66.0+)
curl -Z https://example.com/file1.zip https://example.com/file2.zip

# Parallel with output files
curl -Z \
     https://example.com/file1.zip -o file1.zip \
     https://example.com/file2.zip -o file2.zip

# Limit concurrent connections
curl -Z --parallel-max 5 \
     https://example.com/file[1-10].zip

# Parallel download from URL list
cat urls.txt | xargs -P 5 -n 1 curl -O

# Parallel with GNU parallel
parallel -j 10 curl -O ::: $(cat urls.txt)

# wget parallel (using GNU parallel)
cat urls.txt | parallel -j 5 wget -q

# Parallel with custom naming
cat urls.txt | parallel -j 5 'wget -O output-{#}.zip {}'

HTTP/2 & HTTP/3 Support

# Check if curl supports HTTP/2
curl --version | grep HTTP2

# Use HTTP/2 (negotiated via ALPN)
curl --http2 https://api.example.com

# Force HTTP/2 prior knowledge
curl --http2-prior-knowledge http://api.example.com

# HTTP/2 with verbose output
curl --http2 -v https://api.example.com 2>&1 | grep "ALPN"

# Check if curl supports HTTP/3
curl --version | grep HTTP3

# Use HTTP/3 (QUIC)
curl --http3 https://cloudflare-quic.com

# Force HTTP/3
curl --http3 -v https://example.com

# Note: HTTP/3 support requires curl built with nghttp3 and ngtcp2

Advanced Debugging & Troubleshooting

# curl verbose debugging
curl -v https://api.example.com
# Note: repeating -v (-vv) adds nothing extra; use --trace for full detail

# Trace all communications
curl --trace trace.log https://api.example.com

# Trace ASCII (human-readable)
curl --trace-ascii trace.log https://api.example.com

# Show timing breakdown
curl -w @- https://api.example.com <<'EOF'
    time_namelookup:  %{time_namelookup}s
    time_connect:     %{time_connect}s
    time_appconnect:  %{time_appconnect}s
    time_pretransfer: %{time_pretransfer}s
    time_redirect:    %{time_redirect}s
    time_starttransfer: %{time_starttransfer}s
    time_total:       %{time_total}s
EOF

# Debug DNS resolution
curl -v https://api.example.com 2>&1 | grep "Trying"

# Test with specific IP
curl --resolve api.example.com:443:93.184.216.34 https://api.example.com

# wget debugging
wget -d https://api.example.com

# wget with server response
wget --server-response https://api.example.com

# Test SSL/TLS
curl -v https://api.example.com 2>&1 | grep -E "SSL|TLS"

# Check certificate details
curl -vI https://api.example.com 2>&1 | grep -A 10 "Server certificate"

Shell Scripting Best Practices

# Check if curl succeeded
if curl -f -s https://api.example.com > /dev/null; then
    echo "API is reachable"
else
    echo "API is down"
fi

# Store response and status code separately
HTTP_BODY=$(curl -s https://api.example.com/users)
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" \
            https://api.example.com/users)

if [ $HTTP_CODE -eq 200 ]; then
    echo "$HTTP_BODY" | jq '.'
else
    echo "Error: HTTP $HTTP_CODE"
fi

# Better: get both in one request (head -n -1 requires GNU head)
RESPONSE=$(curl -s -w "\n%{http_code}" https://api.example.com/users)
HTTP_BODY=$(echo "$RESPONSE" | head -n -1)
HTTP_CODE=$(echo "$RESPONSE" | tail -n 1)

# Retry with exponential backoff
retry_count=0
max_retries=5
until curl -f https://api.example.com; do
    retry_count=$((retry_count + 1))
    if [ $retry_count -ge $max_retries ]; then
        echo "Max retries reached"
        exit 1
    fi
    sleep_time=$((2 ** retry_count))
    echo "Retry $retry_count/$max_retries in ${sleep_time}s..."
    sleep $sleep_time
done

Performance Optimization

# Reuse connections (HTTP keep-alive)
curl --keepalive-time 60 https://api.example.com

# Force a fresh connection per request (useful for load-balancer tests)
curl --no-keepalive https://api.example.com

# Fire requests concurrently (separate processes, not a shared pool)
for i in {1..100}; do
    curl -s https://api.example.com/endpoint$i &
done
wait

# Compress responses
curl --compressed https://api.example.com

# Explicitly request compressed encodings
curl -H "Accept-Encoding: gzip, deflate, br" https://cdn.example.com/asset.js

# Parallel downloads with progress
cat urls.txt | parallel --bar -j 10 curl -O

Pro Tip: Use curl for APIs and scripting, wget for downloading files and mirroring websites. The two tools complement each other.