Quick Reference
The essential setup flow at a glance. Run these commands in order to go from a fresh Windows machine to a working AI-powered dev environment.
# Install WSL2 + Ubuntu (run in PowerShell as Administrator)
wsl --install
# Update package lists & upgrade
sudo apt update && sudo apt upgrade -y
# Install essentials
sudo apt install -y build-essential curl git
# Install nvm
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
# Reload shell, then install Node
source ~/.bashrc
nvm install --lts
# Install Claude Code globally via npm
npm install -g @anthropic-ai/claude-code
# Verify
claude --version
# Install Codex CLI globally via npm
npm install -g @openai/codex
# Verify
codex --version
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
git config --global init.defaultBranch main
# In WSL (Linux side)
git config --global core.autocrlf input
# Create .gitattributes in projects
echo "* text=auto eol=lf" > .gitattributes
Why WSL2?
AI CLI tools like Claude Code and Codex CLI are built for Unix environments. Here's why WSL2 is the best path for Windows developers.
What Breaks on Native Windows
Running AI dev tools directly on Windows (via PowerShell or cmd) leads to a cascade of subtle, hard-to-debug problems:
- Line endings: Windows uses CRLF (`\r\n`), Unix uses LF (`\n`). AI tools generate LF code. Git diffs explode. Shell scripts break with `\r` errors.
- Missing Unix tools: `grep`, `sed`, `awk`, `chmod`, `ssh-keygen`. AI agents assume these exist. Half their output fails.
- Slow filesystem I/O: Windows struggles with large `node_modules` trees.
WSL2 Architecture
WSL2 runs a real Linux kernel inside a lightweight virtual machine managed by Windows. It's not emulation — it's actual Linux.
System Requirements
| Requirement | Minimum | Recommended |
|---|---|---|
| Windows Version | Windows 10 version 2004 (build 19041) | Windows 11 |
| RAM | 8 GB | 16 GB+ |
| Disk Space | ~1 GB for WSL + distro | 10 GB+ free for projects |
| Virtualization | Must be enabled in BIOS/UEFI | Usually on by default |
| Architecture | x64 or ARM64 | x64 |
To check your version, press Win + R, type winver, and press Enter. You'll see your Windows version and build number.
Key Benefits of WSL2
WSL2 Setup
A step-by-step walkthrough to get WSL2 and Ubuntu running on your Windows machine. No prior Linux experience needed.
Step 1: Install WSL
Open PowerShell or Windows Terminal as Administrator (right-click → "Run as administrator") and run:
wsl --install
This single command does everything:
- Enables the WSL feature
- Enables the Virtual Machine Platform
- Downloads the latest Linux kernel
- Sets WSL2 as the default version
- Installs Ubuntu (the default distribution)
After running wsl --install, you must restart your computer. Don't skip this step — WSL won't work until you reboot.
Step 2: Choose Your Distribution
Ubuntu is installed by default and is the best choice for beginners. If you want a different distro, you can see available options:
# List available distributions
wsl --list --online
# Install a specific one (example)
wsl --install -d Ubuntu-24.04
| Distribution | Best For | Notes |
|---|---|---|
| Ubuntu (default) | Beginners, general dev | Largest community, most tutorials |
| Debian | Stability purists | Lighter than Ubuntu, rock-solid |
| Fedora | Latest packages | Bleeding-edge software |
Step 3: Get Windows Terminal
Windows Terminal is the best way to use WSL. It supports tabs, split panes, and automatic WSL integration.
The legacy console:
- No tabs or panes
- Poor Unicode/emoji support
- No GPU-accelerated rendering
- Clunky copy/paste

Windows Terminal:
- Tabs + split panes
- Full Unicode & emoji
- GPU-accelerated, buttery smooth
- Auto-detects WSL distros
- Customizable themes & fonts
Install it from the Microsoft Store or via winget:
# Install Windows Terminal (from PowerShell)
winget install Microsoft.WindowsTerminal
Once installed, Ubuntu will appear as a dropdown option in Windows Terminal automatically.
Step 4: First Launch — Creating Your User
After reboot, Ubuntu launches automatically (or open it from Windows Terminal). You'll be prompted to create a Unix username and password:
Installing, this may take a few minutes...
Please create a default UNIX user account.
The username does not need to match your Windows username.
Enter new UNIX username: yourname
New password: (type your password — it won't show)
Retype new password:
passwd: password updated successfully
Installation successful!
Remember this password — you'll need it for sudo commands (installing software, changing system settings). If you forget it, you can reset it, but it's a hassle.
Step 5: Update Everything
First thing to do in any fresh Linux install: update the package lists and upgrade installed packages.
# Update package lists
sudo apt update
# Upgrade all installed packages
sudo apt upgrade -y
# Install essential build tools
sudo apt install -y build-essential curl wget git unzip
What each package gives you:
| Package | What It Provides |
|---|---|
| `build-essential` | `gcc`, `g++`, `make` — needed to compile native Node.js modules |
| `curl` | Download files and interact with APIs from the command line |
| `wget` | Another download tool, commonly used in install scripts |
| `git` | Version control — essential for every developer |
| `unzip` | Extract `.zip` files (not included by default in minimal installs) |
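A quick sanity check — not part of the install itself — is to loop over the tools above and confirm each one landed on your PATH:

```shell
# Print "ok" or "MISSING" for each essential tool
for cmd in gcc make curl wget git unzip; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd ok"
  else
    echo "$cmd MISSING"
  fi
done
```

If anything prints MISSING, rerun the `sudo apt install` line above.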
Step 6: Configure Git
Tell Git who you are. This information appears in every commit you make.
# Set your identity
git config --global user.name "Your Full Name"
git config --global user.email "your.email@example.com"
# Use 'main' as the default branch name
git config --global init.defaultBranch main
# Set default editor (nano is easiest for beginners)
git config --global core.editor nano
# Enable colored output
git config --global color.ui auto
# Verify your config
git config --global --list
Step 7: Verify Everything Works
Run these commands to confirm your WSL2 environment is healthy:
# Confirm you're running WSL2 (not WSL1)
wsl.exe -l -v
# Should show VERSION 2 next to your distro
# Check your Linux version
lsb_release -a
# Should show Ubuntu with a version number
# Verify git
git --version
# Check you're in your home directory
pwd
# Should show /home/yourname
# Verify internet connectivity
curl -s https://httpbin.org/ip | head -5
Where Are My Files?
Understanding the filesystem is critical. WSL2 has its own Linux filesystem, but it can also access Windows files:
/home/yourname/This is your Linux home directory. Always work here for best performance. AI tools and Node.js run fastest on the native Linux filesystem.
/mnt/c/Users/YourName/Your Windows C: drive, mounted in WSL. Access is 5-10x slower due to filesystem translation. Avoid running projects from here.
Keep your projects in ~/projects/ (the Linux filesystem), not in /mnt/c/ (the Windows filesystem). Filesystem translation across the boundary is slow and causes permission issues, CRLF problems, and file-watching failures.
# Create a projects directory in your Linux home
mkdir -p ~/projects
cd ~/projects
# Access your Windows Desktop (if needed)
ls /mnt/c/Users/$USER/Desktop/
# Open current WSL directory in Windows Explorer
explorer.exe .
Essential Tools
Install development tools inside WSL — not on the Windows side. Everything here runs in your Ubuntu terminal.
Git — Beyond the Basics
Git comes pre-installed with Ubuntu, but you need to configure it properly for WSL. These settings prevent common cross-platform headaches.
# Identity (required for commits)
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
# Line ending behavior (critical for WSL)
git config --global core.autocrlf false
git config --global core.eol lf
# Modern defaults
git config --global init.defaultBranch main
git config --global color.ui auto
git config --global pull.rebase false
# Credential storage (keeps you logged in)
git config --global credential.helper store
# Verify everything
git config --global --list
Why core.autocrlf false?
In WSL you're running native Linux — there's no reason for Git to convert line endings. Setting it to false means Git stores and checks out files exactly as-is. We'll cover line endings in depth in Section 05.
GitHub CLI (gh)
The GitHub CLI lets you create repos, open PRs, manage issues, and authenticate — all from the terminal. It's the fastest way to connect Git to your GitHub account.
Install GitHub CLI
Add the official repository and install:
# Add GitHub CLI repository and install gh
(type -p wget >/dev/null || sudo apt install wget -y) \
  && sudo mkdir -p -m 755 /etc/apt/keyrings \
  && wget -qO- https://cli.github.com/packages/githubcli-archive-keyring.gpg \
    | sudo tee /etc/apt/keyrings/githubcli-archive-keyring.gpg >/dev/null \
  && sudo chmod go+r /etc/apt/keyrings/githubcli-archive-keyring.gpg \
  && echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" \
    | sudo tee /etc/apt/sources.list.d/github-cli.list >/dev/null \
  && sudo apt update \
  && sudo apt install gh -y
Authenticate with GitHub
Log in using the browser-based OAuth flow:
gh auth login

Choose: GitHub.com → HTTPS → Login with a web browser. Copy the one-time code, open the URL in your browser, paste the code, and authorize.
Verify authentication
# Check login status
gh auth status

# Test by listing your repos
gh repo list --limit 5
Node.js via nvm
# Don't do this:
sudo apt install nodejs
The Ubuntu repository ships an ancient Node.js version (often v12 or v14). It also installs to a system directory, requiring sudo for every global npm install. This breaks tools like Claude Code. Always use nvm instead.
nvm (Node Version Manager) lets you install and switch between multiple Node.js versions without root access. It installs everything to your home directory.
Install nvm

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
Reload your shell
# Pick one depending on your shell
source ~/.bashrc   # for bash
source ~/.zshrc    # for zsh
Install the latest LTS version
# Install latest Long Term Support release
nvm install --lts

# Set it as your default
nvm alias default lts/*

# Verify
node --version   # Should show v20.x or v22.x
npm --version    # Should show 10.x+
| Command | What It Does |
|---|---|
| `nvm install 22` | Install Node.js v22 (latest in that major) |
| `nvm install --lts` | Install the latest LTS release |
| `nvm use 20` | Switch to Node.js v20 for the current shell |
| `nvm alias default 22` | Set v22 as the default for new shells |
| `nvm ls` | List all locally installed versions |
| `nvm ls-remote --lts` | List all available LTS versions |
| `nvm current` | Show the currently active version |
| `nvm uninstall 18` | Remove a specific installed version |
Pin your project's Node version with .nvmrc
Create a .nvmrc file in your project root with just the version number (e.g., 20). Then run nvm use in that directory to automatically switch. Team members with nvm will use the same version.
# Create .nvmrc in your project
echo "20" > .nvmrc
# Later, in that project directory:
nvm use
# Found '/home/you/projects/myapp/.nvmrc' with version <20>
# Now using node v20.x.x
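If you end up using Zsh (set up later in this guide), you can go one step further and have nvm switch automatically on cd. This is an optional ~/.zshrc fragment, a simplified sketch of the pattern shown in the nvm README:

```shell
# Optional ~/.zshrc fragment: auto-run `nvm use` when entering
# a directory that contains a .nvmrc (simplified sketch)
autoload -U add-zsh-hook
load-nvmrc() {
  [ -f .nvmrc ] && nvm use
}
add-zsh-hook chpwd load-nvmrc
```

With this in place, `cd ~/projects/myapp` picks up the project's pinned version without you typing `nvm use`.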
Python
Ubuntu ships with Python 3, but you'll often need a newer version or multiple versions. There are two solid approaches:
Option 1: deadsnakes PPA — simpler approach; install specific Python versions via Ubuntu's package manager. Best for most people.
Pros:
- Easy to install and manage
- System-level, available everywhere
- Automatic security updates via apt

Cons:
- Limited to versions the PPA provides
- Harder to switch between versions
Option 2: pyenv — more flexible; like nvm but for Python. Best if you work on multiple Python projects.
Pros:
- Install any Python version ever released
- Switch versions per-project with .python-version
- No root access needed

Cons:
- Builds from source (takes a few minutes)
- Requires build dependencies
# Add the deadsnakes PPA
sudo add-apt-repository ppa:deadsnakes/ppa -y
sudo apt update
# Install Python 3.12 (or whatever version you need)
sudo apt install -y python3.12 python3.12-venv python3.12-dev
# Verify
python3.12 --version
# Install build dependencies first
sudo apt install -y make build-essential libssl-dev zlib1g-dev \
libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm \
libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev \
libffi-dev liblzma-dev
# Install pyenv
curl https://pyenv.run | bash
# Add to your shell config (~/.bashrc or ~/.zshrc)
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
echo 'command -v pyenv >/dev/null || export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(pyenv init -)"' >> ~/.bashrc
# Reload shell
source ~/.bashrc
# Install and set a Python version
pyenv install 3.12
pyenv global 3.12
# Verify
python --version
| Command | What It Does |
|---|---|
| `python3 -m venv .venv` | Create a virtual environment in the current directory |
| `source .venv/bin/activate` | Activate the virtual environment |
| `deactivate` | Exit the virtual environment |
| `pip install package-name` | Install a package (inside venv, no sudo needed) |
| `pip freeze > requirements.txt` | Export installed packages to a file |
| `pip install -r requirements.txt` | Install all packages from a requirements file |
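Tying those commands together, a typical per-project flow looks like this (a sketch — the `demo` project name is arbitrary):

```shell
# Create a project with its own isolated environment
mkdir -p ~/projects/demo && cd ~/projects/demo
python3 -m venv .venv
source .venv/bin/activate

# Inside the venv, `python` and `pip` point at .venv, not the system install
python -c 'import sys; print(sys.prefix)'

deactivate
```

The printed prefix should end in `/.venv` — that's how you know packages will install locally rather than system-wide.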
Windows PATH Leaking into WSL
By default, WSL appends your Windows PATH to Linux's PATH. This means when you type node, you might accidentally run the Windows version of Node instead of the Linux one. Same for git, python, code, and any other tool installed on both sides.
Symptoms of PATH leaking:
- `which node` shows `/mnt/c/Program Files/nodejs/node` instead of an nvm path
- Commands run painfully slow (crossing the filesystem boundary)
- `echo $PATH` is hundreds of characters long, full of `/mnt/c/...` entries
- Wrong tool versions are picked up silently
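You can check for the leak directly. This throwaway helper (the function name is mine, not a standard tool) lists every PATH entry that comes from the Windows side:

```shell
# List PATH entries that come from the Windows side of the boundary
leaked_paths() {
  echo "$PATH" | tr ':' '\n' | grep '^/mnt/c' || true
}

# How many Windows directories are in your PATH?
leaked_paths | wc -l
```

On a clean setup this prints 0; on a default WSL install it's often a dozen or more.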
The fix: Disable Windows PATH injection in /etc/wsl.conf:
# Create or edit /etc/wsl.conf
sudo nano /etc/wsl.conf
Add these lines:
# /etc/wsl.conf
[interop]
appendWindowsPath = false
Then restart WSL from PowerShell:
# Run in PowerShell (not in WSL)
wsl --shutdown
# Reopen Ubuntu from Windows Terminal
Note: disabling appendWindowsPath also removes convenient Windows commands like explorer.exe and code (VS Code). You can still call them by full path (/mnt/c/Windows/explorer.exe) or add specific entries back to your ~/.bashrc:
# Selectively add back useful Windows commands
export PATH="$PATH:/mnt/c/Windows/System32"
export PATH="$PATH:/mnt/c/Users/$USER/AppData/Local/Programs/Microsoft VS Code/bin"
Line Endings & Encoding
The hidden gotcha that silently breaks scripts, pollutes diffs, and wastes hours of debugging time. Understand it once, fix it forever.
CRLF vs LF Explained
Every text file uses invisible characters to mark the end of each line. The problem? Windows and Linux use different ones.
Carriage Return + Line Feed
Two characters: \r\n (hex: 0D 0A)
Heritage from typewriters: "move carriage to start" + "advance paper one line." Windows kept both characters.
Line Feed only
One character: \n (hex: 0A)
Unix chose simplicity. One character, one meaning. This is the standard for programming, Git, and the web.
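You can see the difference directly in a terminal, using two scratch files that are safe to delete afterward:

```shell
# Write the same word with Windows and Unix line endings
printf 'hello\r\n' > crlf.txt
printf 'hello\n' > lf.txt

# od -c prints the raw characters: crlf.txt ends in \r \n, lf.txt in \n
od -c crlf.txt
od -c lf.txt

# Stripping the carriage return makes the files identical
sed -i 's/\r$//' crlf.txt
cmp -s crlf.txt lf.txt && echo identical
```

That `sed 's/\r$//'` is essentially what dos2unix (covered below) does for you.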
What goes wrong when CRLF sneaks into Linux files:
- Shell scripts: Bash reads the trailing `\r` as part of the command. You get cryptic errors like `$'\r': command not found` or `bad interpreter: No such file or directory`
- Config files: `.env` files and `.gitattributes` can silently break when trailing `\r` characters are present
Git Configuration
Git has a setting called core.autocrlf that controls automatic line ending conversion. Here are your options:
| Setting | On Checkout | On Commit | Verdict |
|---|---|---|---|
| `false` | No conversion | No conversion | Recommended for WSL |
| `input` | No conversion | CRLF → LF | Safe middle ground |
| `true` | LF → CRLF | CRLF → LF | Bad for WSL |
# Best for WSL: no conversion at all
git config --global core.autocrlf false
git config --global core.eol lf
# Alternative: convert CRLF to LF on commit only
# git config --global core.autocrlf input
Never set core.autocrlf true in WSL
This tells Git to convert LF to CRLF on checkout — injecting Windows line endings into your Linux filesystem. Shell scripts will break, linters will complain, and debugging will be miserable.
.gitattributes — The Definitive Fix
While core.autocrlf is a per-machine setting, .gitattributes travels with the repository. It ensures every contributor uses the same line ending rules regardless of their OS or Git config.
# Auto-detect text files and normalize to LF
* text=auto eol=lf
# Explicitly declare text files
*.js text eol=lf
*.ts text eol=lf
*.jsx text eol=lf
*.tsx text eol=lf
*.json text eol=lf
*.md text eol=lf
*.html text eol=lf
*.css text eol=lf
*.yml text eol=lf
*.yaml text eol=lf
*.sh text eol=lf
*.py text eol=lf
*.rb text eol=lf
*.sql text eol=lf
*.xml text eol=lf
*.env text eol=lf
# Declare binary files (never convert)
*.png binary
*.jpg binary
*.jpeg binary
*.gif binary
*.ico binary
*.woff binary
*.woff2 binary
*.ttf binary
*.eot binary
*.pdf binary
*.zip binary
Add .gitattributes to every project
Make this the first file you create in any new repository. It protects the entire team. The * text=auto eol=lf line alone handles 95% of cases.
If your repo already has CRLF files, you need to renormalize after adding .gitattributes:
# Renormalize all files after adding .gitattributes
git add --renormalize .
git commit -m "Normalize line endings to LF"
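To see renormalization in action, here's a throwaway repo you can build and delete — it commits a CRLF file, adds the `.gitattributes` rule, and renormalizes:

```shell
# Throwaway repo demonstrating renormalization (safe to delete afterward)
git init -q crlf-demo && cd crlf-demo
git config user.name demo && git config user.email demo@example.com

printf 'a\r\nb\r\n' > notes.txt
git add notes.txt && git commit -qm "commit a CRLF file"

echo "* text=auto eol=lf" > .gitattributes
git add .gitattributes
git add --renormalize .
git commit -qm "Normalize line endings to LF"

# i/lf in the first column means the blob stored in the repo is now LF
git ls-files --eol notes.txt
```

`git ls-files --eol` is the quickest way to audit how Git is storing line endings across any repository.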
dos2unix — Fix Existing Files
When CRLF has already crept into your files, dos2unix converts them in place.
# Install dos2unix
sudo apt install -y dos2unix
# Convert a single file
dos2unix script.sh
# Convert multiple files
dos2unix *.sh *.py *.js
# Convert all text files in a directory recursively
find . -type f -name "*.sh" -exec dos2unix {} +
How to detect CRLF in files:
# The `file` command shows line ending type
file script.sh
# script.sh: Bash script, ASCII text, with CRLF line terminators
# ^^^^^^^^^^^^^^^^^^^^^^^^
# If it says "CRLF" — that's your problem
# Show CRLF characters as ^M in a file
cat -A script.sh | head -5
# Lines ending with ^M$ have CRLF
# Lines ending with just $ are LF (correct)
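To sweep a whole project at once rather than file by file, grep can list every text file containing a carriage return (`-I` skips binaries, and the `printf` keeps the pattern portable across shells):

```shell
# List all text files under the current directory that contain CRLF
grep -rIl "$(printf '\r')" . --exclude-dir=.git
```

Pipe the output to `xargs dos2unix` if you want to fix everything it finds in one shot.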
BOM (Byte Order Mark)
The Byte Order Mark is a hidden 3-byte sequence (EF BB BF) that some Windows editors (notably old versions of Notepad) add to the beginning of UTF-8 files. It's invisible in most editors but causes real problems:
- Shell scripts fail: the shebang (#!/bin/bash) is no longer the first bytes of the file
- JSON parsing breaks: the BOM isn't valid JSON
- PHP outputs a blank line before any content
- YAML and config files may silently misparse
# Detect BOM by inspecting the first bytes
hexdump -C script.sh | head -1
# If the first three bytes are: ef bb bf — that's a BOM
# Example output with BOM:
# 00000000 ef bb bf 23 21 2f 62 69 6e 2f 62 61 73 68 0a |...#!/bin/bash.|
# ^^^^^^^^ BOM bytes
# Remove BOM with sed
sed -i '1s/^\xEF\xBB\xBF//' file.txt
# Or use dos2unix (handles BOM too)
dos2unix --remove-bom file.txt
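Here's a self-contained round trip you can try on a scratch file (this assumes GNU sed's `\x` escapes, which is what Ubuntu ships):

```shell
# Create a script that starts with a BOM (ef bb bf), then strip it
printf '\357\273\277#!/bin/bash\necho hi\n' > bom.sh

# First three bytes before: ef bb bf
head -c 3 bom.sh | od -An -tx1

sed -i '1s/^\xEF\xBB\xBF//' bom.sh

# First three bytes after: the shebang, '#!/'
head -c 3 bom.sh
```

After the sed, the shebang is genuinely the first bytes again and the script runs normally.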
.editorconfig — Prevent Issues at the Source
EditorConfig is a cross-editor standard that configures line endings, indentation, and encoding before files are even saved. Most editors (VS Code, JetBrains, Vim, Sublime) support it natively or via plugin.
# EditorConfig — https://editorconfig.org
root = true
# Apply to all files
[*]
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
charset = utf-8
indent_style = space
indent_size = 2
# Python: 4-space indent
[*.py]
indent_size = 4
# Makefiles: must use tabs
[Makefile]
indent_style = tab
# Markdown: trailing whitespace is intentional (line breaks)
[*.md]
trim_trailing_whitespace = false
Drop this file in your project root alongside .gitattributes. Together, they form a two-layer defense:
Claude Code
Anthropic's AI coding assistant that lives in your terminal. It reads your codebase, writes code, runs commands, and manages Git — all through natural language.
Installation
There are two ways to install Claude Code. Both require Node.js 18+.
# Option 1: native installer script
curl -fsSL https://claude.ai/install.sh | sh

# Option 2: npm (requires Node.js 18+)
npm install -g @anthropic-ai/claude-code
Never use sudo npm install -g
If npm asks for sudo, your Node.js was installed with apt instead of nvm. Go back to Section 04 and install Node via nvm. With nvm, global installs work without root access.
# Verify the installation
claude --version
# Check that everything is working
claude doctor
Authentication
Claude Code supports two authentication methods:
If you have a Claude subscription, just run claude and it will open a browser window for OAuth login. No API key needed.
- Easiest setup — just log in
- Usage included in your subscription
- Max plan includes generous Claude Code usage
Generate a key at console.anthropic.com and set it as an environment variable. Pay-per-use.
# Add to ~/.bashrc or ~/.zshrc
export ANTHROPIC_API_KEY="sk-ant-..."
# Reload shell config
source ~/.bashrc
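A slightly safer variant is to keep the key out of ~/.bashrc itself and source it from a separate, private file. This is just one illustrative pattern — the `~/.secrets` filename is arbitrary:

```shell
# Put the key in a private file instead of ~/.bashrc directly
touch ~/.secrets && chmod 600 ~/.secrets
echo 'export ANTHROPIC_API_KEY="sk-ant-..."' >> ~/.secrets

# Source it from ~/.bashrc (the guard skips it if the file is missing)
echo '[ -f ~/.secrets ] && . ~/.secrets' >> ~/.bashrc
```

This keeps your shell config shareable (dotfile repos, pastebins) without the key riding along.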
Never commit your ANTHROPIC_API_KEY to Git or share it publicly. Add .env to your .gitignore. If a key is leaked, rotate it immediately at console.anthropic.com.
First Run
Navigate to a project directory and launch Claude Code:
# Navigate to your project
cd ~/projects/my-app
# Launch Claude Code
claude
On first run, you'll authenticate (browser opens automatically for OAuth). Then you're in an interactive session where you can type natural language requests.
| Command | What It Does |
|---|---|
| `/help` | Show all available commands and keyboard shortcuts |
| `/init` | Generate a CLAUDE.md file for the current project |
| `/cost` | Show token usage and cost for the current session |
| `/config` | Open the configuration menu |
| `/context` | Show what files and context Claude currently has loaded |
| `/clear` | Clear conversation history (reset context window) |
| `/compact` | Summarize conversation to free up context space |
Permission Modes
Claude Code has three permission modes that control how much autonomy the AI has. Cycle between them with Shift+Tab.
CLAUDE.md — Your Project Instruction File
CLAUDE.md is a special file that Claude Code reads automatically when you start a session. It tells the AI about your project: what it is, how it's structured, what conventions to follow, and what commands to run.
# My Web App
## Project Overview
A React + TypeScript SPA with a Node.js/Express backend.
Uses PostgreSQL for data, Redis for caching.
## Tech Stack
- Frontend: React 18, TypeScript, Vite, Tailwind CSS
- Backend: Node.js 20, Express, Prisma ORM
- Database: PostgreSQL 15
- Testing: Vitest (unit), Playwright (e2e)
## Commands
- `npm run dev` — Start dev server (frontend + backend)
- `npm test` — Run unit tests
- `npm run lint` — Run ESLint + Prettier check
- `npm run build` — Production build
## Conventions
- Use functional components with hooks (no class components)
- All API routes go in `src/api/routes/`
- Database migrations in `prisma/migrations/`
- Use named exports, not default exports
- Commit messages: conventional commits format
## Important Notes
- Never modify `prisma/schema.prisma` without running
`npx prisma migrate dev` afterward
- The `.env` file is gitignored — see `.env.example`
Best practices for CLAUDE.md:
- Keep it under 300 lines — Claude reads this every session, so conciseness matters
- Focus on what's non-obvious: conventions, gotchas, project-specific patterns
- Include the commands Claude will need: test, lint, build, deploy
- List the tech stack so Claude knows what frameworks/versions to target
- Update it as your project evolves — treat it like living documentation
You can place CLAUDE.md at multiple levels: the repo root, inside subdirectories, and even in your home directory (~/.claude/CLAUDE.md) for global preferences. Claude Code merges them, with more specific files taking precedence.
# Generate a starter CLAUDE.md for your project
cd ~/projects/my-app
claude
# Then type:
/init
Cost Awareness
Claude Code uses AI models that cost money to run, whether through a subscription or API usage. Understanding costs helps you budget effectively.
| Plan | Price | Claude Code Access | Notes |
|---|---|---|---|
| Pro | $20/mo | Included (limited usage) | Good for light/occasional use |
| Max (5x) | $100/mo | Included (generous usage) | Best value for daily Claude Code users |
| Max (20x) | $200/mo | Included (heavy usage) | For power users and professionals |
| API (Pay-per-use) | Varies | Per-token billing | You control spending; requires API key |
| Model | Input (per 1M tokens) | Output (per 1M tokens) | Best For |
|---|---|---|---|
| Haiku | $0.25 | $1.25 | Quick tasks, simple edits |
| Sonnet | $3 | $15 | Balanced speed/quality (default) |
| Opus | $15 | $75 | Complex reasoning, architecture |
# Check your current session cost
/cost
# Example output:
# Total cost: $0.42
# Input tokens: 45,230
# Output tokens: 12,847
Use /compact to summarize long conversations and free up context. Use /clear to start fresh when switching tasks. Write a good CLAUDE.md so Claude doesn't waste tokens figuring out your project. On the API, a typical active day costs $2–$10 with Sonnet.
Codex CLI
OpenAI's open-source coding agent for the terminal. A different philosophy from Claude Code — worth knowing both.
Installation
Codex CLI is installed via npm, just like Claude Code. You need Node.js 18+ (which you already have if you followed the earlier sections).
# Install Codex CLI globally
npm install -g @openai/codex
# Verify installation
codex --version
Authentication
Codex CLI needs to connect to OpenAI's API. There are two ways to authenticate:
Just run codex and it will open your browser for a ChatGPT sign-in. No API key needed. Usage is tied to your ChatGPT subscription.
- Easiest setup — just log in with your ChatGPT account
- Usage included with ChatGPT Plus/Pro/Team
- No environment variables to manage
Generate a key at platform.openai.com and set it as an environment variable. Pay-per-use pricing.
# Add to ~/.bashrc or ~/.zshrc
export OPENAI_API_KEY="sk-..."
# Reload shell config
source ~/.bashrc
Never commit your OPENAI_API_KEY to Git. Add .env to your .gitignore. Rotate it immediately if leaked.
Basic Usage
Codex CLI has two primary modes: interactive and non-interactive.
# Interactive mode — opens a REPL session
codex
# Non-interactive — execute a single task and exit
codex exec "add error handling to the login function"
# Run with a specific model
codex --model o4-mini
# Quiet mode — less output, just results
codex --quiet exec "fix the failing test in auth.test.js"
| Command | What It Does |
|---|---|
| `/help` | Show available commands and options |
| `/model` | Switch between models (o4-mini, o3, etc.) |
| `/approval` | Change the approval mode (suggest, auto-edit, full-auto) |
| `/undo` | Revert the last file change |
| `/diff` | Show pending changes as a diff |
| `/clear` | Clear the conversation history |
Codex CLI also supports three approval modes, similar to Claude Code's permission system:
Claude Code vs. Codex CLI
Both are excellent AI coding agents, but they have different strengths and philosophies. Here's an honest comparison to help you decide when to reach for each one.
| Aspect | Claude Code | Codex CLI |
|---|---|---|
| Philosophy | "Measure twice, cut once" — thorough analysis before acting | "Move fast" — quick iterations, rapid prototyping |
| Coding Accuracy | Excellent on complex, multi-file refactors. Strong architectural reasoning | Very good for focused tasks. Excels at code generation speed |
| Speed | Slightly slower — reads more context, thinks longer | Generally faster response times, especially with o4-mini |
| MCP Support | Full MCP (Model Context Protocol) — connects to databases, APIs, external tools | MCP support available, growing ecosystem |
| Context Handling | 200K token window. Reads entire repos. Excellent at understanding large codebases | Uses sandboxed execution. Files loaded on demand |
| Pricing Model | Claude Pro/Max subscription or Anthropic API (pay-per-token) | ChatGPT Plus/Pro subscription or OpenAI API (pay-per-token) |
| Open Source | Source-available (not fully open source) | Fully open source (Apache 2.0) |
| Best At | Large refactors, understanding complex codebases, careful edits, Git workflows | Quick tasks, rapid prototyping, code generation, exploring ideas |
Claude Code strengths:
- Reads your entire codebase for context-aware edits
- CLAUDE.md project instructions for consistent behavior
- Rich MCP ecosystem (databases, APIs, tools)
- Excellent at multi-file refactors and architecture
- Built-in Git integration (commits, PRs, reviews)
Codex CLI strengths:
- Open source — you can inspect and modify the code
- Fast iteration with `codex exec` one-liners
- Sandboxed execution for safety
- Multiple model choices (o4-mini for speed, o3 for power)
- Lightweight and focused on the task at hand
Terminal & Shell
Customize your terminal environment for productivity and comfort. A well-configured shell saves hours every week.
Zsh Installation
Bash is the default shell in Ubuntu, but Zsh is a more feature-rich alternative with better autocompletion, theming, and plugin support. Most modern developer setups use Zsh.
# Install Zsh
sudo apt install -y zsh
# Verify installation
zsh --version
# Set Zsh as your default shell
chsh -s $(which zsh)
# Log out and back in (or restart your terminal)
# Verify it took effect
echo $SHELL
# Should output: /usr/bin/zsh
On first launch, Zsh shows a configuration wizard — press 0 to create a blank .zshrc. We'll fill it with Oh My Zsh in the next step, which overwrites the default config anyway.
Oh My Zsh
Oh My Zsh is a community-driven framework for managing your Zsh configuration. It provides themes, plugins, and sensible defaults that make Zsh immediately more useful.
# Install Oh My Zsh (downloads and sets up automatically)
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
After installation, Oh My Zsh gives you:
After installation, Oh My Zsh gives you a theme system and a plugin system. The default theme, robbyrussell, is clean and informative; change it with ZSH_THEME="..." in .zshrc. Plugins are enabled via the plugins=(...) array in .zshrc. Built-in plugins worth knowing:

| Plugin | What It Does |
|---|---|
| `git` | Aliases like `gst` (status), `gco` (checkout), `gp` (push). Dozens of shortcuts. |
| `sudo` | Press Esc twice to prepend `sudo` to the current or previous command. |
| `z` | Jump to frequently-used directories by partial name. Like `cd` with memory. |
| `copypath` | Copy the current directory path to the clipboard. |
| `web-search` | Search Google from the terminal: `google "how to rebase"` |
Essential Plugins
These two third-party plugins are the most impactful upgrades you can make to your shell. They need to be installed separately, then enabled in .zshrc.
Fish-like autosuggestions. As you type, it shows a dimmed suggestion from your history. Press → to accept.
# Install
git clone https://github.com/zsh-users/zsh-autosuggestions \
${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
Real-time command coloring. Valid commands turn green, errors turn red, strings are highlighted — all as you type.
# Install
git clone https://github.com/zsh-users/zsh-syntax-highlighting \
${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting
After installing both, enable them in your .zshrc plugins array:
# In ~/.zshrc, find the plugins=(...) line and update it:
plugins=(
git
sudo
z
zsh-autosuggestions
zsh-syntax-highlighting # Must be LAST in the list
)
zsh-syntax-highlighting must be the last plugin in the plugins=(...) list. If it's not last, it may not work correctly or could interfere with other plugins. This is documented in its official README.
# Reload your config to activate
source ~/.zshrc
Starship Prompt
Starship is a modern, cross-shell prompt that's fast, highly customizable, and looks great out of the box. It shows useful info like Git branch, Node version, Python virtualenv, and command duration — all without slowing down your terminal.
# Install Starship
curl -sS https://starship.rs/install.sh | sh
# Add to the END of ~/.zshrc
eval "$(starship init zsh)"
# Reload
source ~/.zshrc
Starship works immediately with zero configuration, but you can customize it by creating a config file:
# Create Starship config directory and file
mkdir -p ~/.config
touch ~/.config/starship.toml
# Minimal, clean prompt config
format = """
$directory\
$git_branch\
$git_status\
$nodejs\
$python\
$cmd_duration\
$line_break\
$character"""
[directory]
truncation_length = 3
truncate_to_repo = true
[git_branch]
symbol = " "
[git_status]
ahead = "+"
behind = "-"
diverged = "+-"
[nodejs]
symbol = " "
detect_files = ["package.json"]
[python]
symbol = " "
[cmd_duration]
min_time = 2000 # Only show for commands taking 2+ seconds
format = " took [$duration]($style)"
[character]
success_symbol = "[>](bold green)"
error_symbol = "[>](bold red)"
.zshrc Tips
Your ~/.zshrc is your shell's configuration file. It runs every time you open a terminal. Here are some useful additions:
# Modern replacements for classic tools
alias cat="batcat --style=plain" # bat: cat with syntax highlighting
alias ls="eza --icons --group-directories-first" # eza: modern ls
alias ll="eza -la --icons --group-directories-first --git"
alias find="fdfind" # fd: intuitive find replacement
alias grep="rg" # ripgrep: fast grep
# Quick navigation
alias ..="cd .."
alias ...="cd ../.."
alias proj="cd ~/projects"
# Git shortcuts (beyond what oh-my-zsh provides)
alias gs="git status"
alias gl="git log --oneline -20"
alias gd="git diff"
# History settings — add to ~/.zshrc
HISTSIZE=50000 # Commands to keep in memory
SAVEHIST=50000 # Commands to save to file
HISTFILE=~/.zsh_history
setopt HIST_IGNORE_DUPS # Don't store duplicate commands
setopt HIST_IGNORE_SPACE # Don't store commands starting with space
setopt SHARE_HISTORY # Share history between terminals
setopt HIST_REDUCE_BLANKS # Remove extra blanks from commands
setopt INC_APPEND_HISTORY # Write immediately, don't wait for exit
# nvm is slow to load (~200-400ms). Lazy-load it instead.
# Replace the default nvm init block with:
export NVM_DIR="$HOME/.nvm"
# This loads nvm only when you first call node, npm, npx, or nvm
lazy_load_nvm() {
unset -f node npm npx nvm
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
}
node() { lazy_load_nvm; node "$@"; }
npm() { lazy_load_nvm; npm "$@"; }
npx() { lazy_load_nvm; npx "$@"; }
nvm() { lazy_load_nvm; nvm "$@"; }
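The lazy-load trick above is a general shell pattern: a stub function removes itself on first use, runs the expensive init once, then re-dispatches to the real command. A self-contained sketch, using a fake heavy_tool as a stand-in for nvm:

```shell
#!/usr/bin/env bash
# Stand-in for sourcing the slow ~/.nvm/nvm.sh init script
load_heavy() {
  heavy_tool() { echo "real heavy_tool: $*"; }
}

# Stub: replaces itself with the real function on first call
heavy_tool() {
  unset -f heavy_tool   # drop the stub
  load_heavy            # pay the init cost exactly once
  heavy_tool "$@"       # re-dispatch to the real function
}

heavy_tool hello   # first call triggers the load
heavy_tool again   # later calls go straight to the real function
```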
Run time zsh -i -c exit to see how long your shell takes to start. Under 200ms is great. If it's over 500ms, lazy-loading nvm and reducing plugins will help. A fast shell makes every new terminal feel instant.
Nerd Fonts
Starship, Powerlevel10k, and many CLI tools use special icons (glyphs) that aren't in standard fonts. Nerd Fonts are patched fonts that include thousands of these icons alongside normal characters.
Since your terminal runs on the Windows side, the Nerd Font must be installed on Windows, not inside WSL.
1. Download a Nerd Font. Visit nerdfonts.com/font-downloads. Popular choices: FiraCode Nerd Font, JetBrainsMono Nerd Font, or MesloLGS NF.
2. Install on Windows. Extract the zip, select all .ttf files, right-click, and choose "Install for all users".
3. Configure Windows Terminal. Open Windows Terminal → Settings → your Ubuntu profile → Appearance → Font face. Select the Nerd Font you installed (e.g., "FiraCode Nerd Font").
CLI Power Tools
Essential TUI and CLI tools that make terminal life better. These modern replacements are faster, friendlier, and more beautiful than the classics.
The Essential Toolkit
Each of these tools is a drop-in upgrade for a common task. Install them all or pick the ones that fit your workflow. They're sorted from "install this first" to "nice to have".
lazygit: a full terminal UI for Git. Stage, commit, branch, and resolve conflicts with single keystrokes.
# Install lazygit
sudo add-apt-repository ppa:lazygit-team/daily
sudo apt update
sudo apt install -y lazygit
fzf: a general-purpose fuzzy finder. Press Ctrl+R for fuzzy history search, and pipe any list into it.
# Install fzf
sudo apt install -y fzf
# Example: fuzzy-find a file
vim $(fzf)
ripgrep (rg): a fast grep replacement. It respects .gitignore, searches recursively by default, and highlights matches.
# Install ripgrep
sudo apt install -y ripgrep
# Example: search for a function
rg "function handleClick" --type js
fd: an intuitive find replacement. Simple syntax, colorized output, respects .gitignore, and 5x faster than find.
# Install fd
sudo apt install -y fd-find
# The binary is 'fdfind' — alias it
alias fd="fdfind"
# Example: find all .ts files
fd -e ts
bat: cat with syntax highlighting, line numbers, and Git integration.
# Install bat
sudo apt install -y bat
# The binary is 'batcat' — alias it
alias bat="batcat"
# Example: view a file with highlighting
bat src/index.ts
eza: a modern ls replacement with icons, colors, Git status, and tree view. Makes directory listings beautiful and informative.
# Install eza
sudo apt install -y eza
# Example: list with icons and git info
eza -la --icons --git
zoxide: a smarter cd that learns your most-used directories. Type z proj instead of cd ~/projects/my-project.
# Install via install script
curl -sSfL https://raw.githubusercontent.com/ajeetdsouza/zoxide/main/install.sh | sh
# Add to ~/.zshrc
eval "$(zoxide init zsh)"
# Usage: jump by partial name
z proj # Jumps to ~/projects
z my-app # Jumps to ~/projects/my-app
McFly: replaces Ctrl+R with context-aware suggestions based on your directory and recent commands.
# Install via cargo (requires Rust)
cargo install mcfly
# Add to ~/.zshrc
eval "$(mcfly init zsh)"
Midnight Commander (mc): a classic two-pane terminal file manager.
# Install mc
sudo apt install -y mc
# Launch it
mc
btop: a resource monitor that makes htop look dated.
# Install btop
sudo apt install -y btop
# Launch it
btop
tldr: community-maintained, example-first help pages for common commands.
# Install tldr
sudo apt install -y tldr
# Update the page cache
tldr --update
# Example: quick help for tar
tldr tar
delta: a syntax-highlighting pager for git diff output.
# Install delta
sudo apt install -y git-delta
# Configure git to use delta
git config --global core.pager delta
git config --global interactive.diffFilter "delta --color-only"
git config --global delta.navigate true
git config --global delta.side-by-side true
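If you prefer editing the file directly, those four git config commands simply write the following sections into ~/.gitconfig:

```ini
[core]
    pager = delta
[interactive]
    diffFilter = delta --color-only
[delta]
    navigate = true
    side-by-side = true
```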
HTTPie: a human-friendly HTTP client for testing APIs from the command line.
# Install httpie
sudo apt install -y httpie
# Example: GET request
http httpbin.org/get
# Example: POST JSON
http POST api.example.com/users name=John
Quick Install
Install all the apt-based tools in a single command:
# Install all apt-based CLI power tools at once
sudo apt install -y \
fzf \
ripgrep \
fd-find \
bat \
eza \
mc \
btop \
tldr \
git-delta \
httpie
Note that lazygit isn't in the default Ubuntu repos. Add the PPA first with sudo add-apt-repository ppa:lazygit-team/daily, then run sudo apt install lazygit.
mcfly and zoxide can be installed via cargo (Rust's package manager). If you don't have Rust installed:
# Install Rust (if not already installed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source ~/.cargo/env
# Then install cargo-based tools
cargo install mcfly zoxide
Alternatively, zoxide has an install script that doesn't require Rust. Check each tool's GitHub repo for alternative install methods.
Shell Integration Cheat Sheet
After installing the tools, add these lines to your ~/.zshrc to wire everything together:
# ===== Aliases for tools with different binary names =====
alias bat="batcat"
alias fd="fdfind"
# ===== Modern tool aliases =====
alias cat="batcat --style=plain"
alias ls="eza --icons --group-directories-first"
alias ll="eza -la --icons --group-directories-first --git"
# ===== Tool initialization =====
eval "$(zoxide init zsh)" # Smarter cd
eval "$(mcfly init zsh)" # AI-powered history
eval "$(starship init zsh)" # Starship prompt (keep this last)
Order matters in your .zshrc: tool initializations (eval "$(...)") should come after Oh My Zsh is sourced and after your aliases. Starship's eval should be the very last line. This ensures everything plays nicely together.
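Putting that ordering together, a minimal ~/.zshrc skeleton might look like the sketch below (this assumes Oh My Zsh, the plugins, and the tools above are already installed; adjust names to your setup):

```shell
# ~/.zshrc -- sketch of a working order
export ZSH="$HOME/.oh-my-zsh"
plugins=(git sudo z zsh-autosuggestions zsh-syntax-highlighting)
source $ZSH/oh-my-zsh.sh          # 1. Oh My Zsh first

alias bat="batcat"                # 2. aliases next
alias fd="fdfind"

eval "$(zoxide init zsh)"         # 3. tool initializations
eval "$(mcfly init zsh)"
eval "$(starship init zsh)"       # 4. Starship last
```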
Common Pitfalls
Save yourself hours of debugging. These are the gotchas that trip up every WSL2 developer at least once.
/mnt/c Performance Penalty
Accessing Windows files under /mnt/c/ from WSL2 is 10-15x slower than using the native Linux filesystem. This affects everything: npm install, git status, file watchers, builds, and AI CLI tools that scan your codebase.
# DON'T do this
cd /mnt/c/Users/YourName/Documents/projects
npm install # Takes 2-3 minutes
git status # Takes 5-10 seconds
Files cross a 9P protocol bridge between the Linux kernel and Windows NTFS. Every single file operation pays this penalty.
# DO this instead
cd ~/projects/my-app
npm install # Takes 15-20 seconds
git status # Instant
Native ext4 filesystem inside the WSL2 VM. Full Linux I/O performance. This is where all your code should live.
# Test it yourself — create a test project in both locations
# On /mnt/c (Windows filesystem)
cd /mnt/c/Users/YourName/Desktop
mkdir test-speed && cd test-speed
time npm init -y && time npm install express
# Real time: ~45 seconds
# On ~ (Linux filesystem)
cd ~
mkdir test-speed && cd test-speed
time npm init -y && time npm install express
# Real time: ~4 seconds
To reach your Linux files from Windows Explorer, open \\wsl$\Ubuntu\home\yourname. You can also pin this path to Quick Access for easy drag-and-drop.
File Permissions
By default, chmod has no effect on files stored in /mnt/c/ because NTFS doesn't understand Unix permissions. This breaks SSH keys, scripts, and any tool that checks file modes.
Running chmod 600 ~/.ssh/id_rsa on a file stored under /mnt/c/ does nothing. SSH will refuse to use keys with "too open" permissions, and scripts won't be executable.
Solution: Enable metadata support in /etc/wsl.conf so WSL can store Unix permissions alongside NTFS files:
# Create or edit /etc/wsl.conf
sudo nano /etc/wsl.conf
# Add these lines:
[automount]
enabled = true
options = "metadata,umask=22,fmask=11"
mountFsTab = false
Keep SSH keys in ~/.ssh/ on the Linux filesystem, never on /mnt/c/. Copy them from Windows if needed: cp /mnt/c/Users/YourName/.ssh/id_rsa ~/.ssh/ && chmod 600 ~/.ssh/id_rsa
Memory Limits
WSL2 is hungry. By default, it claims 50% of your total RAM or 8 GB (whichever is less on newer builds). If you have 16 GB of RAM, WSL2 may take 8 GB and leave Windows starving.
Solution: Create a .wslconfig file on the Windows side to set memory limits:
# Create this file at: C:\Users\YourName\.wslconfig
# (Use Notepad or VS Code on Windows)
[wsl2]
memory=8GB
swap=8GB
processors=4
Adjust to your machine: If you have 16 GB RAM, memory=8GB is a good balance. For 32 GB machines, try memory=12GB. The processors value limits CPU cores visible to WSL.
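After restarting WSL with the new config, you can verify from inside the VM that the limits took effect:

```shell
# 'Mem: total' should be close to the memory= value in .wslconfig
free -h
# Should print the processors= value
nproc
```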
Changes to .wslconfig only take effect after a full restart: run wsl --shutdown, then reopen your terminal. See the "Forgetting wsl --shutdown" pitfall below.
DNS Resolution
DNS failures are one of the most frustrating WSL2 issues. Symptoms: apt update fails, curl can't resolve hosts, npm install times out — but your Windows browser works fine.
By default, WSL auto-generates /etc/resolv.conf using Windows DNS settings. VPNs and corporate proxies often break this auto-detection, leaving WSL with no working DNS.
Solution A (Windows 11 22H2+): Enable DNS tunneling in .wslconfig:
[wsl2]
dnsTunneling=true
Solution B (manual fix for older versions): Disable auto-generation and set DNS manually:
# Prevent WSL from overwriting resolv.conf
sudo nano /etc/wsl.conf
# Add these lines:
[network]
generateResolvConf = false
# Remove the auto-generated symlink and create a real file
sudo rm /etc/resolv.conf
sudo nano /etc/resolv.conf
# Add reliable public DNS servers:
nameserver 8.8.8.8
nameserver 8.8.4.4
nameserver 1.1.1.1
After making either change, run wsl --shutdown from PowerShell. Then test with ping google.com to confirm DNS is working.
Networking / localhost
You start a dev server in WSL (like npm run dev on port 3000), open localhost:3000 in your Windows browser, and... nothing loads.
WSL2 runs behind its own NAT'd virtual network, and localhost forwarding between Windows and WSL can be unreliable or broken entirely.
Solution A (Windows 11 22H2+ recommended): Enable mirrored networking mode:
[wsl2]
networkingMode=mirrored
Mirrored mode makes WSL2 share the host's network interfaces. localhost works exactly as expected in both directions.
Solution B (find WSL IP manually):
# Find your WSL2 IP address
ip addr show eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}'
# Example output: 172.28.160.1
# Use this IP in your Windows browser instead of localhost
# e.g., http://172.28.160.1:3000
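If you want to sanity-check that grep pattern without a live interface, you can feed it a canned ip addr line (the address below is just an example):

```shell
# Simulated line from 'ip addr show eth0'; the lookbehind grabs the bare IPv4
echo "    inet 172.28.160.1/20 brd 172.28.175.255 scope global eth0" \
  | grep -oP '(?<=inet\s)\d+(\.\d+){3}'
# → 172.28.160.1
```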
All of these VM-level fixes live in the same .wslconfig file. Memory limits, DNS tunneling, and mirrored networking all go under the [wsl2] header.
# C:\Users\YourName\.wslconfig
[wsl2]
memory=8GB
swap=8GB
processors=4
dnsTunneling=true
networkingMode=mirrored
Forgetting wsl --shutdown
You edit .wslconfig or /etc/wsl.conf, restart your terminal, and nothing changed. That's because config changes require a full WSL restart, not just closing the terminal window.
Why? Both .wslconfig and /etc/wsl.conf are read only once, at VM boot time.
Solution: Always run this from PowerShell (not from inside WSL) after changing any config:
# Run this in PowerShell (not WSL!)
wsl --shutdown
# Wait a few seconds, then reopen your Ubuntu terminal
# The VM will boot fresh with your new settings
| File | Location | Controls | Restart Needed |
|---|---|---|---|
| .wslconfig | Windows: C:\Users\YourName\ | VM-level settings: memory, CPU, networking, DNS | wsl --shutdown |
| wsl.conf | Linux: /etc/wsl.conf | Per-distro settings: automount, network, interop | wsl --shutdown |
| .zshrc | Linux: ~/.zshrc | Shell config: aliases, plugins, PATH | source ~/.zshrc |
Tips & Workflow
Bringing it all together — your daily workflow, best practices, and where to go from here.
VS Code + WSL Remote
VS Code is the best editor for WSL development. Install it on the Windows side, and it transparently connects to your Linux environment via the WSL extension.
1. Install VS Code on Windows. Download it from code.visualstudio.com and run the Windows installer normally.
2. Install the WSL extension. Open VS Code, go to Extensions (Ctrl+Shift+X), search for "WSL" by Microsoft, and install it. This lets VS Code run a server inside WSL.
3. Open projects from WSL. In your WSL terminal, navigate to any project and type code . to open it in VS Code. The bottom-left corner will show "WSL: Ubuntu" confirming the connection.
# From your WSL terminal
cd ~/projects/my-app
code .
| Extension | What It Does |
|---|---|
| GitHub Copilot | AI autocomplete in the editor. Complements CLI tools by handling inline suggestions. |
| GitLens | See who changed each line, browse file history, compare branches — all inline. |
| Prettier | Auto-format code on save. Keeps AI-generated code consistent with your style. |
| ESLint | Catch errors and enforce code quality rules as you type. |
| Error Lens | Highlights errors and warnings inline, right next to the problematic code. |
Project Organization
A clean directory structure saves time and prevents confusion. Keep everything on the Linux filesystem.
~ # /home/yourname (Linux home)
|
+-- projects/ # All your code lives here
| +-- my-app/ # Individual project repos
| +-- portfolio/
| +-- api-server/
|
+-- learning/ # Tutorials, experiments, scratch
| +-- javascript-30/
| +-- python-basics/
|
+-- dotfiles/ # Your config files (version controlled)
| +-- .zshrc
| +-- .gitconfig
| +-- setup.sh # Script to set up a fresh machine
|
+-- .ssh/ # SSH keys (Linux filesystem only!)
Never keep projects on /mnt/c/. If you have existing projects on the Windows side, move them: cp -r /mnt/c/Users/YourName/projects/my-app ~/projects/
Daily Workflow Example
Here's what a typical AI-assisted development session looks like once everything is set up:
1. Open your terminal. Launch Windows Terminal. It opens directly into your WSL Ubuntu shell with Zsh, Oh My Zsh, and the Starship prompt ready to go.
2. Navigate to your project: cd ~/projects/my-app
3. Open VS Code: code . — VS Code connects to WSL automatically, and the integrated terminal inside VS Code is also running in WSL.
4. Start an AI coding session. Open the VS Code integrated terminal (or a separate terminal tab) and launch your AI tool:
# For deep, multi-file work
claude
# For quick, focused tasks
codex "add input validation to the signup form"
5. Work with the AI. Describe what you want in natural language. Review the changes the AI proposes. Accept, reject, or refine. Iterate until it's right.
6. Commit and push:
# Stage and commit your changes
git add -A
git commit -m "Add input validation to signup form"
# Push to GitHub as backup
git push origin main
Git Best Practices for Beginners
Good Git habits make collaboration easier, AI tools more effective, and your future self grateful.
Work on branches: create one with git checkout -b feature/signup and merge to main when done. This keeps main stable and your experiments safe. Push often: git push origin main takes two seconds.
# A good daily Git workflow
git status # See what changed
git diff # Review the actual changes
git add -A # Stage everything
git commit -m "Add user avatar upload" # Descriptive message
git push origin main # Back up to GitHub
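The branch-then-merge habit can be tried risk-free in a throwaway repo. A sketch (assumes git 2.28+ for init -b; file names are just for the demo):

```shell
#!/usr/bin/env bash
set -e
repo=$(mktemp -d)                   # throwaway repo, safe to run anywhere
cd "$repo"
git init -q -b main
git config user.email demo@example.com
git config user.name Demo
echo "v1" > app.txt
git add -A && git commit -qm "Initial commit"

git checkout -q -b feature/signup   # branch off main
echo "validation" >> app.txt        # ...make your changes...
git add -A && git commit -qm "Add signup validation"

git checkout -q main
git merge -q feature/signup         # main now has both commits
git branch -d feature/signup        # clean up the merged branch
git log --oneline                   # both commits visible on main
```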
Learning Path
You've got your environment set up. Now what? Here's a roadmap for building real skills:
1. Build a small project. Don't just follow tutorials: build something real. A personal website, a CLI tool, a simple API. Use Claude Code or Codex to help, but make sure you understand the code they generate.
2. Learn Git branching. Move beyond git add/commit/push. Learn git branch, git merge, git rebase, and how to resolve merge conflicts. Try learngitbranching.js.org for interactive practice.
3. Explore MCP integrations. Claude Code's MCP (Model Context Protocol) lets it connect to databases, APIs, and external tools. Try connecting it to a SQLite database or a REST API to see how it handles real-world data.
4. Compare Claude Code and Codex on the same task. Give both tools the same prompt on the same project. Compare their approaches, code quality, and speed. You'll develop intuition for when to reach for each tool.
5. Version-control your dotfiles. Create a ~/dotfiles repo with your .zshrc, .gitconfig, and a setup script. When you get a new machine or reinstall WSL, you can be productive in minutes instead of hours.
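A dotfiles setup script can be as small as symlinking each tracked file into $HOME. This sketch demos the idea against throwaway directories so it's safe to run as-is; in a real dotfiles repo you'd point dotdir at the repo and drop the mktemp lines:

```shell
#!/usr/bin/env bash
# Sketch of dotfiles/setup.sh: symlink tracked dotfiles into $HOME.
# Demoed against temp dirs standing in for ~/dotfiles and $HOME.
set -e
dotdir=$(mktemp -d)                  # stands in for ~/dotfiles
home=$(mktemp -d)                    # stands in for $HOME
touch "$dotdir/.zshrc" "$dotdir/.gitconfig"

for f in .zshrc .gitconfig; do
  ln -sf "$dotdir/$f" "$home/$f"     # -sf: replace any existing file/link
  echo "linked $f"
done
ls -lA "$home"
```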
Resources
Official documentation, essential repos, and community resources to bookmark.