Volume 1, No. 11 Wednesday, March 11, 2026 Daily Edition

The AI Dispatch

“All the AI News That’s Fit to Compile”


App Store

Claude Hits No. 1 on App Store as Users Boycott ChatGPT Over Pentagon Deal

Anthropic’s chatbot surges to 11.3 million daily users — a 180% jump — as “Cancel ChatGPT” campaigns spread across social media following OpenAI’s $200 million Pentagon contract.

Claude climbed to #1 on the Apple App Store in the US and 15 other countries after OpenAI’s $200M Pentagon contract triggered a user revolt. Claude’s DAU hit 11.3M — a 180% jump since January — as “Cancel ChatGPT” campaigns went viral on Reddit and X. Users shared migration guides and compared features as they switched.

The trigger was OpenAI’s deal with the Pentagon, which gave the Department of Defense access to GPT models for “all lawful purposes” including surveillance, intelligence analysis, and logistics planning. Users who had chosen ChatGPT as a personal assistant recoiled at the idea of their AI provider building military tools. The backlash was amplified when Anthropic CEO Dario Amodei’s earlier refusal of similar Pentagon terms — specifically objecting to language that could permit autonomous weapons — was widely reshared.

For the AI industry, it marks the first time consumer values drove a major platform-switching event. OpenAI has consistently led on user acquisition, but the Pentagon deal exposed a vulnerability: users who chose a product for its intelligence now judge it on its ethics. Whether the shift is permanent or a protest spike remains to be seen, but the precedent is set — commercial AI companies can lose users over policy decisions, not just product quality.


Open Source

NVIDIA Releases Nemotron 3 Super — 120B Open Agentic Model With 5× Throughput Gains

A hybrid Mamba-Transformer MoE activating just 12B parameters at inference, with a one-million-token context window and 10 trillion tokens of open training data.

NVIDIA launched Nemotron 3 Super today, a 120-billion-parameter open-source model purpose-built for agentic workloads. Despite its size, the hybrid Mamba-Transformer Mixture-of-Experts architecture activates only 12 billion parameters during inference, delivering up to 5x higher throughput than its predecessor while supporting a one-million-token context window.
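The arithmetic behind "120B total, 12B active" can be sketched in a few lines. The expert count, top-k routing, and shared/expert parameter split below are illustrative assumptions, not published Nemotron 3 Super specs; they are chosen only to show how top-k Mixture-of-Experts routing keeps per-token compute small.

```python
# Back-of-envelope sketch of active parameters per token under top-k
# expert routing. All specific numbers here are assumptions for
# illustration, not NVIDIA's actual architecture details.

def active_params(expert_params, n_experts, top_k, shared_params):
    """Parameters touched per token: the always-on shared stack plus
    the top-k experts selected by the router."""
    per_expert = expert_params / n_experts
    return shared_params + top_k * per_expert

total = 120e9                 # full parameter count
shared = 5e9                  # assumed always-on params (attention/Mamba, embeddings)
expert = total - shared       # assumed 115e9 spread across expert FFNs

# With 64 experts and top-4 routing (assumed), only ~12B of 120B
# parameters participate in any single forward pass.
per_token = active_params(expert, n_experts=64, top_k=4, shared_params=shared)
```

Because only the routed experts are computed, throughput scales with the ~12B active parameters rather than the full 120B, which is where the claimed 5x gain comes from.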

What sets this release apart is its openness. NVIDIA is publishing over 10 trillion tokens of pre- and post-training datasets alongside the model weights, plus 15 reinforcement learning training environments. This gives the community not just a model to use but the complete infrastructure to reproduce and extend it — a level of transparency unusual for a company of NVIDIA’s scale.

The timing is strategic: Nemotron 3 Super arrives five days before NVIDIA’s GTC 2026 conference, where CEO Jensen Huang is expected to unveil the Vera Rubin GPU architecture and NemoClaw, an open-source enterprise AI agent platform. Together, these moves position NVIDIA as both the hardware and software backbone of the emerging agentic AI stack.


Funding

OpenAI Closes $110 Billion Round at $730 Billion Valuation

OpenAI finalized the largest private funding round in technology history: $110 billion at a $730 billion pre-money valuation, led by Amazon ($50B), SoftBank ($30B), and NVIDIA ($30B). As part of the deal, OpenAI will run models on Amazon Bedrock and expand its AWS compute commitment by $100 billion.

A final tranche remains open, with an additional ~$10B expected from venture capital firms and sovereign wealth funds before the end of March. The valuation places OpenAI ahead of every private technology company in history and within striking distance of the world’s largest public tech firms — a remarkable position for a company that posted $3.7 billion in revenue last year against an estimated $8.5 billion in spending.

Open Source

AI “License-Washing” — chardet Rewrite Sparks Open-Source Legal Crisis

The maintainer of chardet, a widely used Python character-detection library, used Claude to rewrite it in five days and relicensed it from LGPL to MIT, claiming a “clean room” implementation. The original author and the open-source community responded with alarm: if AI rewrites escape GPL obligations, the entire copyleft governance model could be undermined.

The controversy cuts deeper than a single library. Copyleft licenses like the GPL depend on the idea that derivative works inherit the original license — but an AI-assisted rewrite produces code that may not legally qualify as derivative, even if it replicates the same functionality. Legal experts warn this creates a massive loophole that could allow any company to “license-wash” GPL code via AI, effectively circumventing decades of open-source protections. The US Supreme Court’s recent refusal to hear an AI copyright authorship appeal leaves the legal landscape uncharted.


Silicon

Meta Unveils MTIA 300–500 Chip Roadmap to Cut NVIDIA Dependence

Meta today announced a four-chip roadmap — MTIA 300, 400 (Iris), 450 (Arke), and 500 (Astrid) — to be deployed through 2027. The MTIA 300 is already in production for content ranking and recommendations; the MTIA 400 has completed lab testing for inference workloads.

The move is part of Meta’s broader strategy to reduce reliance on third-party chipmakers. With its AI research, Instagram recommendations, and WhatsApp automation all demanding massive compute, Meta is betting that custom silicon can deliver better performance-per-watt at lower cost than general-purpose GPUs — the same calculation that led Google to develop TPUs and Amazon to build Trainium and Inferentia.

Regulation

Federal AI Governance Deadlines Arrive Today: FTC and Commerce Dept. Due to Act

March 11 marks two major federal AI deadlines under Trump’s December 2025 executive order: the FTC must publish a statement on how the FTC Act applies to AI, and the Commerce Department must identify state AI laws it deems burdensome to national policy. The DOJ’s AI Litigation Task Force stands ready to challenge non-compliant state laws in federal court.

The stakes are enormous. Thirty-eight states passed AI legislation in the past year, creating a patchwork of rules that range from bias auditing requirements to outright bans on certain AI uses. Today’s Commerce Department report could trigger a wave of federal preemption battles, potentially invalidating state protections that go beyond the federal floor. Consumer advocates warn the process is designed to weaken, not harmonize, AI regulation.


Research Spotlight

MIT Doubles LLM Training Speed by Putting Idle Compute to Work

MIT researchers published a method that exploits idle processor time during reinforcement learning training to train a smaller “draft” model that predicts the outputs of the larger reasoning LLM. The larger model verifies the drafts, eliminating the primary rollout bottleneck that consumes up to 85% of RL training time. Tests across multiple reasoning LLMs showed a 70–210% speedup with no accuracy loss.

The elegance is in the economics: the method requires zero additional hardware. During RL rollouts, processors sit partly idle while the reasoning model generates tokens one at a time. The MIT approach fills that dead time by training the draft model, effectively getting a second model trained for free. At a time when frontier model training runs cost hundreds of millions of dollars, a 2x efficiency gain without new hardware represents enormous potential savings.
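The scheduling idea can be caricatured in a few lines: whenever the big model produces a rollout token and then blocks, spend the idle slice fitting a small draft model to the tokens already produced. Everything below (the unigram "draft model", the one-token-per-step rollout) is an illustrative stand-in, not the paper's actual pipeline.

```python
# Toy sketch: reuse idle time during RL rollouts to train a draft model
# on the large model's own outputs. The "big model" and "draft model"
# here are deliberately trivial stand-ins for illustration.

import random

def rollout_step():
    """Pretend the large reasoning model emits one token, then blocks."""
    return random.choice("abcd")

def train_draft_step(draft_counts, token):
    """Stand-in for a gradient step: fit a unigram draft model to the
    big model's outputs, using time that would otherwise be wasted."""
    draft_counts[token] = draft_counts.get(token, 0) + 1

draft_counts = {}
trajectory = []
for _ in range(1000):
    tok = rollout_step()                  # big model busy generating
    trajectory.append(tok)
    train_draft_step(draft_counts, tok)   # idle slice reused for draft training

# The draft model has now been trained "for free" alongside the rollout;
# in the real method it then proposes drafts the big model only verifies.
best_guess = max(draft_counts, key=draft_counts.get)
```

The key property the sketch preserves is that draft training consumes no extra wall-clock time: every update happens inside a gap the rollout loop would have left idle anyway.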


Research

Google Introduces “Nested Learning” — A New Paradigm for Continual AI

Google Research introduced “Nested Learning,” a framework that represents ML models as nested, multi-level optimization problems, each with its own context flow and memory update frequency. The proof-of-concept model, Hope — a variant of the Titans architecture — incorporates a Continuum Memory System and demonstrates superior performance on language modeling, continual learning, and long-context reasoning.

The significance lies in what it could solve: current LLMs are frozen after training, unable to learn from new interactions without full retraining. Nested Learning offers a path toward models that continuously update their knowledge at multiple timescales — fast adaptation for recent context, slower consolidation for long-term knowledge — potentially ending the paradigm of static models that go stale the moment training completes.
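The multi-timescale idea can be shown with a minimal two-level sketch: a fast memory that adapts on every input, and a slow memory that consolidates the fast one at a lower frequency. The scalar "memories" and update rules are illustrative assumptions in the spirit of the framework, not Google's Hope architecture.

```python
# Minimal two-timescale update sketch: fast adaptation every step,
# slow consolidation every `period` steps. Purely illustrative.

def nested_update(stream, fast_lr=0.5, slow_lr=0.1, period=10):
    """Run a fast memory and a slow memory over an input stream.

    The fast level tracks recent context at every step; the slow level
    absorbs the fast memory only periodically, consolidating knowledge
    at a longer timescale.
    """
    fast, slow = 0.0, 0.0
    for t, x in enumerate(stream, start=1):
        fast += fast_lr * (x - fast)          # fast: adapt to each input
        if t % period == 0:
            slow += slow_lr * (fast - slow)   # slow: consolidate occasionally
    return fast, slow

# On a constant stream, the fast memory locks on almost immediately,
# while the slow memory converges gradually over many consolidations.
fast, slow = nested_update([1.0] * 100)
```

Nesting more levels, each with its own update frequency, is what would let a model keep fast context fresh without constantly overwriting its long-term knowledge.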

Computer Vision

Meta Releases SAM 3 — Segment Anything Now Understands Concepts

Meta AI released SAM 3, accepted to ICLR 2026, extending the Segment Anything model from pixel-level pointing to concept-level naming. The new Promptable Concept Segmentation task accepts natural language phrases, image exemplars, or both, and returns masks for all matching instances across images and video. SAM 3 doubles the accuracy of prior systems on Meta’s new SA-Co benchmark.

The leap from “point at a thing” to “name a concept” transforms what segmentation models can do. SAM 3 can find every “traffic sign partially obscured by vegetation” in a video stream or every “load-bearing wall” in an architectural photo set — tasks that previously required custom-trained models for each concept. The release is fully open source, with code, weights, and benchmark data.
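The shape of a promptable-concept-segmentation interface can be sketched with a stub: a text phrase, an image exemplar, or both select all matching instances. The class and method names below are hypothetical, not Meta's SAM 3 API, and the "masks" are trivial stand-ins.

```python
# Illustrative stub of Promptable Concept Segmentation. All names here
# are invented for the sketch; this is NOT the real SAM 3 interface.

from dataclasses import dataclass

@dataclass
class Instance:
    label: str
    mask_area: int  # stand-in for an actual pixel mask

class ConceptSegmenter:
    def __init__(self, instances):
        self.instances = instances  # pretend these came from the model

    def segment(self, phrase=None, exemplar=None):
        """Return every instance matching a text phrase and/or an
        exemplar instance (matched here by label, for illustration)."""
        hits = self.instances
        if phrase is not None:
            hits = [i for i in hits if phrase in i.label]
        if exemplar is not None:
            hits = [i for i in hits if i.label == exemplar.label]
        return hits

scene = [Instance("traffic sign", 120), Instance("tree", 900),
         Instance("traffic sign", 80)]
model = ConceptSegmenter(scene)

# One phrase returns a mask per matching instance, not a single mask.
signs = model.segment(phrase="traffic sign")
```

The point the stub captures is the contract: one concept prompt yields masks for all matching instances, rather than one mask per click as in earlier SAM versions.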


“If AI rewrites escape GPL obligations, the entire open-source governance model could be undermined.” — Simon Willison, on the chardet license-washing controversy

In Brief

NVIDIA GTC 2026 Opens March 16

Jensen Huang expected to unveil Vera Rubin GPU, a surprise “Feynman” architecture, and NemoClaw open-source agent platform to 30,000+ attendees.


Trending on GitHub

Repo | Language | Stars | Description
openclaw/openclaw | TypeScript | 247k | Self-hosted personal AI assistant with 50+ messaging integrations
obra/superpowers | Shell | ~77k | Agentic skills framework and dev methodology for coding agents
KeygraphHQ/shannon | Python | ~29k | Autonomous AI pentester with 96% exploit success rate on XBOW Benchmark
badlogic/pi-mono | TypeScript | ~18k | AI agent monorepo: CLI, multi-provider LLM API, TUI/web UI, Slack bot
666ghj/MiroFish | JS/TS | ~16k | Swarm intelligence engine simulating social dynamics with AI agent populations
msitarzewski/agency-agents | Shell | ~18k | Plug-and-play AI agent personas for Claude Code, Copilot, and Gemini CLI