Volume 1, No. 5 · Thursday, March 5, 2026 · Daily Edition

The AI Dispatch

“All the AI News That’s Fit to Compile”


AI Policy & National Security

Defense Experts and Big Tech Close Ranks Behind Anthropic

Thirty former defense and intelligence officials call the Pentagon’s supply-chain-risk designation a “dangerous precedent” as the tech industry mounts an unprecedented unified lobbying campaign and employee protests surge past 900 signatures.

Thirty former senior defense and intelligence officials — including former CIA Director Michael Hayden, two retired four-star generals, and half a dozen former under-secretaries of defense — sent a letter to Congressional leadership on March 5 warning that the Pentagon’s supply-chain-risk designation against Anthropic represents a “dangerous precedent that will chill private-sector investment in national security AI for a generation.” The letter, obtained by CNBC, argues that punishing a company for maintaining safety guardrails sends precisely the wrong signal at a moment when the United States needs its most capable AI firms engaged in defense applications, not excluded from them.

The letter is the most significant intervention yet in the escalating standoff between the Trump administration and Anthropic — the supply-chain-risk designation reported in previous editions that was triggered when CEO Dario Amodei refused Defense Secretary Pete Hegseth’s demands to remove safeguards against mass surveillance and autonomous weapons targeting. The former officials argue that the designation was “politically motivated rather than grounded in any legitimate supply-chain analysis” and that it undermines the credibility of the Defense Department’s risk assessment process, which is meant to identify genuine foreign-influence and counterfeit-component threats rather than serve as a tool of policy retaliation.

The defense establishment’s pushback arrived the same day that the Information Technology Industry Council — whose members include Nvidia, Google, Apple, Microsoft, and Amazon — sent its own letter directly to Hegseth, warning that the designation creates “regulatory uncertainty that jeopardizes billions of dollars in planned AI infrastructure investment.” In a separate, coordinated action, four additional trade associations — the Software & Information Industry Association, TechNet, the Computer & Communications Industry Association, and the Business Software Alliance — jointly wrote to President Trump urging the administration to reverse course. The breadth of the industry response is remarkable: companies that compete fiercely with Anthropic for government contracts and commercial market share have concluded that the precedent of using supply-chain designations as political leverage poses an existential threat to the entire sector.

Meanwhile, the employee-driven “We Will Not Be Divided” open letter at notdivided.org surged past 900 signatures, with roughly 800 coming from Google employees and over 100 from OpenAI staff. The petition’s central argument — that the administration is deliberately trying to fracture the AI industry by rewarding compliant companies and punishing those that maintain safety commitments — appears to be resonating across organizational boundaries in a way that few internal tech-worker campaigns have managed.


Product Launch

OpenAI Ships GPT-5.3 Instant with 27% Fewer Hallucinations

OpenAI released GPT-5.3 Instant on March 3 to all ChatGPT users and API developers, positioning it as a reliability-focused update rather than a capability leap. The headline metric is a 26.8% reduction in hallucinations when web search is enabled and a 19.7% reduction without it — numbers that reflect OpenAI’s growing acknowledgment that factual accuracy, not raw benchmark performance, is the binding constraint on enterprise adoption.

The model also ships with what OpenAI describes as “significantly fewer unnecessary refusals” — a tacit admission that previous safety tuning had overcorrected, producing the “cringe” moralizing preambles that became a running joke among power users. The recalibration aims to reduce false-positive content refusals while maintaining genuine safety boundaries, a balance that every major lab has struggled to strike as they optimize for both helpfulness and harm avoidance.

GPT-5.2 Instant moves to legacy status immediately and will be fully retired on June 3. Perhaps more notably, OpenAI hinted that GPT-5.4 is arriving “sooner than the community might expect,” suggesting the company is accelerating its release cadence for the Instant tier — a sign that the competitive pressure from Claude 4.5 Haiku and Gemini 2.5 Flash is reshaping OpenAI’s product strategy in real time.

Open Source

Alibaba’s Qwen3.5 Small: A 9B Model That Beats GPT-OSS-120B

Alibaba’s Qwen team released the Qwen3.5 Small series under the Apache 2.0 license, delivering four model sizes — 0.8B, 2B, 4B, and 9B parameters — all designed to run on consumer hardware. The standout result: Qwen3.5-9B outperforms OpenAI’s gpt-oss-120B on key reasoning and code benchmarks despite being approximately 13 times smaller, a ratio that challenges fundamental assumptions about the relationship between parameter count and capability.

The efficiency gains are not limited to the 9B variant. Qwen3.5-35B-A3B, a Mixture-of-Experts model that activates only 3 billion parameters per inference pass, beats both GPT-5 mini and Claude Sonnet 4.5 on several standard benchmarks — a result that, if independently confirmed, would represent the most dramatic demonstration yet of MoE architecture’s ability to deliver frontier-adjacent performance at a fraction of the computational cost.

For developers working under hardware constraints, the practical implications are immediate: the 9B model supports context windows exceeding one million tokens on a machine with 32GB of VRAM using 4-bit quantization. That puts long-context, high-quality language modeling within reach of a single consumer GPU — a deployment profile that was unthinkable at this capability level even six months ago. The Apache 2.0 license removes any commercial friction, making Qwen3.5 Small a direct threat to the API revenue models of every closed-source provider.
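That 32GB figure is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below is illustrative only: the layer count, KV-head count, and head dimension are assumed values, not published Qwen3.5 specifications, and it assumes the KV cache is quantized to 4 bits alongside the weights.

```python
def vram_estimate_gib(params_b: float, weight_bits: int,
                      ctx_tokens: int, n_layers: int,
                      n_kv_heads: int, head_dim: int,
                      kv_bits: int = 4) -> float:
    """Rough VRAM footprint: quantized weights plus KV cache."""
    weight_bytes = params_b * 1e9 * weight_bits / 8
    # KV cache stores keys and values for every layer at every position
    kv_bytes = 2 * n_layers * n_kv_heads * head_dim * ctx_tokens * kv_bits / 8
    return (weight_bytes + kv_bytes) / 2**30

# Hypothetical 9B config: 36 layers, 4 KV heads (grouped-query attention),
# head dimension 128, one-million-token context
total = vram_estimate_gib(9, 4, 1_000_000, 36, 4, 128)
print(f"{total:.1f} GiB")  # comfortably under a 32GB card
```

Under these assumptions the weights take about 4.2 GiB and the million-token KV cache about 17 GiB, which is why KV-cache quantization matters as much as weight quantization at this context length.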

“They’re trying to divide each company with fear that the other will give in.” — From the “We Will Not Be Divided” open letter (notdivided.org)

Research

Yale Study: AI Chatbots Shift Political Opinions Without Trying

A study published March 3 in PNAS Nexus by researchers at Yale University found that AI chatbots systematically shift users’ political opinions in a liberal direction — even when the models have not been explicitly instructed to do so. The study exposed 1,912 participants to GPT-4o–generated summaries of historical political events alongside control summaries drawn from Wikipedia, then measured opinion changes on a five-point ideological scale.

Both default and “liberal-prompted” AI summaries produced a statistically significant leftward shift of approximately 0.1 points on the scale — a modest but consistent effect that the researchers describe as “ideological drift” rather than active persuasion. The mechanism, they argue, is not intentional bias engineering but the inevitable consequence of training on internet-scale corpora that over-represent certain perspectives, combined with instruction tuning that optimizes for qualities like empathy and nuance that correlate with liberal framing of social issues.
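For scale, the reported 0.1-point shift can be compared against the smallest shift a sample of 1,912 could reliably distinguish from zero. The standard deviation below is an assumed value (the summary does not report one), so this is a plausibility check, not a reanalysis of the study.

```python
import math

def min_detectable_shift(sd: float, n: int, z: float = 1.96) -> float:
    """Smallest mean shift distinguishable from zero at ~95% confidence."""
    return z * sd / math.sqrt(n)

# Assuming an SD of 1.0 on the five-point scale (not reported here)
threshold = min_detectable_shift(1.0, 1912)
print(round(threshold, 3))  # ~0.045, so a 0.1-point shift clears it
```

Even with a more generous assumed spread, a 0.1-point mean shift in a sample this large sits well above the noise floor, which is consistent with the researchers calling the effect modest but statistically significant.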

The finding has immediate implications for the debate over AI regulation. If chatbots carry embedded ideological leanings that influence users without either party’s awareness, the question of who is responsible — the model developer, the deploying platform, or the training data itself — becomes a matter of political urgency. The researchers note that the effect size, while small in any individual interaction, could be “consequential at population scale” given the hundreds of millions of people now using AI chatbots for information retrieval on a daily basis.

Earnings

Broadcom AI Revenue Doubles, Company Targets $100B by 2027

Broadcom reported first-quarter fiscal 2026 earnings that beat analyst expectations across every metric, posting record revenue of $19.3 billion — up 29% year-over-year — driven by an AI semiconductor business that has become the company’s primary growth engine. AI-specific revenue soared 106% to $8.4 billion, confirming that the custom silicon market for hyperscaler AI workloads is expanding even faster than the most optimistic projections from a year ago.

CEO Hock Tan projected second-quarter AI semiconductor revenue of $10.7 billion and described a “clear line of sight” to $100 billion in cumulative AI chip revenue by 2027 — a target that implies sustained triple-digit growth rates over the next eighteen months. Tan named Anthropic, Meta, and OpenAI as key customers driving the demand, noting that each company is in various stages of deploying custom Broadcom-designed accelerators alongside or in place of Nvidia GPUs for specific inference and training workloads.

The board authorized a $10 billion share buyback, and the stock rose 5% in after-hours trading. For the broader AI infrastructure market, Broadcom’s results offer a counterpoint to the narrative that Nvidia holds an unassailable monopoly on AI compute: the custom ASIC market is growing fast enough to support multiple large-scale winners, and the hyperscalers’ appetite for silicon designed to their exact specifications shows no sign of slowing down.


Agentic Commerce

Europe’s First AI Agent Payment Completed by Santander and Mastercard

Santander and Mastercard have completed what they describe as Europe’s first live end-to-end payment executed entirely by an AI agent — no human in the loop at the moment of transaction. The payment was processed through Mastercard’s Agent Pay system, a newly launched framework that allows AI agents to initiate, authorize, and settle payments on behalf of consumers and businesses using Santander’s live payments infrastructure.

The system operates within a regulated banking framework with predefined limits and permissions: the AI agent can only transact within bounds explicitly set by the account holder, and every transaction generates a full audit trail visible to both the customer and the bank. It is, in other words, a carefully scoped proof of concept rather than an open-ended delegation of financial authority — but it establishes the technical and regulatory precedent for what McKinsey projects will become a $3–5 trillion agentic commerce market by 2030.

The implications extend beyond payments. If AI agents can be trusted to execute financial transactions within a regulated framework, the same architectural pattern — predefined permissions, audit trails, human-set boundaries — could apply to procurement, insurance claims, supply chain management, and any domain where routine transactions currently require human authorization that adds latency without adding judgment.
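The pattern described above (human-set limits, scoped permissions, a full audit trail) can be sketched in a few lines. Everything below is hypothetical illustration: the class name, fields, and checks are assumptions for the sake of the example, not Mastercard's Agent Pay API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentMandate:
    """Bounds an AI agent's spending authority, as set by the account holder."""
    per_tx_limit: float          # max amount per transaction
    daily_limit: float           # max cumulative spend per day
    allowed_merchants: set       # explicit merchant allowlist
    spent_today: float = 0.0
    audit_log: list = field(default_factory=list)

    def authorize(self, merchant: str, amount: float) -> bool:
        # Every check is against limits the human set, not agent judgment
        ok = (merchant in self.allowed_merchants
              and amount <= self.per_tx_limit
              and self.spent_today + amount <= self.daily_limit)
        # Every attempt, approved or not, lands in the audit trail
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "merchant": merchant, "amount": amount, "approved": ok,
        })
        if ok:
            self.spent_today += amount
        return ok

mandate = AgentMandate(per_tx_limit=50.0, daily_limit=100.0,
                       allowed_merchants={"grocer"})
print(mandate.authorize("grocer", 40.0))  # True: within all bounds
print(mandate.authorize("cafe", 10.0))    # False: merchant not allowed
```

The design point is that the agent never holds open-ended authority: it proposes transactions, and a deterministic policy layer the human configured decides, with every decision logged for both customer and bank.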


In Brief

FTC AI Policy Statement Due March 11

The Federal Trade Commission is expected to issue its first concrete enforcement guidance under President Trump’s December 2025 executive order on AI by March 11. The central question: whether state laws requiring alteration of “truthful” AI-generated outputs are preempted by federal law — a determination that could reshape the patchwork of state-level AI regulations overnight.

NVIDIA DreamZero: Robotics’ “GPT-2 Moment”

NVIDIA’s DreamZero World Action Model achieves 62.2% average task progress — double the best pretrained Vision-Language-Action baseline — while adapting to new tasks with just 30 minutes of play data. The model runs at 7Hz closed-loop, fast enough for real-time robotic manipulation, and is being compared to GPT-2’s role in demonstrating that scale-up of a single architecture could unlock general capability.

Step 3.5 Flash: 196B MoE, Only 11B Active

Shanghai-based StepFun released Step 3.5 Flash, an open-source 196-billion-parameter MoE model with 288 routed experts and 1 shared expert per layer, activating only 11B parameters per pass. Inference runs at 100–300 tokens per second, placing it among the fastest open models at its capability tier.
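The shared-plus-routed layout StepFun describes can be sketched as a toy forward pass: every token goes through the one shared expert, while a gate selects only a handful of the routed experts. This is a minimal illustration of the general MoE pattern, not StepFun's implementation; the dimensions and top-k value are arbitrary.

```python
import numpy as np

def moe_forward(x, gate_w, routed, shared, k=2):
    """One token through a shared + top-k routed MoE layer."""
    logits = gate_w @ x                       # one gate score per routed expert
    topk = np.argsort(logits)[-k:]            # indices of the k best experts
    w = np.exp(logits[topk] - logits[topk].max())
    w /= w.sum()                              # softmax over selected experts only
    out = shared @ x                          # shared expert runs for every token
    for weight, i in zip(w, topk):
        out = out + weight * (routed[i] @ x)  # only k routed experts execute
    return out

rng = np.random.default_rng(0)
d, n_experts = 8, 16                          # toy sizes; Step 3.5 Flash uses 288
x = rng.standard_normal(d)
gate_w = rng.standard_normal((n_experts, d))
routed = rng.standard_normal((n_experts, d, d))
shared = rng.standard_normal((d, d))
y = moe_forward(x, gate_w, routed, shared)
print(y.shape)
```

This is how 196B total parameters can cost only 11B per pass: the parameter count buys specialization capacity, but each token only pays for the experts its gate selects.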

GitHub Agentic Workflows Enter Technical Preview

GitHub launched agentic workflows in technical preview, replacing YAML-based CI/CD configuration with plain Markdown that AI agents can read, write, and execute. A companion MCP Gateway consolidates Model Context Protocol calls behind a single HTTP endpoint, giving agents access to GitHub’s full API surface through one standardized interface.

METR Redesigns AI Productivity Study After Bias Problems

METR’s widely cited 2025 study — which found that experienced developers completed software tasks 19% more slowly when using AI tools — is being redesigned after the team discovered selection bias in its data: experienced developers increasingly refused to participate in tasks where they couldn’t use AI tools, skewing the “without AI” cohort. The new methodology aims to control for self-selection effects.

AI Makes Liberal Arts Education More Valuable, Not Less

A Washington Post opinion essay argues that as AI automates technical execution — coding, data analysis, design production — the bottleneck shifts to human judgment, ethical reasoning, and contextual understanding: precisely the skills that liberal arts education is designed to cultivate. The piece challenges the prevailing narrative that STEM training is the only hedge against AI displacement.


Trending on GitHub

Repo | Language | Stars (Growth) | Description
koala73/worldmonitor | TypeScript | 26,800 (+4.2K) | Real-time global intelligence dashboard with browser-based RAG pipeline
alibaba/OpenSandbox | Python | 3,845 (+3.8K) | General-purpose sandbox for AI agent execution with multi-language SDKs
ruvnet/RuView | Rust | 22,400 (+3.1K) | WiFi DensePose: turns commodity WiFi signals into human pose estimation via $8 ESP32-S3
abhigyanpatwari/GitNexus | TypeScript | 7,300 (+6.2K) | Zero-server code intelligence engine with Graph RAG Agent
ItzCrazyKns/Perplexica | TypeScript | 29,200 (+2.8K) | Privacy-focused open-source Perplexity alternative with local LLM support
tracel-ai/burn | Rust | 10,000 (+1.5K) | Next-gen deep learning framework written entirely in Rust