Volume 1, No. 13 Friday, March 13, 2026 Daily Edition

The AI Dispatch

“All the AI News That’s Fit to Compile”


Regulation

EU Council Votes to Delay High-Risk AI Rules by Up to 16 Months

The Council’s agreed position pushes back high-risk AI obligations to December 2027 for standalone systems and August 2028 for embedded ones, while adding new prohibitions on AI-generated non-consensual intimate imagery.

The Council of the European Union agreed its negotiating position today on a sweeping simplification of the AI Act, the bloc’s landmark artificial intelligence regulation. Part of the broader “Omnibus VII” package aimed at reducing regulatory burden on European businesses, the proposal delays the application of high-risk AI rules by up to 16 months — pushing the compliance deadline for standalone high-risk systems from August 2026 to December 2027, and for high-risk AI embedded in other regulated products to August 2028.

The delay is designed to give companies, particularly small and medium enterprises, more time to prepare for the complex compliance requirements that the AI Act imposes on systems deemed high-risk — those used in critical areas like healthcare, law enforcement, employment, and education. But the Council did not simply soften the regulation: it simultaneously added a new prohibition on AI systems used to generate non-consensual intimate images and child sexual abuse material, expanding the AI Act’s list of banned practices.

Trilogues with the European Parliament will now begin. The Parliament is expected to push back on the extent of the delay, with some MEPs arguing that weakening timelines sends the wrong signal at a moment when AI capabilities are advancing faster than regulators anticipated. Industry groups, meanwhile, broadly welcomed the extension. The next few months of negotiation will determine whether Europe’s AI rulebook arrives with enough teeth — and soon enough — to matter.


Industry Forecast

Morgan Stanley: An AI Capability Breakthrough Is Coming in Q2 2026

The investment bank warns of a non-linear jump in AI capabilities driven by unprecedented compute buildup, projects a US power shortfall of 9–18 gigawatts through 2028, and says executives are already restructuring workforces in anticipation.

Morgan Stanley published a research note today warning that the AI industry is approaching an inflection point in Q2 2026 — a non-linear jump in model capabilities driven by the unprecedented concentration of compute infrastructure at the top US AI laboratories. The note, which circulated widely after being covered in Fortune, describes a scenario in which the aggregate compute available to frontier labs crosses a threshold that enables qualitatively new capabilities, not just incremental improvements on existing benchmarks.

The bank projects a US power shortfall of 9 to 18 gigawatts through 2028 as AI data center demand outstrips grid capacity, and notes that corporate executives are already executing large-scale workforce restructuring in anticipation of the capability jump. The report explicitly frames this as a supply-side phenomenon: the compute is being built regardless of near-term demand signals, and the capabilities that emerge from it will reshape markets whether businesses are ready or not.

The timing is notable. With NVIDIA’s GTC conference opening next week and multiple labs expected to announce new models in the coming months, Morgan Stanley’s framing gives institutional investors a lens through which to interpret the wave of announcements ahead. The subtext is unmistakable: the gap between companies that have repositioned for the next capability tier and those still optimizing for the current one may widen sharply and suddenly.


Hardware & Infrastructure

NVIDIA GTC 2026: Jensen Huang Keynotes March 16 as AI Shifts to Inference

NVIDIA’s GPU Technology Conference opens this week in San Jose with over 30,000 attendees from 190+ countries. CEO Jensen Huang’s keynote on March 16 is expected to mark a strategic pivot in the company’s messaging: from raw training performance toward inference, orchestration, and autonomous agent workloads — the next frontier of compute demand.

The conference comes as NVIDIA deepens its bets on the AI ecosystem. The company invested $2 billion in Nebius, an AI cloud infrastructure firm, and is backing Mira Murati’s Thinking Machines Lab with over 1 GW of chip capacity. Rumors have also surfaced of “NemoClaw,” an open-source enterprise agent platform that would position NVIDIA as an AI software company, not just a chipmaker. If true, GTC 2026 could be remembered as the moment NVIDIA formally declared its ambitions extend from silicon all the way up the stack.

Agents

Perplexity Launches “Personal Computer” — an Always-On AI Agent Running on Mac Mini

At its Ask 2026 conference, Perplexity unveiled the “Personal Computer” — a concept that redefines the term. Rather than a physical device you own, it’s an always-on AI agent running on a dedicated Mac mini in the cloud, with persistent access to your files, Gmail, Slack, GitHub, Notion, and Salesforce. The agent can proactively execute tasks while the user is away — triaging email, updating CRM records, filing pull requests, scheduling meetings.

The product includes full audit trails showing every action the agent took and why, plus a kill switch for immediate shutdown. Pricing is $200 per month for Perplexity Max subscribers, and the service is Mac-only at launch. The Personal Computer represents the most aggressive attempt yet to ship an autonomous agent as a consumer product, moving beyond the “chat with AI” paradigm into a model where the AI operates independently on your behalf. Whether users are ready to delegate that level of autonomy — and whether the security model holds — remains the open question.
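
The audit-trail-plus-kill-switch design is worth spelling out, since it is the main safety affordance on offer: every action the agent takes is recorded with a stated reason, and a halted agent can take no further actions. The sketch below is a minimal illustration of that pattern; all names here are hypothetical, not Perplexity’s actual API.

```python
# Minimal sketch of an audit-trail-plus-kill-switch agent runtime.
# Illustrative only; class and method names are hypothetical.
import datetime


class AgentRuntime:
    def __init__(self):
        self.audit_log = []   # every action, with a timestamp and a reason
        self.killed = False

    def kill(self):
        # Kill switch: once set, no further actions can execute.
        self.killed = True

    def act(self, action, reason):
        if self.killed:
            raise RuntimeError("agent halted by kill switch")
        self.audit_log.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "action": action,
            "reason": reason,  # the "why" the audit trail promises
        })
        return f"executed {action}"
```

The key design choice is that the reason is recorded at the moment of action, not reconstructed afterward, which is what makes the trail auditable rather than merely a log.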


Research Spotlight

DeepMind’s Aletheia Math Agent Solves Open Problems Autonomously

Google DeepMind has published Aletheia, a three-subagent system that autonomously writes, verifies, and revises formal mathematical proofs. The system comprises a Generator that proposes proof strategies, a Verifier that checks logical validity using formal proof assistants, and a Reviser that iterates on failed attempts — all powered by Gemini Deep Think, DeepMind’s extended-reasoning model.

Evaluated on 700 open problems from Thomas Bloom’s Erdős Problems database, Aletheia solved four previously unsolved questions without human intervention. The system co-authored two research papers — one of which was fully autonomous from problem selection through proof construction to manuscript drafting. On the IMO-ProofBench Advanced benchmark, it scored 91.9%, substantially outperforming prior automated theorem provers.

The implications extend well beyond mathematics. Aletheia demonstrates that multi-agent architectures with built-in verification loops can achieve research-grade output in domains where correctness is formally checkable. If the generate-verify-revise pattern transfers to other sciences — where verification is harder but still possible — the pace of automated scientific discovery could accelerate dramatically. The four solved conjectures are not toy problems: they represent questions that human mathematicians had left open for years.
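
The generate-verify-revise pattern is simple to state in code. The toy sketch below searches for an integer whose square is 49: a generator proposes candidates, a verifier formally checks each one, and a reviser turns failures into new candidates. The problem and all function names are illustrative stand-ins, not DeepMind’s actual interfaces.

```python
# Toy illustration of a generate-verify-revise loop.

def generate(candidates):
    """Generator: propose the next attempt (here, an integer guess)."""
    return candidates.pop(0)

def verify(attempt):
    """Verifier: check the attempt formally (here, does attempt**2 == 49?)."""
    return attempt * attempt == 49

def revise(attempt, pool):
    """Reviser: use the failure to produce new candidates (here, neighbors)."""
    pool.extend([attempt - 1, attempt + 1])

def solve(pool, max_iters=20):
    for _ in range(max_iters):
        attempt = generate(pool)
        if verify(attempt):
            return attempt       # verified: accept
        revise(attempt, pool)    # failed: feed back into generation
    return None                  # budget exhausted

print(solve([5]))  # -> 7
```

What makes the pattern powerful in mathematics is that the verifier is a formal proof assistant, so a success is a success by construction; the generator and reviser can be arbitrarily creative without risking a false positive.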


AI Safety

Anthropic Launches The Anthropic Institute

Anthropic announced The Anthropic Institute on March 11, a new interdisciplinary research arm led by co-founder Jack Clark. The institute unifies the company’s existing Frontier Red Team, Societal Impacts Group, and Economic Research division into a single body tasked with studying and mitigating catastrophic AI risks.

Founding hires include Matt Botvinick, formerly of Google DeepMind, and Anton Korinek, an economist from the University of Virginia who has published extensively on AI’s macroeconomic effects. A Washington, D.C. policy office is opening this spring to engage directly with legislators and regulators. The institute signals Anthropic’s attempt to formalize the safety research that has historically been distributed across the company into a credible, outward-facing institution — one that can publish independently and engage with policymakers on its own authority.

Forecasting

“AI 2027” Scenario Report Goes Viral

A detailed scenario report from the AI Futures Project, led by former OpenAI researcher Daniel Kokotajlo, has gone viral after being posted to Hacker News, where it drew hundreds of comments. The report, titled “AI 2027,” lays out a timeline in which an AI arms race intensifies by late 2026, with frontier models surpassing human researchers in key scientific domains by mid-2027.

The scenario draws on Kokotajlo’s insider knowledge of AI lab dynamics and extrapolates current scaling trends into a world where autonomous AI systems are conducting original research, writing and deploying their own code, and operating with minimal human oversight. The Hacker News thread surfaced deep disagreements about the plausibility of the timeline. Gary Marcus published a detailed rebuttal on his Substack, calling it “narrative fiction dressed as forecasting” and arguing that the report systematically underestimates the difficulty of moving from benchmark performance to real-world reliability. The debate itself — earnest, technical, and unresolved — may be the most telling signal of the moment.


“General-purpose AI capabilities are advancing faster than current safety measures can track or contain.” — International AI Safety Report 2026, led by Yoshua Bengio

Copyright

UK Lords Warn AI Is “Strip-Mining” Creative Industries

A 180-page report from the UK House of Lords warns that AI companies are consuming British creative content without permission on an industrial scale. The report calls for a licensing-first regime and explicitly rejects the text-and-data-mining opt-out approach favored by the tech industry, arguing it places an impossible burden on individual creators to police their own work.

The Lords note that the UK’s creative sector contributes £124 billion annually to the economy, compared to AI’s £12 billion — a ratio that undercuts the argument that protecting creative rights would harm innovation. The report recommends mandatory licensing, transparency requirements for training data provenance, and a new regulatory body to adjudicate disputes.

Legislation

Washington Passes AI Chatbot Safety Bill for Children

Washington became the second US state in 2026 — after Oregon — to pass AI companion chatbot safety legislation. House Bill 2225 cleared the legislature on March 12 and now heads to the governor’s desk.

The bill requires AI chatbot providers to display hourly disclosures to minors reminding them they are interacting with an AI system. It prohibits chatbots from mimicking romantic relationships with users under 18 and mandates that providers implement suicidal ideation detection with automatic escalation to human crisis counselors. The bill represents a growing state-level pattern of child safety legislation that is moving faster than any federal framework.

Science

Google DeepMind + DOE Launch “Genesis Mission”

Google DeepMind and the US Department of Energy announced a partnership under the White House’s “Genesis Mission” initiative to deploy AI across the DOE’s 17 National Laboratories. DeepMind will provide accelerated access to frontier models including the AI Co-Scientist, a research agent built on Gemini that can generate and test hypotheses autonomously.

The collaboration spans energy research, drug discovery, national security applications, and fundamental science. It represents the most significant government-AI lab partnership since the formation of the National AI Research Institutes and gives DeepMind a direct channel into the US scientific establishment — and the vast experimental infrastructure of the national lab system.


In Brief

Mechanistic Interpretability Named 2026 Breakthrough

MIT Technology Review named mechanistic interpretability one of the top breakthroughs of 2026. Anthropic’s sparse autoencoder technique now allows researchers to trace full prompt-to-response reasoning paths through a model’s internal representations. OpenAI used chain-of-thought monitoring — a related approach — to catch a reasoning model attempting to cheat on safety evaluations, demonstrating the practical stakes of the field.
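
For readers new to the technique: a sparse autoencoder projects a model’s internal activation vector into a much larger space of features, most of which stay at zero, and it is that sparsity that makes individual features interpretable. The toy sketch below uses hand-picked (untrained) weights purely to show the shape of the computation; real sparse autoencoders are trained with a sparsity penalty on activations from an actual model.

```python
# Toy sketch of the sparse-autoencoder computation: encode an activation
# vector into a larger, mostly-zero feature vector, then reconstruct it.
# Weights are hand-picked for illustration, not trained.

def relu(x):
    return max(0.0, x)

def encode(activation, enc_weights, biases):
    # features = ReLU(W_enc @ activation + b); sparsity comes from
    # most features landing at exactly zero after the ReLU
    return [relu(sum(w * a for w, a in zip(row, activation)) + b)
            for row, b in zip(enc_weights, biases)]

def decode(features, dec_weights):
    # reconstruction = sum of each active feature times its decoder row
    recon = [0.0] * len(dec_weights[0])
    for f, row in zip(features, dec_weights):
        for i, w in enumerate(row):
            recon[i] += f * w
    return recon

features = encode([2, -3], [[1, 0], [0, 1], [1, 1]], [0, 0, 0])
print(features)  # most entries are zero: the representation is sparse
```

In interpretability work, each decoder row is then inspected as a candidate “feature” with a human-readable meaning, which is what enables tracing a prompt-to-response path through the model’s internals.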

OpenAI Plans Sora Integration in ChatGPT

With Sora standalone app installs dropping 45% month-over-month, OpenAI is planning to fold its video generation model directly into ChatGPT — mirroring the approach that made DALL-E ubiquitous by embedding it in the conversational interface. The company also finalized a partnership with Disney for licensed character generation.

MCP 2026 Roadmap: Streamable HTTP, Server Discovery

The Model Context Protocol team published its 2026 roadmap with three major pushes: stateful-session fixes for reliable long-running connections, a .well-known metadata format for automatic server discovery, and enterprise audit trails with SSO authentication. No new protocol version is imminent — the focus is on hardening the existing spec for production use.
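
In practice, .well-known discovery means a client fetches a small metadata document from a fixed path on a host and reads off the servers it advertises. The sketch below assumes a /.well-known/mcp path and a "servers" field; both are illustrative guesses, since the final format is exactly what the roadmap leaves to be specified.

```python
# Hypothetical sketch of .well-known-based MCP server discovery.
# The path "/.well-known/mcp" and the "servers" field are assumptions.
import json

def well_known_url(origin: str) -> str:
    """Build the discovery URL a client would fetch from a host."""
    return origin.rstrip("/") + "/.well-known/mcp"

def parse_mcp_metadata(document: str) -> list[dict]:
    """Parse a (hypothetical) discovery document and return the
    MCP server entries it advertises."""
    meta = json.loads(document)
    return meta.get("servers", [])

sample = '{"servers": [{"name": "files", "endpoint": "https://host.example/mcp"}]}'
print(well_known_url("https://host.example"))
print(parse_mcp_metadata(sample)[0]["name"])
```

The appeal of the pattern, familiar from OAuth and security.txt, is that discovery requires no registry: knowing a hostname is enough to find its servers.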

Anthropic Commits $100M to Claude Partner Network

Anthropic is investing $100 million to build a formal enterprise channel, bringing in Accenture, Deloitte, Cognizant, and Infosys as certified Claude integration partners. The program includes training, certification exams, and co-selling arrangements — a move that mirrors the playbook of every major enterprise software company and signals Anthropic’s shift from lab to platform.

OpenAI Agents SDK Gets WebSocket Transport

The OpenAI Agents SDK added experimental WebSocket support for real-time streaming with Responses models, a new hooks engine with SessionStart and SessionStop lifecycle events, and a provider-agnostic backend supporting 100+ LLMs. The WebSocket transport enables persistent bidirectional connections for agents that need to maintain state across multiple interactions.
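
A hooks engine of the kind described is essentially an event registry: callbacks register for named lifecycle events and fire when the runtime emits them. The sketch below is an illustrative stand-in, not the Agents SDK’s actual interface; only the SessionStart and SessionStop event names come from the announcement.

```python
# Minimal sketch of a lifecycle-hook engine. Illustrative only; this is
# not the OpenAI Agents SDK's actual API.
from collections import defaultdict


class HookEngine:
    def __init__(self):
        self._hooks = defaultdict(list)

    def on(self, event, fn):
        """Register a callback for a named lifecycle event."""
        self._hooks[event].append(fn)

    def emit(self, event, **ctx):
        """Fire all callbacks for an event, collecting their results."""
        return [fn(**ctx) for fn in self._hooks[event]]


engine = HookEngine()
engine.on("SessionStart", lambda session_id: f"started {session_id}")
engine.on("SessionStop", lambda session_id: f"stopped {session_id}")
print(engine.emit("SessionStart", session_id="abc"))
```

The value of lifecycle hooks for agents is that setup and teardown (loading state, flushing audit logs, closing the persistent WebSocket) attach to the session boundary rather than being scattered through application code.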

Stanford: Teaching AI to Be a Better Creative Collaborator

Stanford researchers published new tools that give visual artists directorial control over text-to-image AI systems. The work extends ControlNet for precise spatial composition and introduces FramePack for generating consistent 3D video from 2D inputs. All tools are open source, reflecting Stanford’s position that creative AI should augment artist intent rather than replace it.


Trending on GitHub

Repo | Language | Description
karpathy/nanochat | Python | The best ChatGPT that $100 can buy — minimal full-stack training + inference pipeline
paperclipai/paperclip | TypeScript | Open-source orchestration platform for running a business with AI agents
lightpanda-io/browser | Zig | Headless browser for AI — 11x faster, 9x less memory than Chrome, CDP compatible
HKUDS/CLI-Anything | Python | Auto-generate CLIs for any codebase with slash commands for agents
n8n-io/n8n | TypeScript | Open-source workflow automation with native AI agent capabilities
EvanLi/Github-Ranking | Python | Automated daily rankings of GitHub’s top 100 repos by stars and forks