Volume 1, No. 18 Wednesday, March 18, 2026 Daily Edition

The AI Dispatch

“All the AI News That’s Fit to Compile”


Breaking

Microsoft Threatens Legal Action Over $50B Amazon-OpenAI Cloud Deal

Redmond considers breach-of-contract lawsuit after OpenAI signs exclusive distribution agreement with AWS for its Frontier platform, upending the Azure partnership that has anchored Microsoft’s AI strategy.

Microsoft is actively considering a breach-of-contract lawsuit against both OpenAI and Amazon after the AI company signed a $50 billion exclusive cloud distribution deal with Amazon Web Services for its new Frontier inference platform, according to multiple people familiar with the deliberations who spoke on condition of anonymity. The agreement, announced earlier this month, makes AWS the sole cloud provider for Frontier — a product that directly competes with Azure’s own AI hosting infrastructure and represents OpenAI’s most ambitious foray yet into enterprise cloud services. For Microsoft, which has invested more than $13 billion in OpenAI and built its entire Copilot product line on preferential access to the company’s models, the deal strikes at the heart of a relationship that was supposed to be exclusive.

Senior Microsoft executives have told colleagues that the AWS arrangement violates “the spirit if not the letter” of the companies’ partnership agreement, which grants Microsoft exclusive commercial rights to OpenAI’s technology through Azure. OpenAI’s legal position appears to rest on a narrow interpretation: that Frontier is a separate product, not a repackaging of the models covered by the Microsoft agreement, and that AWS is providing infrastructure rather than distributing OpenAI’s core API. Microsoft’s lawyers are not persuaded. The company has retained outside counsel and begun preparing litigation materials, though no suit has been filed and back-channel negotiations continue.

The confrontation has broader implications for the AI industry’s emerging power structure. Microsoft’s investment in OpenAI was predicated on the assumption that the partnership would give Azure an insurmountable distribution advantage in enterprise AI. If OpenAI can route its most lucrative products through competing clouds — and if the courts uphold that maneuver — the strategic value of Microsoft’s billions shrinks dramatically. For Amazon, meanwhile, the deal represents a coup: after years of playing catch-up to Azure in the AI cloud wars, AWS has secured exclusive access to the world’s most commercially successful AI platform. The outcome of this dispute will likely reshape how Big Tech structures future AI partnerships and investment agreements.

Market Shift

Anthropic Surges to 73% of First-Time Enterprise AI Spend, Hits $19B ARR

Claude captures nearly three-quarters of new enterprise contracts as OpenAI’s share drops to 27%; Claude Code alone exceeds $2.5 billion in annualized revenue.

Anthropic is now capturing 73 percent of first-time enterprise AI spending, up from 50 percent in January, according to channel data compiled by investment bank Evercore ISI from surveys of enterprise software buyers and reseller partners. The figures represent a dramatic acceleration in Anthropic’s enterprise momentum: OpenAI’s share of new enterprise contracts has fallen from roughly 50 percent to just 27 percent over the same period, a swing that reflects not just Claude’s technical capabilities but a fundamental shift in how large organizations evaluate AI vendors. Enterprises that once defaulted to OpenAI as the safe incumbent choice are increasingly choosing Anthropic on the strength of Claude’s coding performance, its constitutional AI safety framework, and the company’s willingness to offer enterprise-grade support and deployment flexibility.

The financial picture is equally striking. Anthropic has reached approximately $19 billion in annualized recurring revenue, a figure that places it among the fastest-growing enterprise software companies in history. Claude Code — the company’s developer-facing coding assistant, which launched in late 2025 — is independently generating more than $2.5 billion in ARR, making it one of the most successful developer tools ever released. More than 500 customers are now spending in excess of $1 million per year on Anthropic’s products, a cohort that includes major financial institutions, technology companies, and consulting firms that were previously OpenAI exclusives.

The shift carries particular weight given the context in which it is occurring. Anthropic is gaining market share while simultaneously fighting a Pentagon blacklisting, navigating the regulatory uncertainty of the EU AI Act, and competing against an OpenAI that is spending aggressively on enterprise sales and preparing for a trillion-dollar IPO. That enterprises are choosing Claude in spite of these headwinds — or perhaps because the same principles that led to the Pentagon dispute also signal reliability and trustworthiness to corporate buyers — suggests that Anthropic has built something more durable than a product advantage. It has built a brand that enterprise procurement teams trust with their most sensitive workloads.

National Security

The Pentagon Fights

Trump DOJ Defends Anthropic Pentagon Blacklist in Court

The Trump administration’s Department of Justice filed a lengthy brief on Wednesday defending the Pentagon’s designation of Anthropic as a “supply-chain risk” to national security, arguing that the executive branch has broad discretion to determine which companies may participate in classified defense procurement. The brief, filed in the U.S. District Court for the District of Columbia, contends that Anthropic’s refusal to remove safety guardrails from its Claude models — specifically, restrictions that prevent the models from being used in autonomous weapons targeting and mass surveillance — constitutes a “material limitation on capability” that renders the company unable to fulfill the requirements of its $200 million defense contract.

The filing arrived on the same day that 150 retired federal judges submitted an extraordinary amicus brief supporting Anthropic’s position. The judges, spanning appointments from six different presidential administrations, argued that the government’s interpretation would establish a dangerous precedent: that any company selling technology to the federal government could be compelled to remove safety features as a condition of doing business, effectively giving the executive branch unilateral authority to dictate product design. A hearing on Anthropic’s motion for a preliminary injunction is scheduled for March 24. The case is being closely watched across the technology industry, where executives fear that a ruling in the government’s favor could force AI companies to choose between maintaining safety commitments and accessing the fastest-growing segment of the federal market.

NVIDIA Gets Green Light to Resume H200 Sales to China

NVIDIA has received the export licenses necessary to resume sales of its H200 accelerator chips to Chinese customers, ending months of uncertainty that had forced the company to halt shipments of its most advanced data center GPU to the world’s second-largest AI market. The approval comes with a significant condition: NVIDIA will share 25 percent of the revenue generated from Chinese H200 sales with the U.S. government, a novel arrangement that effectively converts export controls from a prohibition into a tax. Manufacturing for the Chinese market is restarting immediately at NVIDIA’s Taiwan-based contract fabs.

The deal represents a pragmatic compromise between competing pressures. NVIDIA CEO Jensen Huang has lobbied aggressively against the China chip bans, arguing publicly and in private meetings with administration officials that restricting sales does not prevent China from developing AI capability — as DeepSeek’s frontier-class V4 model, trained on domestic Huawei hardware, has amply demonstrated — but does cost American companies billions in revenue and accelerates China’s development of indigenous chip alternatives. The revenue-sharing model gives the administration a way to claim it is extracting national security value from the trade while allowing NVIDIA to recapture a market worth an estimated $8–12 billion annually. For China’s AI ecosystem, the return of H200 access removes the most pressing hardware bottleneck facing labs that have been rationing older-generation NVIDIA cards and experimenting with less mature domestic alternatives.
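The revenue-share arrangement is simple arithmetic. As a back-of-envelope check in Python — the $8–12 billion market estimate is the article's own, and the helper function is purely illustrative:

```python
SHARE = 0.25  # 25% of Chinese H200 revenue goes to the U.S. government

def government_take(revenue_billion: float) -> float:
    """U.S. government's cut, in billions of USD, of annual Chinese H200 revenue."""
    return revenue_billion * SHARE

# At the article's $8-12B annual market estimate:
low, high = government_take(8.0), government_take(12.0)
print(f"${low:.1f}B-${high:.1f}B per year")  # prints $2.0B-$3.0B per year
```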

“We will sue them if they breach it.”

Senior Microsoft Executive, on OpenAI’s Amazon Deal — Sherwood News

Developer Tools

Platform Wars

Slack Launches Native MCP Server with Real-Time Search API

Slack has announced the general availability of its native Model Context Protocol server, making the workplace communication platform one of the largest enterprise applications to ship a first-party MCP integration. The server exposes Slack’s full functionality — channels, messages, threads, user profiles, file metadata, and workflows — as structured tool calls that any MCP-compatible AI agent can invoke, eliminating the need for the custom API wrappers and webhook plumbing that enterprises have been building to connect AI assistants to their Slack workspaces. Alongside the MCP server, Slack is launching a Real-Time Search API that provides sub-second full-text search across an organization’s entire message history.

The impact has been immediate and measurable. Slack reports a 25-fold increase in search queries and tool calls during the beta period, suggesting that AI agents are interacting with Slack data at a volume and frequency that dwarfs traditional human search behavior. The pattern makes intuitive sense: an agentic workflow that researches a customer issue might search Slack for relevant engineering discussions, pull context from product channels, and cross-reference support threads — executing dozens of searches in the time it takes a human to type one query. For the MCP ecosystem, Slack’s adoption provides another proof point that the protocol is becoming the default integration layer between enterprise software and AI agents, following Google Cloud’s auto-provisioning announcement earlier this week.
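On the wire, MCP tool invocations are JSON-RPC 2.0 messages with a `tools/call` method. A minimal sketch of the request an agent might send to a workspace's MCP server is below; the tool name `search_messages` and its argument schema are hypothetical, since the article does not describe Slack's actual tool definitions:

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request (MCP frames tool calls as JSON-RPC 2.0)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# A hypothetical search an agent researching a customer issue might issue:
request = mcp_tool_call(1, "search_messages", {
    "query": "checkout latency regression",
    "channel": "#eng-payments",
    "limit": 20,
})
print(request)
```

An agentic workflow would emit dozens of such requests per task, which is consistent with the 25-fold query growth Slack reports.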

OpenAI Codex Security Enters Research Preview After Scanning 1.2M Commits

OpenAI has launched Codex Security in research preview, a specialized application of its Codex coding agent that scans open-source repositories for security vulnerabilities. The tool has already processed 1.2 million commits across critical open-source infrastructure and identified 10,561 high-severity issues, including previously unknown vulnerabilities in OpenSSH, GnuTLS, and Chromium — three of the most widely deployed pieces of software on the internet. Unlike traditional static analysis tools that flag potential issues based on pattern matching, Codex Security uses the same reasoning capabilities that power Codex’s coding assistant to understand the semantic context of code and identify vulnerabilities that emerge from the interaction of multiple components.

In a notable move, OpenAI is offering free ChatGPT Pro subscriptions to maintainers of open-source projects that participate in the program, effectively subsidizing the security infrastructure of the software that underpins the modern internet. The initiative arrives at a moment when the security of open-source supply chains has become a top-tier policy concern, following a string of high-profile incidents including the XZ Utils backdoor that was narrowly caught in 2024. For OpenAI, Codex Security serves a dual purpose: it demonstrates the practical value of AI-powered code analysis in a domain where the stakes are unambiguously high, and it builds goodwill with the open-source community at a time when the company’s relationship with developers remains fraught over questions of training data provenance.

Robotics

ABB & NVIDIA Close the Sim-to-Real Gap: 99% Correlation Between Digital Twins and Physical Robots

ABB Robotics and NVIDIA have announced a breakthrough in sim-to-real transfer for industrial robots, demonstrating 99 percent correlation between the behavior of simulated robots running in NVIDIA Omniverse and their physical counterparts on factory floors. The achievement is built on RobotStudio HyperReality, a new version of ABB’s robot programming platform that integrates Omniverse’s physics simulation engine to create digital twins so accurate that programs developed and tested entirely in simulation can be deployed to physical robots with no manual tuning or adjustment — a capability that has been the holy grail of industrial robotics for decades.

The practical implications are enormous. Today, deploying a new robot program in a manufacturing environment typically requires days of on-site calibration, during which the production line is idle. If simulation fidelity reaches the point where that calibration step can be eliminated, factories can reprogram their robots remotely and continuously, adapting production to changing demand without ever stopping the line. ABB is already piloting the technology with Foxconn, the world’s largest electronics contract manufacturer, where it is being used to train robots for precision assembly tasks that previously required dedicated human technicians.

For the broader robotics industry, the 99-percent correlation figure represents a threshold that could unlock exponential scaling. When simulation is good enough to replace real-world testing, the cost of developing new robot behaviors drops by orders of magnitude — and the pace of iteration shifts from weeks of physical experimentation to hours of GPU computation. NVIDIA’s Isaac Sim platform already supports millions of parallel simulations; with ABB’s validation that those simulations now match reality, the bottleneck in industrial robot deployment moves from engineering to imagination.

Policy

Rules of Engagement

EU AI Act Omnibus Amendments Head to Committee Vote

The European Parliament’s Internal Market Committee has reached a preliminary political agreement on the Omnibus amendments to the EU AI Act, a package that would push the compliance deadlines for high-risk AI systems to 2027 for standalone applications and 2028 for systems embedded in regulated products such as medical devices and automotive safety systems. The delay represents a significant concession to industry lobbying, which has argued that the original timelines were unworkable for companies that need to redesign complex AI-integrated products to meet the Act’s requirements for risk assessment, human oversight, and technical documentation.

Beyond the timeline shifts, the Omnibus package introduces a new prohibition on AI-generated non-consensual explicit imagery — a category that the original Act addressed only obliquely through its general transparency requirements. The ban applies regardless of whether the imagery depicts a real identifiable person, closing a loophole that had allowed deepfake pornography to proliferate in jurisdictions where existing obscenity laws did not clearly cover synthetic content. The committee vote is expected within two weeks, after which the package moves to trilogue negotiations with the Council. For companies deploying AI in Europe, the extended deadlines offer breathing room but the direction of travel remains unmistakable: comprehensive regulation is coming, and the only question is how fast.

McGill Study: AI Chatbots Credit Journalism Sources Only 18% of the Time

Researchers at McGill University have published what may be the most rigorous study to date on how AI chatbots use journalistic content, testing ChatGPT, Gemini, Claude, and Grok against 2,267 Canadian news stories and finding that the systems provided attribution to the original reporting source only 18 percent of the time. When the chatbots did cite sources, they overwhelmingly favored wire services and aggregators over the publications that actually broke the stories — a pattern that, if it persists at scale, threatens to sever the economic link between original reporting and the audience that consumes it.

The study has already prompted a political response. Canadian Culture Minister Pascale St-Onge said Wednesday that the findings require a “serious conversation” about the relationship between AI systems and news media, suggesting that regulatory intervention may be necessary if voluntary measures fail. Canada’s Online News Act, which requires platforms to compensate publishers for linking to their content, does not currently cover AI chatbots — a gap that the McGill research makes painfully visible. The study is likely to fuel similar regulatory discussions in the European Union, where the AI Act’s transparency requirements could be interpreted to mandate source attribution, and in Australia, where the News Media Bargaining Code is already under review.

Dispatches

In Brief

AI Bubble Concerns Rise as Bloomberg Questions $2T Infrastructure Spend

A Bloomberg analysis questions whether the more than $2 trillion committed to AI infrastructure globally can generate returns sufficient to justify the investment, drawing parallels to the fiber-optic overbuild of the late 1990s. The piece notes that while AI revenues are growing rapidly, they remain a fraction of the capital being deployed. Bloomberg

NSF AI Education Act: Bipartisan Bill Aims to Train 1M Workers by 2030

Senators Cantwell and Moran have introduced bipartisan legislation directing the National Science Foundation to establish AI education and workforce training programs with the goal of preparing one million American workers for AI-augmented roles by 2030. The bill authorizes $2.5 billion over five years. Senate Commerce Committee

Cambridge Discovers Light-Powered Drug Synthesis via “Anti-Friedel-Crafts” Reaction


University of Cambridge chemists have developed a photochemical “anti-Friedel-Crafts” reaction that uses LED lamps instead of toxic metal catalysts to synthesize drug compounds. The work, published in Nature Synthesis, reverses a century-old selectivity rule, enabling cleaner routes to pharmaceutical intermediates. ScienceDaily

Google AI Co-Scientist Validates Leukemia Drug Candidates in Wet-Lab Experiments

Google Research reports that drug candidates proposed by its AI Co-Scientist system have been validated in wet-lab experiments, demonstrating inhibition of acute myeloid leukemia tumor viability. The results mark one of the first public confirmations that AI-generated drug hypotheses can survive empirical testing. Google Research

Zhipu AI Releases GLM-5 Under MIT License at $1/$3.20 Per Million Tokens

Chinese AI lab Zhipu AI has released GLM-5, a frontier-class model achieving competitive performance on major benchmarks, under the permissive MIT license at $1 per million input tokens and $3.20 per million output tokens. The release continues a pattern of Chinese labs undercutting Western pricing. LLM Stats
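At per-million-token pricing, per-request cost is simple arithmetic. A small helper using the quoted GLM-5 rates — the function and the example token counts are illustrative:

```python
GLM5_INPUT_PER_M = 1.00   # USD per million input tokens (as quoted)
GLM5_OUTPUT_PER_M = 3.20  # USD per million output tokens (as quoted)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single call at GLM-5's published per-million-token rates."""
    return (input_tokens * GLM5_INPUT_PER_M
            + output_tokens * GLM5_OUTPUT_PER_M) / 1_000_000

# e.g. a 10k-token prompt with a 2k-token completion:
print(f"${request_cost(10_000, 2_000):.4f}")  # prints $0.0164
```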

Oregon Passes Chatbot Companion Disclosure Bill with Child Safeguards

Oregon’s SB 1546 requires AI chatbots to disclose that they are not human when interacting with users, with additional safeguards for minors including mandatory age verification and restrictions on emotionally manipulative design patterns. The bill now heads to the governor’s desk. Troutman Privacy

Open Source

GitHub Trending

The most-starred repositories across GitHub this week

Repo | Language | Stars / Growth | Description
bytedance/deer-flow | Python | ~28,600 (+4,339/wk) | SuperAgent harness v2.0 for multi-hour research/coding tasks
alibaba/OpenSandbox | Python/Go | ~12,000 (+12,011/mo) | AI agent sandbox platform with Docker/K8s runtimes
alibaba/zvec | C++/Rust | ~4,500 (+1,400/day) | “SQLite of vector databases” — in-process vector DB for on-device RAG
THU-MAIC/OpenMAIC | TS/Python | ~4,500 | Multi-agent interactive classroom from Tsinghua University
NVIDIA/NemoClaw | TS/Python | ~4,200 | Enterprise security wrapper for OpenClaw agents
aiming-lab/AutoResearchClaw | Python | ~4,100 | 23-stage autonomous research pipeline — idea to conference paper
RightNow-AI/picolm | C | Rising | 2,500-line LLM inference on a $10 Raspberry Pi Zero 2W