Volume 1, No. 2 Sunday, March 2, 2026 Daily Edition

The AI Dispatch

“All the AI News That’s Fit to Compile”


2026 Midterms

AI Industry’s $100 Million Shadow Campaign Floods Midterm Elections — Without Mentioning AI

Competing super PACs backed by Andreessen Horowitz, Greg Brockman, and others spend heavily on immigration and healthcare ads while avoiding any mention of artificial intelligence.

The artificial intelligence industry has quietly become one of the most powerful forces in the 2026 midterm elections, with competing super PACs pouring more than $100 million into congressional races across the country — yet the television ads, digital campaigns, and mailers funded by that money almost never mention artificial intelligence at all. Instead, the industry’s political spending has been channeled into anodyne advertisements about immigration, healthcare affordability, and opposition to or support for the Trump administration, a deliberate strategy that campaign finance experts say is designed to elect AI-friendly lawmakers without triggering the public backlash that direct advocacy for the technology might provoke.

At the center of the spending spree is “Leading the Future,” a super PAC that has amassed roughly $100 million from just two donors: the venture capital firm Andreessen Horowitz, which contributed $50 million, and Greg Brockman, the former OpenAI president and co-founder, who matched that sum from his personal fortune. The PAC has targeted more than two dozen competitive House and Senate races, flooding airwaves in swing districts with ads that focus almost exclusively on pocketbook issues and border security. A review of the PAC’s advertising disclosures by NBC News found that not a single one of its more than 150 television spots contained the words “artificial intelligence,” “AI,” or “technology.”

On the opposite side of the ledger, a rival network of donors organized under the banner “Public First” has pledged $50 million to support candidates who favor stronger AI regulation, including mandatory safety testing, algorithmic transparency requirements, and federal licensing for frontier model developers. The group’s backers include several prominent AI safety researchers and at least two former members of Congress. Notably, the two blocs have converged on the same strategy of avoiding direct mention of AI in their voter-facing materials, focusing instead on healthcare and economic messaging that polls more favorably with undecided voters.

The rivalry has produced its most dramatic showdown in New York’s 12th Congressional District, where super PACs linked to both OpenAI and Anthropic are spending heavily on opposing candidates. Leading the Future has backed the incumbent, who has called for a regulatory “light touch” on AI development, while Public First is supporting a challenger who has made AI oversight a centerpiece of her platform — though even her campaign ads lead with healthcare and housing rather than technology policy. Campaign finance watchdogs have warned that the scale and opacity of the AI industry’s political spending risk creating a new category of “dark influence” in American elections, with voters unable to discern the true interests behind the advertisements that fill their screens.

Infrastructure

Bipartisan Grassroots Revolt Against AI Data Centers Spreads Across America

From Michigan moratoriums to New York construction freezes, DeSantis and Sanders find common cause against AI’s insatiable energy demands.

A grassroots movement against the construction of artificial intelligence data centers has erupted across the United States, uniting political figures from opposite ends of the ideological spectrum in a shared campaign to halt what they describe as an unchecked industrial land grab. At least 19 municipalities in Michigan have passed moratoriums on new data center construction since January, according to a TIME investigation published this week, while New York State enacted a three-year freeze on all new hyperscale computing facilities pending a comprehensive environmental review. Similar measures are under consideration in Virginia, Texas, and Georgia, states that host the largest existing clusters of AI infrastructure.

The opposition cuts across traditional partisan lines in ways that have caught the technology industry off guard. Governor Ron DeSantis of Florida has cited property rights and water table depletion in opposing a proposed Meta data center near the Everglades, while Senator Bernie Sanders of Vermont has introduced federal legislation that would require AI companies to fund renewable energy capacity equal to 150 percent of any new facility’s projected consumption before construction can begin. At the heart of the backlash is simple arithmetic: a single large-scale AI data center consumes as much electricity as 100,000 homes, and the dozens of facilities currently planned or under construction across the country would collectively require the equivalent output of several new nuclear power plants.
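The arithmetic behind the Sanders proposal is easy to make concrete. In the sketch below, the facility size and the per-home power figure are illustrative assumptions; only the 150 percent offset ratio and the 100,000-homes comparison come from the reporting above.

```python
# Rough illustration of the 150% renewable-offset rule described above.
# The 120 MW facility and the per-home draw are hypothetical assumptions;
# the 1.5x ratio and the 100,000-home comparison come from the article.

AVG_US_HOME_KW = 1.2  # approximate average continuous draw per U.S. home

def required_renewable_mw(projected_facility_mw: float, ratio: float = 1.5) -> float:
    """Renewable capacity a developer would need to fund before building."""
    return projected_facility_mw * ratio

facility_mw = 120.0  # hypothetical large AI data center
homes_equivalent = facility_mw * 1000 / AVG_US_HOME_KW
print(f"Comparable to roughly {homes_equivalent:,.0f} homes of continuous demand")
print(f"Renewable buildout required under a 150% rule: {required_renewable_mw(facility_mw):.0f} MW")
```

At these assumed numbers, a 120 MW facility maps onto roughly 100,000 homes and would trigger a 180 MW renewable buildout before construction could begin.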

Industry leaders have pushed back against the moratoriums with a mixture of economic arguments and philosophical appeals. Sam Altman, the chief executive of OpenAI, told a Senate subcommittee last week that “it takes a lot of energy to train a human, too — we just don’t think about it that way,” a remark that drew sharp criticism from environmental groups and rural communities already grappling with rising utility costs. Technology companies have offered tax revenue projections and job creation estimates, but opponents have noted that modern data centers employ remarkably few workers relative to their physical footprint and energy consumption. The standoff represents a significant new obstacle for the AI industry, which has predicated its growth forecasts on the rapid buildout of computing infrastructure that now faces organized, bipartisan resistance at the local and state level.

Open Weights

MiniMax M2.5 Rivals Claude Opus at a Fraction of the Cost

Chinese AI startup MiniMax has released M2.5, an open-weights model distributed under a modified MIT license that posts competitive results against leading proprietary systems at dramatically lower cost. The model achieved 80.2 percent on SWE-Bench Verified, a widely used benchmark for real-world software engineering capability, placing it within striking distance of the top-performing frontier models. MiniMax is pricing API access at $0.30 per million input tokens, a fraction of the rates charged by major American labs for their most capable systems.

The company says M2.5 was trained across more than 200,000 real-world coding environments, an approach it credits for the model’s strong practical performance relative to its benchmark scores. The open-weights release allows developers and researchers to inspect, modify, and deploy the model on their own infrastructure, an option unavailable with proprietary competitors. Industry analysts noted that M2.5’s combination of capability, transparency, and aggressive pricing represents a growing challenge to the closed-model business strategies pursued by several Western AI companies, particularly as enterprise customers increasingly weigh cost against marginal capability differences.


Multimodal

DeepSeek V4 Imminent: Open-Source Model to Challenge Sora and Veo

DeepSeek is preparing to release its fourth-generation model as early as next week, marking the Chinese laboratory’s first fully multimodal system capable of natively generating text, images, audio, and video from a unified architecture. Unlike its predecessors, which focused primarily on language and code, V4 is designed to compete directly with OpenAI’s Sora and Google’s Veo in the rapidly expanding market for generative media — and, in keeping with DeepSeek’s established practice, the model will be released as open source.

The announcement has drawn particular attention because of its implications for the global semiconductor landscape. According to sources briefed on the project, portions of V4 were trained on chips manufactured by Huawei and Cambricon, Chinese semiconductor firms that have been developing alternatives to Nvidia’s graphics processing units under the pressure of American export controls. Shares in both companies rose as much as 8 percent on the Shanghai and Shenzhen exchanges following the news, as investors bet that DeepSeek’s ability to produce frontier-quality models on domestic hardware would accelerate China’s push toward semiconductor self-sufficiency.

App Stores

Claude Surges to No. 1 on U.S. App Store After Pentagon Standoff

Anthropic’s Claude chatbot has rocketed to the top of Apple’s U.S. App Store productivity rankings, surpassing both ChatGPT and Google’s Gemini in the days following the company’s widely publicized refusal to comply with Pentagon demands to remove safety restrictions from its AI model. The app, which had been languishing outside the top 100 as recently as mid-February, climbed steadily throughout the week before reaching the No. 1 position on Saturday morning.

The surge reflects what appears to be a significant consumer response to Anthropic’s stance. The company reported that daily signups broke all previous records during the week, with free-tier registrations up more than 60 percent compared to January levels. Paid subscriptions to the Claude Pro plan have roughly doubled since the start of the year, according to a person familiar with the figures, though the company declined to provide specific numbers. The commercial windfall offers a striking counterpoint to the administration’s effort to punish Anthropic for its refusal, suggesting that the public’s appetite for AI products is increasingly influenced by perceptions of a company’s ethical commitments.

Cybersecurity

Anthropic Accuses Chinese Labs of Industrial-Scale Model Theft via 24,000 Fake Accounts

Anthropic has filed a detailed complaint alleging that three Chinese artificial intelligence companies — DeepSeek, Moonshot AI, and MiniMax — operated a coordinated network of approximately 24,000 fraudulent accounts to systematically extract proprietary knowledge from its Claude model. The accounts, which Anthropic says were created using stolen or fabricated credentials over a period of several months, conducted more than 16 million exchanges with Claude in what the company describes as an industrial-scale distillation operation designed to replicate the model’s capabilities without bearing the cost of original research.

The complaint details distinct extraction strategies pursued by each laboratory. DeepSeek’s accounts allegedly focused on reconstructing Claude’s chain-of-thought reasoning processes, probing the model with carefully structured prompts designed to elicit its internal deliberation patterns. Moonshot AI’s operation targeted Claude’s agentic reasoning capabilities, systematically testing the model’s ability to plan, execute, and self-correct across complex multi-step tasks. Anthropic’s security team warned that models trained on stolen outputs may lack the safety guardrails built into the original system, potentially creating powerful AI tools that operate without the behavioral constraints their training data was designed to embody.

Platforms

Apple to Replace Core ML with ‘Core AI’ Framework at WWDC 2026

Apple is preparing to unveil a new developer framework called “Core AI” at its Worldwide Developers Conference in June, according to Bloomberg’s Mark Gurman, in what would represent the company’s most significant overhaul of its machine learning infrastructure since the introduction of Core ML in 2017. The new framework, expected to ship with iOS 27, will provide native support for third-party generative AI models, allowing developers to integrate large language models, image generators, and other foundation model capabilities directly into their applications through Apple’s standard development tools.

The move reflects Apple’s evolving strategy for artificial intelligence, which has increasingly emphasized partnerships with external model providers rather than exclusively relying on its own on-device capabilities. Gurman reports that Core AI will include first-class support for Google’s Gemini, a product of the expanded partnership between the two companies announced earlier this year. Both Core ML and Core AI will coexist during a transition period, with Apple expected to deprecate the older framework over the course of two to three major iOS releases. The shift signals that Apple views the integration of third-party generative AI as a platform-level capability rather than a feature confined to individual applications.


Developer Tools

Open Source Roundup

Frameworks

Microsoft Agent Framework Reaches Release Candidate, Unifying AutoGen and Semantic Kernel

Microsoft has released the first release candidate of its unified Agent Framework, a single SDK for .NET and Python that consolidates the capabilities of the company’s two previously separate AI orchestration libraries, AutoGen and Semantic Kernel. The merger, which has been under development since late 2025, aims to eliminate the confusion that plagued developers forced to choose between two overlapping but incompatible toolkits maintained by different teams within Microsoft.

The combined framework introduces a standardized multi-agent orchestration model with built-in handoff logic, allowing developers to compose systems in which specialized agents delegate tasks to one another through a declarative configuration layer. It supports connections to multiple large language model providers, including OpenAI, Anthropic, Google, and open-source models hosted on Azure, through a unified provider interface that abstracts away vendor-specific API differences. Microsoft said the stable API surface has been frozen ahead of a planned general availability release in the first quarter of 2026.
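The handoff pattern the framework standardizes can be sketched in plain Python. The classes and routing rule below are invented for illustration and are not the Agent Framework’s actual API; the point is only the shape of declarative delegation between specialized agents.

```python
# Conceptual sketch of declarative multi-agent handoff. These names are
# illustrative only and do NOT reflect the Microsoft Agent Framework API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]                            # the agent's task logic
    handoffs: dict[str, str] = field(default_factory=dict)  # keyword -> target agent

class Orchestrator:
    """Routes a task between agents until no handoff rule fires."""
    def __init__(self, agents: list[Agent]):
        self.agents = {a.name: a for a in agents}

    def run(self, start: str, task: str) -> str:
        current = self.agents[start]
        while True:
            # Declarative handoff: delegate when any rule's keyword matches the task.
            target = next((t for kw, t in current.handoffs.items() if kw in task), None)
            if target is None:
                return current.handle(task)
            current = self.agents[target]

triage = Agent("triage", lambda t: f"triage: {t}", handoffs={"refund": "billing"})
billing = Agent("billing", lambda t: f"billing handled: {t}")
result = Orchestrator([triage, billing]).run("triage", "refund request #42")
```

Here a triage agent declares, rather than hard-codes, that refund tasks belong to the billing agent; the orchestrator performs the delegation, which is the role the unified framework’s configuration layer plays across providers.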

Frontier Models

GLM-5: Z.AI Ships 744B Open-Source Model With Record-Low Hallucination Rate

Chinese AI laboratory Z.AI, the research division behind the Zhipu platform, has released GLM-5, a 744-billion-parameter mixture-of-experts model with 40 billion active parameters and a 200,000-token context window, distributed under the MIT license. The model achieved the lowest hallucination rate ever recorded on the Artificial Analysis Intelligence Index v4.0, a third-party evaluation framework that measures factual accuracy across thousands of verifiable claims, surpassing both proprietary and open-source competitors.

Z.AI credits much of the model’s factual reliability to a novel training infrastructure the company calls “slime,” an asynchronous reinforcement learning system that decouples reward computation from gradient updates to enable more efficient iteration on truthfulness objectives. The approach allows the training pipeline to process feedback from multiple evaluation criteria simultaneously without the sequential bottlenecks that typically constrain RLHF workflows. GLM-5 is available immediately on Hugging Face, Ollama, and OpenRouter, with full model weights and training documentation published alongside the release.
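The decoupling Z.AI describes can be illustrated schematically. The toy sketch below uses a queue so that reward scoring and parameter updates proceed independently; it captures only the asynchronous idea and bears no relation to slime’s actual implementation.

```python
# Toy illustration of decoupling reward computation from parameter updates,
# the asynchronous pattern attributed to "slime" above. Schematic only; this
# does not reflect Z.AI's actual system, and the reward rule is invented.
import queue
import threading

reward_queue: "queue.Queue[tuple[str, float]]" = queue.Queue()

def score_rollouts(rollouts: list[str]) -> None:
    """Reward worker: scores rollouts and enqueues results without blocking training."""
    for text in rollouts:
        # Stand-in truthfulness reward: penalize a marker token for unsupported claims.
        reward = 0.0 if "[unsupported]" in text else 1.0
        reward_queue.put((text, reward))

def apply_updates(n: int) -> float:
    """Trainer: consumes whatever rewards are ready and folds them into a running score."""
    total = 0.0
    for _ in range(n):
        _, r = reward_queue.get()  # blocks only until the next reward is ready
        total += r                 # placeholder for a gradient step
    return total

rollouts = ["the sky is blue", "cats are reptiles [unsupported]", "2 + 2 = 4"]
threading.Thread(target=score_rollouts, args=(rollouts,), daemon=True).start()
mean_reward = apply_updates(len(rollouts)) / len(rollouts)
```

Because the scorer and the trainer share only a queue, neither waits for the other to finish a full batch, which is the sequential bottleneck the article says slime avoids in RLHF-style pipelines.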


In Brief

Around the Industry

JetBrains and Zed Launch ACP Registry for AI Coding Agents

The two IDE makers have unveiled a vendor-neutral directory of AI coding agents installable directly inside JetBrains IDEs through the new Agent Communication Protocol registry. The platform supports Claude Code, Gemini CLI, GitHub Copilot, and a growing list of third-party agents, giving developers a single interface for discovering and managing AI assistants across their toolchains.

Anthropic Releases Bloom, Open-Source Behavioral Evaluation Framework

Anthropic has published Bloom, a four-stage evaluation system — Understanding, Ideation, Rollout, and Judgment — that automatically generates scenarios to test frontier model behaviors against specified safety and capability criteria. The framework has already been used to benchmark 16 models and is available as an open-source toolkit for the broader research community.

MCP Donated to Linux Foundation’s Agentic AI Foundation

Anthropic has donated its Model Context Protocol to the newly formed Agentic AI Foundation within the Linux Foundation, co-founded alongside OpenAI and Block. Platinum members include AWS, Google, Microsoft, and Bloomberg. MCP joins goose and AGENTS.md as founding projects in the consortium’s effort to establish open standards for AI agent interoperability.

UNESCO: AI Could Cut Music Creator Revenue by 24% by 2028

A UNESCO report covering more than 120 countries projects a 24 percent revenue decline for musicians and a 21 percent drop for audiovisual creators by 2028, with translators facing losses as steep as 56 percent. The study catalogs 8,100 proposed policy measures across member states aimed at mitigating the economic displacement of creative professionals by generative AI.

AI-Generated Mass Emails Derailed Southern California Air Quality Regulation

Tens of thousands of AI-generated emails flooded a Southern California air pollution authority during its public comment period on a proposed gas appliance phaseout, ultimately causing the agency to scrap the regulation entirely. Many of the purported authors told investigators they had never written the messages, raising urgent questions about the integrity of public comment processes in the age of generative AI.

“It takes a lot of energy to train a human.” Sam Altman, OpenAI CEO, defending AI data center energy consumption

Regulation

EU Escalates Grok Deepfake Probe; Musk Summoned to Appear April 20

European regulators have widened their investigation into the Grok chatbot’s role in generating and distributing deepfake content, adding charges related to Holocaust denial and the alleged manipulation of training data to the existing probe. French police raided X’s Paris offices on February 3 as part of the expanded inquiry, seizing server logs and internal communications. Prosecutors have now formally summoned Elon Musk and former X chief executive Linda Yaccarino to appear before an investigating magistrate on April 20.

The escalation follows months of complaints from EU member states about Grok’s tendency to generate historically revisionist and defamatory content when prompted on sensitive topics. The United Kingdom has opened a parallel investigation through its Online Safety Act enforcement division, focusing on Grok’s distribution of non-consensual intimate imagery. Legal analysts say the case represents the most aggressive regulatory action yet taken against a generative AI product under the EU’s AI Act and Digital Services Act frameworks, and could establish significant precedent for how liability attaches to AI system operators rather than solely to the users who craft the prompts.

Research

ETH Zurich: Overly Detailed AGENTS.md Files Hurt AI Coding Agent Performance

A study from ETH Zurich examining 138 open-source repositories and 5,694 pull requests has found that automatically generated, comprehensive AGENTS.md context files — the repository-level instruction documents increasingly used to guide AI coding agents — actually reduce task completion rates by approximately 3 percent while inflating token costs by more than 20 percent. The counterintuitive finding challenges the widespread assumption that more context invariably improves AI agent performance.

Even carefully hand-authored AGENTS.md files produced only a modest 4 percent improvement in success rates, the researchers found, a gain that was often offset by the increased cost and latency of processing the additional context. The study recommends a tiered, task-relevant injection strategy that selectively surfaces only the portions of repository documentation pertinent to the specific task at hand, an approach that reduced token consumption by 60 to 80 percent in the researchers’ experiments without sacrificing — and in some cases improving — the agent’s ability to generate correct solutions.
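The tiered, task-relevant injection strategy can be sketched as a selection step before the agent ever sees the repository documentation. The word-overlap heuristic below is our illustration, not the researchers’ method; the point is that only sections relevant to the task reach the context window.

```python
# Sketch of tiered, task-relevant context injection: surface only the
# AGENTS.md sections that match the task. The word-overlap scoring here
# is an invented stand-in for whatever selection the study used.

def select_sections(task: str, sections: dict[str, str], max_sections: int = 2) -> str:
    """Return only the sections most relevant to the task description."""
    task_words = {w for w in task.lower().split() if len(w) > 3}
    def relevance(body: str) -> int:
        return len(task_words & set(body.lower().split()))
    ranked = sorted(sections.items(), key=lambda kv: relevance(kv[1]), reverse=True)
    chosen = [f"## {title}\n{body}" for title, body in ranked[:max_sections]
              if relevance(body) > 0]
    return "\n\n".join(chosen)

agents_md = {
    "Testing": "run pytest before committing and keep tests fast",
    "Style": "format code with black and sort imports",
    "Deploy": "never push directly to the production branch",
}
context = select_sections("fix the failing pytest tests", agents_md)
```

For a test-fixing task, only the testing guidance is injected; the style and deployment sections stay out of the prompt, which is how the study’s approach cut token consumption without hurting correctness.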


Developer

Trending on GitHub

Repo | Language | Stars | Description
x1xhlol/system-prompts-and-models-of-ai-tools | Markdown | 123.7K | System prompts scraped from Cursor, Windsurf, Copilot, and other AI coding tools
obra/superpowers | Shell | 61.2K | Agentic skills framework and methodology for LLM-powered development workflows
D4Vinci/Scrapling | Python | 11.9K | Adaptive, high-performance web scraping framework with automatic bot-detection bypass
alibaba/zvec | C++ | 7.8K | Lightweight in-process embedded vector database for low-overhead similarity search
ruvnet/wifi-densepose | Rust | 5.5K | WiFi-based human pose estimation and vital sign monitoring without cameras
cloudflare/agents | TypeScript | 4.2K | Framework for stateful, long-running AI agents on Cloudflare Workers
moeru-ai/airi | TypeScript | ~1K | Self-hosted local AI companion with voice chat, VRM models, and autonomous game-playing
NousResearch/hermes-agent | Python | New | Open-source model-agnostic personal AI agent with persistent memory and multi-platform messaging