Landscape Overview
The agent orchestration ecosystem has stratified into five distinct layers. Protocols define how agents connect to tools (MCP) and to each other (A2A). Frameworks provide the programming model. Infrastructure handles durability and scale. Platforms offer visual and managed building. And the AAIF provides neutral governance above it all.
block-beta
columns 1
block:L5:1
columns 3
space AAIF["AAIF\n(Linux Foundation)"] space
end
block:L4:1
columns 3
N8N["n8n"] ZAPIER["Zapier"] VERTEX_B["Vertex AI\nAgent Builder"]
end
block:L3:1
columns 2
TEMPORAL["Temporal"] VERTEX_E["Vertex\nAgent Engine"]
end
block:L2:1
columns 6
LANGGRAPH["LangGraph"] CREWAI["CrewAI"] OPENAI_SDK["OpenAI SDK"] CLAUDE_SDK["Claude SDK"] ADK["ADK"] AUTOGEN["AutoGen"]
end
block:L1:1
columns 4
MCP_P["MCP"] A2A_P["A2A"] ACP_P["ACP"] AGENTS_P["AGENTS.md"]
end
style AAIF fill:#64D2FF,stroke:#64D2FF,color:#0a0010
style N8N fill:#30D158,stroke:#30D158,color:#0a0010
style ZAPIER fill:#30D158,stroke:#30D158,color:#0a0010
style VERTEX_B fill:#30D158,stroke:#30D158,color:#0a0010
style TEMPORAL fill:#FF9F0A,stroke:#FF9F0A,color:#0a0010
style VERTEX_E fill:#FF9F0A,stroke:#FF9F0A,color:#0a0010
style LANGGRAPH fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style CREWAI fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style OPENAI_SDK fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style CLAUDE_SDK fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style ADK fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style AUTOGEN fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style MCP_P fill:#0A84FF,stroke:#0A84FF,color:#ffffff
style A2A_P fill:#0A84FF,stroke:#0A84FF,color:#ffffff
style ACP_P fill:#0A84FF,stroke:#0A84FF,color:#ffffff
style AGENTS_P fill:#0A84FF,stroke:#0A84FF,color:#ffffff
style L5 fill:#111111,stroke:#2c2c2e
style L4 fill:#111111,stroke:#2c2c2e
style L3 fill:#111111,stroke:#2c2c2e
style L2 fill:#111111,stroke:#2c2c2e
style L1 fill:#111111,stroke:#2c2c2e
The Protocol Layer
Four complementary protocols define how agents interact with the world and each other. MCP connects agents to tools vertically, A2A connects agents to each other horizontally, ACP has merged into A2A under Linux Foundation governance, and AGENTS.md provides repo-level discoverability.
graph LR
MCP["MCP\n(Anthropic)"]
A2A["A2A\n(Google)"]
ACP["ACP\n(IBM)"]
AGENTS["AGENTS.md\n(OpenAI)"]
AAIF["AAIF\n(Linux Foundation)"]
MCP -->|"agent-to-tool\n(vertical)"| AAIF
A2A -->|"agent-to-agent\n(horizontal)"| AAIF
ACP -->|"merging into A2A"| A2A
AGENTS -->|"repo-level config"| AAIF
style MCP fill:#0A84FF,stroke:#0A84FF,color:#ffffff
style A2A fill:#0A84FF,stroke:#0A84FF,color:#ffffff
style ACP fill:#0A84FF,stroke:#0A84FF,color:#ffffff
style AGENTS fill:#0A84FF,stroke:#0A84FF,color:#ffffff
style AAIF fill:#64D2FF,stroke:#64D2FF,color:#0a0010
MCP -- "USB-C for AI"
The Model Context Protocol defines a client-server architecture for connecting agents to tools, databases, and APIs. With 97M+ monthly SDK downloads, MCP has been adopted by every major vendor -- OpenAI, Google, and Microsoft among them -- and by virtually all IDE-based coding agents.
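On the wire, MCP frames requests as JSON-RPC 2.0 messages over stdio or HTTP. The sketch below builds a `tools/call` request in that framing; the tool name and arguments are illustrative, not from any real server:

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request using JSON-RPC 2.0 framing."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# A client would write this to the server's stdin (stdio transport)
# or POST it (HTTP transport). Tool name here is hypothetical.
req = mcp_tool_call(1, "search_docs", {"query": "agent orchestration"})
```

The same framing covers discovery (`tools/list`) and the other MCP methods; only `method` and `params` change.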
A2A -- Agent-to-Agent
Google's protocol for multi-agent communication. Features Agent Cards for capability discovery, structured messaging with parts (text, files, data), streaming support, and push notifications for long-running tasks.
ACP -- Agent Communication Protocol
RESTful HTTP protocol from IBM Research for agent interoperability. Officially merged into A2A under Linux Foundation governance, consolidating the agent-to-agent communication layer.
AGENTS.md
Simple file standard (like robots.txt for AI agents). Placed in repository roots, it declares agent capabilities, supported protocols, and auth requirements. Already in 60K+ repos, supported by Cursor, Devin, Gemini CLI, and GitHub Copilot.
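The format is deliberately free-form markdown rather than a fixed schema. A sketch of what such a file might contain -- the section names, commands, and environment variable below are illustrative, not part of any standard:

```markdown
# AGENTS.md (illustrative example)

## Setup
- Install dependencies: `npm install`
- Run tests: `npm test`

## Agent capabilities
- Supported protocols: MCP (stdio), A2A
- Auth: API key via the `EXAMPLE_API_KEY` environment variable

## Conventions
- Run the linter before committing
- Keep commits scoped to a single change
```

Because it is plain markdown at a well-known path, any agent that can read files can consume it without a parser or SDK.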
Framework Philosophies
Frameworks differ fundamentally in their core abstraction. Some model workflows as directed graphs (LangGraph), others as role-playing teams (CrewAI), others as minimal SDK primitives (OpenAI, Anthropic). And Anthropic's own research team argues the most successful implementations use none of them -- just raw API calls with composable patterns.
graph TD
subgraph MINIMAL["Minimal / DIY"]
RAW["Raw API Calls"]
CHAIN["Prompt Chaining"]
SPAWN["Subprocess Spawning"]
RAW --> CHAIN --> SPAWN
end
subgraph GRAPH["Graph Engines"]
LG["LangGraph"]
NODES["Nodes + Edges + State"]
LG --> NODES
end
subgraph ROLE["Role-Based"]
CR["CrewAI"]
CREW["Agents + Crews + Tasks"]
CR --> CREW
end
subgraph SDK["SDK-Native"]
SDKS["OpenAI SDK\nClaude SDK"]
PRIMS["Agent + Handoff + Tool"]
SDKS --> PRIMS
end
style RAW fill:#5E5CE6,stroke:#5E5CE6,color:#ffffff
style CHAIN fill:#5E5CE6,stroke:#5E5CE6,color:#ffffff
style SPAWN fill:#5E5CE6,stroke:#5E5CE6,color:#ffffff
style LG fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style NODES fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style CR fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style CREW fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style SDKS fill:#FF375F,stroke:#FF375F,color:#ffffff
style PRIMS fill:#FF375F,stroke:#FF375F,color:#ffffff
style MINIMAL fill:#111111,stroke:#5E5CE6
style GRAPH fill:#111111,stroke:#BF5AF2
style ROLE fill:#111111,stroke:#BF5AF2
style SDK fill:#111111,stroke:#FF375F
Anthropic's own "Building Effective Agents" blog post recommends starting with the simplest possible approach: direct API calls with prompt chaining. They found that the most successful production agent implementations often avoid frameworks entirely, using composable patterns like routing, parallelization, and orchestrator-worker delegation built from scratch. Frameworks add value when you need persistence, complex state management, or visual debugging -- but raw API calls give you maximum control and minimum abstraction overhead.
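The chaining approach can be sketched in a few lines of provider-agnostic Python. The `llm` callable below stands in for any raw completion call (OpenAI, Anthropic, or otherwise); the demo injects a fake model so the control flow is visible without network access:

```python
from typing import Callable

def chain(steps: list[str], llm: Callable[[str], str], text: str) -> str:
    """Prompt chaining: each step's output becomes the next step's input,
    with a validation gate between steps for quality control."""
    for template in steps:
        text = llm(template.format(input=text))
        if not text.strip():  # validation gate: fail fast on empty output
            raise ValueError(f"empty output at step: {template!r}")
    return text

# Demo with a fake model; a real `llm` would wrap an HTTP API call.
fake_llm = lambda prompt: prompt.upper()
result = chain(["summarize: {input}", "translate: {input}"], fake_llm, "hi")
```

Swapping in a real provider only changes the `llm` callable; the pattern itself stays a dozen lines.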
LangGraph & CrewAI
The two most popular open-source agent frameworks take radically different approaches. LangGraph models workflows as stateful directed graphs with conditional edges and checkpoints. CrewAI models them as teams of role-playing agents with managers, delegation, and task pipelines.
graph LR
STATE["State\nObject"]
A["Node A\n(LLM Call)"]
COND{"Conditional\nEdge"}
B["Node B\n(Analyze)"]
C["Node C\n(Summarize)"]
D["Node D\n(Tool Call)"]
CKPT[("Checkpoint\nPersist")]
DONE["END"]
STATE --> A
A --> COND
COND -->|"route A"| B
COND -->|"route B"| C
B --> D
C --> D
D --> CKPT
CKPT --> DONE
style STATE fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style A fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style COND fill:#FF375F,stroke:#FF375F,color:#ffffff
style B fill:#1c1c1e,stroke:#BF5AF2,color:#f5f5f7
style C fill:#1c1c1e,stroke:#BF5AF2,color:#f5f5f7
style D fill:#0A84FF,stroke:#0A84FF,color:#ffffff
style CKPT fill:#FF9F0A,stroke:#FF9F0A,color:#0a0010
style DONE fill:#30D158,stroke:#30D158,color:#0a0010
graph TD
MGR["Manager\nAgent"]
RES["Researcher\nAgent"]
WRT["Writer\nAgent"]
EDT["Editor\nAgent"]
R_OUT["Research\nOutput"]
DRAFT["Draft"]
FINAL["Final\nOutput"]
MGR -->|"delegates"| RES
MGR -->|"delegates"| WRT
MGR -->|"delegates"| EDT
RES --> R_OUT
R_OUT --> WRT
WRT --> DRAFT
DRAFT --> EDT
EDT --> FINAL
style MGR fill:#FF375F,stroke:#FF375F,color:#ffffff
style RES fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style WRT fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style EDT fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style R_OUT fill:#1c1c1e,stroke:#0A84FF,color:#f5f5f7
style DRAFT fill:#1c1c1e,stroke:#0A84FF,color:#f5f5f7
style FINAL fill:#30D158,stroke:#30D158,color:#0a0010
LangGraph
47M+ downloads, the most mature stateful agent system. Model-agnostic, supports scatter-gather patterns, subgraphs, reducers for parallel state merging, and human-in-the-loop interrupts. The highest complexity ceiling of the major frameworks, at the cost of a steeper learning curve. Checkpoint persistence enables time-travel debugging.
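The core abstraction -- nodes that transform shared state, conditional edges that route between them, and a checkpoint persisted at every step -- can be illustrated without the framework. This is a framework-free sketch of the idea, not the LangGraph API:

```python
import copy

def run_graph(nodes, edges, state, start):
    """Run nodes over a shared state dict until an edge routes to END.
    Every transition is checkpointed, which is what enables replay
    and time-travel debugging."""
    trail = []
    current = start
    while current != "END":
        state = nodes[current](state)
        trail.append((current, copy.deepcopy(state)))  # checkpoint persist
        current = edges[current](state)                # conditional edge
    return state, trail

nodes = {
    "classify": lambda s: {**s, "route": "long" if len(s["text"]) > 5 else "short"},
    "long":     lambda s: {**s, "out": s["text"][:5] + "..."},
    "short":    lambda s: {**s, "out": s["text"]},
}
edges = {
    "classify": lambda s: s["route"],  # route based on computed state
    "long":     lambda s: "END",
    "short":    lambda s: "END",
}
final, trail = run_graph(nodes, edges, {"text": "hello world"}, "classify")
```

LangGraph layers reducers, subgraphs, and durable checkpointers on top of this loop; the state-in, state-out contract is the part that carries over.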
CrewAI
44.6K GitHub stars, fastest path from zero to working multi-agent prototype. Agents have roles, goals, and backstories. Crews assemble agents with sequential or hierarchical process flows. First-class MCP tool support. 100K+ developers certified through CrewAI University.
OpenAI & Anthropic SDKs
Both leading model providers have released their own agent SDKs with distinctly different philosophies. OpenAI emphasizes minimal primitives (Agent, Handoff, Guardrail) for fast multi-agent assembly. Anthropic builds around isolated subagent contexts and deep MCP integration, powering Claude Code itself.
graph LR
AGENT_A["Agent A\n(instructions + tools)"]
LOOP["Agent Loop\n(run)"]
TOOL["Tool Call"]
HAND["Handoff\nto Agent B"]
GUARD["Guardrail\n(validates)"]
RESP["Response"]
AGENT_A --> LOOP
LOOP --> TOOL
LOOP --> HAND
LOOP --> GUARD
GUARD -->|"pass"| RESP
TOOL -->|"result"| LOOP
HAND -->|"transfer"| LOOP
style AGENT_A fill:#30D158,stroke:#30D158,color:#0a0010
style LOOP fill:#0A84FF,stroke:#0A84FF,color:#ffffff
style TOOL fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style HAND fill:#FF9F0A,stroke:#FF9F0A,color:#0a0010
style GUARD fill:#FF375F,stroke:#FF375F,color:#ffffff
style RESP fill:#1c1c1e,stroke:#30D158,color:#f5f5f7
graph LR
ORCH["Orchestrator"]
SUB["Subagent\n(isolated context)"]
SKILLS["Skills +\nMCP Tools"]
RESULT["Result"]
SYNTH["Orchestrator\nSynthesizes"]
ORCH -->|"spawns"| SUB
SUB --> SKILLS
SKILLS --> RESULT
RESULT --> SYNTH
style ORCH fill:#FF375F,stroke:#FF375F,color:#ffffff
style SUB fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style SKILLS fill:#0A84FF,stroke:#0A84FF,color:#ffffff
style RESULT fill:#1c1c1e,stroke:#30D158,color:#f5f5f7
style SYNTH fill:#FF375F,stroke:#FF375F,color:#ffffff
OpenAI Agents SDK
Three primitives: Agent (instructions + tools), Handoff (transfer between agents), and Guardrail (input/output validation). Build working multi-agent systems in under 100 lines. Temporal integration for durable execution. Guardrails run in parallel with the main agent execution loop, catching violations without blocking.
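A rough sketch of how the three primitives compose -- these are not the SDK's actual classes or signatures, and the guardrail here is checked sequentially for simplicity rather than in parallel as the SDK does:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """Sketch of the Agent primitive: instructions + tools (`run`),
    an output guardrail, and keyword-triggered handoffs."""
    name: str
    run: Callable[[str], str]
    guardrail: Callable[[str], bool] = lambda out: True
    handoffs: dict = field(default_factory=dict)  # keyword -> Agent

def execute(agent: Agent, task: str) -> str:
    for keyword, target in agent.handoffs.items():
        if keyword in task:
            return execute(target, task)          # handoff: transfer control
    out = agent.run(task)
    if not agent.guardrail(out):                  # guardrail: validate output
        raise ValueError(f"{agent.name}: guardrail rejected output")
    return out

# Hypothetical two-agent setup: triage hands billing questions off.
billing = Agent("billing", run=lambda t: "refund issued")
triage = Agent("triage", run=lambda t: "answered",
               handoffs={"refund": billing})
```

The SDK adds the real agent loop (tool calls, streaming, tracing), but the shape -- small composable primitives rather than a graph DSL -- is the design point.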
Claude Agent SDK
Powers Claude Code itself. Orchestrator spawns subagents with isolated context windows, preventing cross-contamination between parallel tasks. Achieves ~70% token reduction via the orchestrator-worker pattern. Deep MCP integration means every tool server is available to subagents. Agent Skills system for reusable capability modules.
Microsoft & Google
The two cloud giants have converged on similar strategies: open-source frameworks that deploy to their managed cloud platforms. Microsoft merged its dual-track efforts (Semantic Kernel + AutoGen) into a unified Agent Framework. Google's ADK is model-flexible and deploys to Vertex AI.
graph TD
subgraph MS["Microsoft"]
SK["Semantic Kernel\n(.NET, 27K stars)"]
AG["AutoGen\n(Research)"]
MAF["Microsoft\nAgent Framework"]
AZURE["Azure AI\nFoundry"]
SK -->|"enterprise features"| MAF
AG -->|"agent abstractions"| MAF
MAF -->|"deploys to"| AZURE
end
subgraph GOOG["Google"]
ADK_F["ADK\n(Open Source)"]
LIT["LiteLLM\n(Model Flexible)"]
VERTEX["Vertex AI\nAgent Engine"]
ADK_F -->|"supports"| LIT
ADK_F -->|"deploys to"| VERTEX
end
style SK fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style AG fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style MAF fill:#FF375F,stroke:#FF375F,color:#ffffff
style AZURE fill:#0A84FF,stroke:#0A84FF,color:#ffffff
style ADK_F fill:#30D158,stroke:#30D158,color:#0a0010
style LIT fill:#1c1c1e,stroke:#30D158,color:#f5f5f7
style VERTEX fill:#0A84FF,stroke:#0A84FF,color:#ffffff
style MS fill:#111111,stroke:#BF5AF2
style GOOG fill:#111111,stroke:#30D158
Microsoft Agent Framework
Best for .NET/Azure shops. Merges Semantic Kernel's enterprise-grade state management with AutoGen's multi-agent research abstractions. Purpose-built for long-running human-in-the-loop scenarios. GitHub Copilot SDK integration. 27K GitHub stars for Semantic Kernel. RC status targeting GA in Q1 2026.
Google ADK
"Production-ready agents in under 100 lines." Model-flexible via LiteLLM -- supports Gemini, Anthropic, Meta, and Mistral models. Native A2A protocol support for multi-agent discovery. Tight Google Cloud Platform integration with Vertex AI Agent Engine for managed deployment and scaling.
Infrastructure Layer
Temporal is the only solution purpose-built for making agent workflows survive infrastructure failures. Its Event History persists an agent's entire decision chain, allowing workflows to resume after crashes, restarts, or infrastructure changes. OpenAI integrated Temporal into their Agents SDK for durable multi-agent execution.
sequenceDiagram
participant Agent
participant Temporal
participant ToolA
participant ToolB
Agent->>Temporal: Start workflow
Temporal->>Agent: Assign task
Agent->>ToolA: Call tool
ToolA-->>Agent: Result
Note over Agent,Temporal: Agent crashes!
Temporal->>Agent: Replay event history
Agent->>ToolB: Resume from checkpoint
ToolB-->>Agent: Result
Agent->>Temporal: Complete workflow
Temporal
99.999% uptime target. Workflows can run for hours, days, or months without losing state. Language-agnostic with SDKs for Go, Java, Python, and TypeScript. Event History persists every decision point, enabling replay after any failure. Significant operational complexity but invaluable for mission-critical agent work requiring guaranteed execution.
Vertex Agent Engine
Google's managed infrastructure for running agents at scale. Handles session state, tool execution, and agent orchestration. Integrated with Google ADK and the A2A protocol. Supports multi-agent systems with built-in authentication and logging. Removes the operational burden of self-hosted infrastructure.
Agent workflows are fundamentally different from request/response APIs. A coding agent might spend 20 minutes editing files, running tests, and iterating on fixes. A research agent might take hours to gather, analyze, and synthesize information. Without durable execution, a network blip or process restart means starting over from scratch. Temporal's event-sourced architecture means that even if the worker process dies mid-workflow, the new worker picks up exactly where the old one left off -- no lost work, no repeated API calls, no wasted tokens.
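The event-sourcing idea behind durable execution can be sketched in a few lines: record each completed step's result, and on restart replay history so finished work is skipped rather than redone. This illustrates the principle only -- it is not Temporal's API:

```python
def durable_run(steps, history):
    """steps: list of (name, fn) pairs; history: dict of completed results.
    Replays recorded results and only executes steps not yet in history."""
    executed = []
    for name, fn in steps:
        if name in history:          # replay: result already recorded
            continue
        history[name] = fn()         # first execution: do the work, persist
        executed.append(name)
    return history, executed

steps = [("fetch", lambda: "data"), ("analyze", lambda: "report")]

# A first worker crashed after "fetch"; its persisted history survives.
history = {"fetch": "data"}
history, executed = durable_run(steps, history)
# Only "analyze" actually runs -- no repeated work, no wasted tokens.
```

Temporal's real implementation persists the event history server-side and guarantees deterministic replay, but the skip-what-already-happened loop is the essence.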
Visual & No-Code Platforms
Visual and managed platforms bridge the gap between raw framework code and business-accessible agent building. n8n leads with deep LangChain integration and self-hosting. Zapier's MCP server gives any MCP-compatible AI tool access to 8,000+ app integrations. Google's Vertex AI Agent Builder provides the full managed lifecycle.
graph LR
N8N["n8n\n(Visual Builder)"]
AI_NODES["70+ AI Nodes"]
LANG_INT["LangChain\nIntegration"]
SELF_HOST["Self-Hostable"]
ZAP["Zapier"]
MCP_S["MCP Server"]
APPS["8,000+ Apps\n40K+ Actions"]
VERTEX_B["Vertex AI\nAgent Builder"]
ADK_A["ADK Agents"]
MANAGED["Managed\nRuntime"]
SCALE["Scale &\nGovern"]
USERS["End Users /\nDeployed Agents"]
N8N --> AI_NODES
AI_NODES --> LANG_INT
LANG_INT --> SELF_HOST
SELF_HOST --> USERS
ZAP --> MCP_S
MCP_S --> APPS
APPS --> USERS
VERTEX_B --> ADK_A
ADK_A --> MANAGED
MANAGED --> SCALE
SCALE --> USERS
style N8N fill:#30D158,stroke:#30D158,color:#0a0010
style AI_NODES fill:#30D158,stroke:#30D158,color:#0a0010
style LANG_INT fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style SELF_HOST fill:#1c1c1e,stroke:#30D158,color:#f5f5f7
style ZAP fill:#30D158,stroke:#30D158,color:#0a0010
style MCP_S fill:#0A84FF,stroke:#0A84FF,color:#ffffff
style APPS fill:#1c1c1e,stroke:#30D158,color:#f5f5f7
style VERTEX_B fill:#30D158,stroke:#30D158,color:#0a0010
style ADK_A fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style MANAGED fill:#FF9F0A,stroke:#FF9F0A,color:#0a0010
style SCALE fill:#FF9F0A,stroke:#FF9F0A,color:#0a0010
style USERS fill:#FF375F,stroke:#FF375F,color:#ffffff
n8n
Open-source visual workflow automation. 70+ AI nodes covering LLMs, embeddings, vector DBs, OCR, and image generation. $2.5B valuation after $180M Series C (Oct 2025). 5x revenue growth since AI pivot. Best visual builder for AI agent workflows with deep LangChain integration and full self-hosting support.
Zapier
450+ AI-focused app integrations. MCP-native -- AI tools trigger Zapier workflows without custom code. 8,000+ apps and 40,000+ actions available through a single MCP server connection. Closed-source, usage-based pricing. The fastest path from AI agent to real-world action.
Vertex AI Agent Builder
Managed platform for the full agent lifecycle. Hosts ADK agents on Agent Engine Runtime with built-in authentication, logging, and scaling. Best for organizations already on GCP who want enterprise governance without managing infrastructure.
The DIY Pattern Library
Anthropic's influential "Building Effective Agents" blog post (December 2024) argued that the most successful agent implementations they observed across dozens of teams were NOT using complex frameworks. They identified five composable patterns implementable with raw API calls. Their recommendation: "Find the simplest solution possible and only increase complexity when needed."
graph TD
RAW["Raw LLM API"]
PC["Prompt\nChaining"]
PC_D["A outputs →\nB inputs →\nC outputs"]
RT["Routing"]
RT_D["Classify →\ndispatch to\nspecialist"]
PL["Parallelization"]
PL_D["Fan out →\nconcurrent\nsubtasks → merge"]
OW["Orchestrator-\nWorkers"]
OW_D["One LLM dispatches →\nworkers execute →\nsynthesize"]
EO["Evaluator-\nOptimizer"]
EO_D["Generate →\nevaluate →\niterate"]
RAW --> PC
RAW --> RT
RAW --> PL
RAW --> OW
RAW --> EO
PC --> PC_D
RT --> RT_D
PL --> PL_D
OW --> OW_D
EO --> EO_D
style RAW fill:#5E5CE6,stroke:#5E5CE6,color:#ffffff
style PC fill:#5E5CE6,stroke:#5E5CE6,color:#ffffff
style RT fill:#5E5CE6,stroke:#5E5CE6,color:#ffffff
style PL fill:#5E5CE6,stroke:#5E5CE6,color:#ffffff
style OW fill:#5E5CE6,stroke:#5E5CE6,color:#ffffff
style EO fill:#5E5CE6,stroke:#5E5CE6,color:#ffffff
style PC_D fill:#1c1c1e,stroke:#5E5CE6,color:#f5f5f7
style RT_D fill:#1c1c1e,stroke:#5E5CE6,color:#f5f5f7
style PL_D fill:#1c1c1e,stroke:#5E5CE6,color:#f5f5f7
style OW_D fill:#1c1c1e,stroke:#5E5CE6,color:#f5f5f7
style EO_D fill:#1c1c1e,stroke:#5E5CE6,color:#f5f5f7
Claude Code itself is a case study in the DIY approach at scale. Its orchestrator-worker pattern spawns subagents with isolated context windows -- no framework, just well-structured subprocess management. The v1.0.60 native agent system achieved ~70% token reduction vs. earlier approaches.
graph TD
subgraph Gastown["Gastown (Steve Yegge, Jan 2026)"]
MAYOR["Mayor\n(Orchestrator)"]
BEADS["Beads\n(Git-backed state)"]
REFINERY["Refinery\n(Merge Queue)"]
end
subgraph Workers["Polecats (20-30 parallel agents)"]
W1["Claude Code\nInstance"]
W2["Claude Code\nInstance"]
W3["Codex / Goose\nInstance"]
end
subgraph Infra["Infrastructure"]
WT["Git Worktrees\n(Isolation)"]
GUPP["GUPP\n(Git handoffs)"]
WITNESS["Witness\n(Health monitor)"]
end
MAYOR --> BEADS
BEADS --> W1
BEADS --> W2
BEADS --> W3
W1 --> WT
W2 --> WT
W3 --> WT
WT --> GUPP
GUPP --> REFINERY
WITNESS --> Workers
style MAYOR fill:#FF375F,stroke:#FF375F,color:#ffffff
style BEADS fill:#5E5CE6,stroke:#5E5CE6,color:#ffffff
style REFINERY fill:#FF9F0A,stroke:#FF9F0A,color:#0a0010
style W1 fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style W2 fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style W3 fill:#BF5AF2,stroke:#BF5AF2,color:#ffffff
style WT fill:#1c1c1e,stroke:#5E5CE6,color:#f5f5f7
style GUPP fill:#1c1c1e,stroke:#5E5CE6,color:#f5f5f7
style WITNESS fill:#1c1c1e,stroke:#FF9F0A,color:#f5f5f7
Created by Steve Yegge (40+ years at Amazon, Google, Sourcegraph), Gastown coordinates 20-30 Claude Code instances working in parallel on the same codebase. Written in Go (~189K LOC), it uses seven specialized agent roles: the Mayor (orchestrator), Polecats (ephemeral workers), Refinery (merge queue), Witness (health monitor), and more. All state persists through Beads -- Yegge's Git-backed issue tracker -- making agent identity and work state fully decoupled from LLM context windows. The community has grown to 500+ active Discord members with plans to federate across thousands of developers for collaborative agent-driven projects.
Prompt Chaining
Sequential LLM calls where each output feeds into the next input. Add validation gates between steps for quality control. The most deterministic pattern and easiest to debug. Use for: document generation, data processing pipelines.
Routing
Initial LLM classifies input and dispatches to the right specialist. Each specialist is optimized for its domain with tailored prompts and tools. Use for: customer support triage, multi-domain Q&A, intent-based workflows.
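A minimal routing sketch: a cheap classifier call picks a specialist, each with its own tailored handler. `classify` stands in for an LLM call, and the labels are illustrative:

```python
def route(text, classify, specialists):
    """Classify the input, then dispatch to the matching specialist."""
    label = classify(text)
    return specialists[label](text)

# Fake classifier and specialists; real ones would be tailored prompts.
classify = lambda t: "billing" if "invoice" in t else "general"
specialists = {
    "billing": lambda t: "billing team: " + t,
    "general": lambda t: "support: " + t,
}
answer = route("invoice question", classify, specialists)
```

Because each specialist only sees inputs of its own kind, its prompt and tools can be narrow -- which is the point of the pattern.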
Parallelization
Break task into independent subtasks, run simultaneously, merge results. Two variants: sectioning (divide by topic) and voting (same task, multiple perspectives). Use for: code review, content analysis, multi-source research.
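A sketch of the sectioning variant: fan subtasks out to concurrent workers, then merge. LLM calls are I/O-bound, so threads are the natural fit; `analyze` is a stand-in for a real model call:

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(sections, analyze):
    """Run analyze over each section concurrently, then merge in order."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(analyze, sections))  # preserves input order
    return " | ".join(results)                       # merge step

analyze = lambda s: s.upper()  # stand-in for a per-section LLM call
merged = fan_out(["intro", "body", "end"], analyze)
```

The voting variant is the same fan-out with identical inputs and a merge step that tallies agreement instead of concatenating.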
Orchestrator-Workers
One LLM dynamically decomposes the task and delegates to workers. Workers operate independently; orchestrator synthesizes results. The pattern behind Claude Code. Use for: complex refactoring, multi-file edits, research synthesis.
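A sketch of the shape: a planner call decomposes the task, each worker runs with its own isolated context (a fresh dict here, standing in for a fresh context window), and the orchestrator synthesizes. All three callables stand in for LLM calls:

```python
def orchestrate(task, plan, worker, synthesize):
    """Decompose dynamically, run workers in isolation, synthesize results."""
    subtasks = plan(task)                                   # decomposition
    results = [worker({"task": sub}) for sub in subtasks]   # isolated contexts
    return synthesize(results)                              # synthesis

plan = lambda t: [f"{t}: part {i}" for i in (1, 2)]
worker = lambda ctx: ctx["task"].upper()
synthesize = lambda rs: "; ".join(rs)
final = orchestrate("audit", plan, worker, synthesize)
```

The isolation matters more than the loop: because each worker sees only its own subtask, parallel work cannot contaminate a sibling's context.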
Evaluator-Optimizer
One LLM generates output, another evaluates it against criteria. Loop until quality threshold is met. Use for: code that must pass tests, translation refinement, content that must meet specific rubrics.
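A sketch of the loop: one role generates, another scores against a rubric and returns feedback, and the cycle repeats until the threshold or an attempt cap is hit. Both callables stand in for LLM calls:

```python
def refine(generate, evaluate, threshold=0.9, max_rounds=5):
    """Generate, evaluate, iterate until score >= threshold or cap hit."""
    feedback = None
    for _ in range(max_rounds):
        draft = generate(feedback)        # generator sees prior feedback
        score, feedback = evaluate(draft) # evaluator scores vs. rubric
        if score >= threshold:
            return draft
    return draft  # best effort after the attempt cap

# Fake generator/evaluator pair converging on the third round.
drafts = iter(["rough", "better", "polished"])
generate = lambda fb: next(drafts)
evaluate = lambda d: (1.0, "") if d == "polished" else (0.5, "tighten it")
result = refine(generate, evaluate)
```

The attempt cap is essential in practice: without it, a rubric the generator can never satisfy becomes an infinite loop of paid API calls.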
Standards Convergence Timeline
The convergence story of 2023-2026 is remarkable. What began as isolated experiments -- Microsoft Research's AutoGen paper, Anthropic's internal agent patterns, Google's protocol work -- has consolidated into a layered ecosystem with neutral governance. The formation of the Agentic AI Foundation in December 2025, co-funded by AWS, Anthropic, Google, Microsoft, and OpenAI, signaled that interoperability is no longer optional.
timeline
title Agent Orchestration Timeline
August 2023 : AutoGen paper (Microsoft Research)
Late 2023 : CrewAI open-sourced
November 2024 : MCP open-sourced by Anthropic
December 2024 : Anthropic "Building Effective Agents" blog
March 2025 : OpenAI Agents SDK released
: Claude Agent SDK released
: ACP released by IBM
April 2025 : Google A2A protocol announced
: Google ADK released
June 2025 : A2A donated to Linux Foundation
August 2025 : ACP merges with A2A
: AGENTS.md reaches 60K repos
October 2025 : Microsoft Agent Framework preview
: n8n $2.5B valuation
: Claude Agent Skills system
December 2025 : MCP donated to Linux Foundation
: AAIF formed
January 2026 : Gastown released by Steve Yegge
February 2026 : MCP hits 97M monthly downloads
March 2026 : Microsoft Agent Framework RC
Acronym Reference
| Acronym | Meaning |
|---|---|
| A2A | Agent-to-Agent Protocol |
| AAIF | Agentic AI Foundation |
| ACP | Agent Communication Protocol |
| ADK | Agent Development Kit (Google) |
| API | Application Programming Interface |
| CLI | Command-Line Interface |
| DIY | Do It Yourself |
| GA | General Availability |
| GCP | Google Cloud Platform |
| GUPP | Git Up, Pull, Push (Gastown handoff protocol) |
| gRPC | Google Remote Procedure Call |
| HTTP | Hypertext Transfer Protocol |
| IPC | Inter-Process Communication |
| LLM | Large Language Model |
| MCP | Model Context Protocol |
| OCR | Optical Character Recognition |
| PyPI | Python Package Index |
| RC | Release Candidate |
| REST | Representational State Transfer |
| SDK | Software Development Kit |
| SSO | Single Sign-On |