Open Source
OpenAI Goes Open: Releases 120B and 20B Reasoning Models Under Apache 2.0
The company once synonymous with closed models publishes full weights on Hugging Face and inference code on GitHub — the 120B achieves near-parity with o4-mini on reasoning benchmarks and runs on a single 80GB GPU.
OpenAI has made its most dramatic open-source move to date, releasing gpt-oss-120b and gpt-oss-20b, two open-weight reasoning models under the Apache 2.0 license, with full weights available on Hugging Face and inference code on GitHub. Thanks to MXFP4 quantization, the larger model fits on a single 80GB GPU (H100 or MI300X) while reaching near-parity with o4-mini on reasoning benchmarks.
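A back-of-the-envelope check makes the single-GPU claim plausible. MXFP4 is a microscaling format that stores each weight in 4 bits, with one shared 8-bit scale per 32-element block, for an average of 4.25 bits per parameter. The sketch below assumes a round figure of 120B parameters and counts weights only, not KV cache or activations:

```python
# Rough arithmetic (assumptions: ~120B parameters, MXFP4 at 4 bits per
# weight plus one shared 8-bit scale per 32-element block).
params = 120e9                  # ~120B parameters, round figure
bits_per_param = 4 + 8 / 32     # 4.25 effective bits under MXFP4
weight_gb = params * bits_per_param / 8 / 1e9

print(f"{weight_gb:.1f} GB")    # ~63.8 GB of weights
print(weight_gb < 80)           # True: fits on an 80GB accelerator
```

Roughly 64GB of weights leaves around 16GB of headroom on an 80GB card for the KV cache and activations, which is why the 120B model is deployable on one H100 or MI300X while a 16-bit version of the same model would need several.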
The release marks a strategic pivot for a company that built its brand on proprietary models. The open-source pressure from Meta’s Llama, Mistral, and Alibaba’s Qwen has fundamentally reshaped OpenAI’s competitive calculus. With the smaller 20B variant targeting edge deployments, the release directly challenges the value proposition of every closed-model API in the industry — including OpenAI’s own premium tiers.
The strategic logic is clear: by flooding the open-source ecosystem with competitive models, OpenAI can commoditize the inference layer while preserving its lead in the products and services built on top. It also pre-empts regulatory pressure for model transparency and gives enterprise customers who demand on-premise deployment a reason to stay in the OpenAI ecosystem rather than defecting to Llama or Qwen.