OpenAI has released two new AI models, marking its first open-weight release since GPT-2 more than five years ago. The newly launched models, GPT-OSS-120b and GPT-OSS-20b, are available for download on Hugging Face under the permissive Apache 2.0 licence, making them accessible to developers and businesses alike. The release comes as OpenAI prepares to launch GPT-5.
OpenAI’s new open models, GPT-OSS-120b and GPT-OSS-20b
The two models cater to different use cases: the larger GPT-OSS-120b is designed to run on a single Nvidia GPU, while the leaner GPT-OSS-20b can run on consumer-grade laptops with 16GB of RAM. Both are text-only and lack multimodal capabilities such as image or audio processing.
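For readers who want to try the smaller model locally, a minimal sketch using the Hugging Face transformers library might look like the following; the repo id "openai/gpt-oss-20b" is an assumption for illustration, not an official instruction.

    # Minimal sketch: load the 20b open-weight model from Hugging Face and
    # generate text. The repo id below is assumed for illustration.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "openai/gpt-oss-20b"  # assumed Hugging Face repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # use the checkpoint's native precision
        device_map="auto",    # place weights on GPU if available, else CPU
    )

    prompt = "Summarise what a mixture-of-experts model is in one sentence."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))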
OpenAI says the models are tailored for agentic tasks and include support for advanced reasoning workflows. While the open models cannot handle complex inputs such as images, they can route queries to OpenAI’s more capable closed models through cloud APIs, effectively acting as smart intermediaries.
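One way such a hand-off could look in practice is sketched below: text-only requests are answered by the local open model, while requests the local model cannot handle (an attached image, say) are forwarded to a hosted OpenAI model over the cloud API. The routing rule, the helper names, and the hosted model id are illustrative assumptions, not a documented mechanism.

    # Hypothetical routing sketch: answer locally when possible, otherwise
    # forward to a hosted OpenAI model via the cloud API.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def run_local_gpt_oss(query: str) -> str:
        # Stand-in for local inference with the downloaded GPT-OSS weights
        # (see the transformers sketch above).
        return f"[local GPT-OSS answer to: {query}]"

    def answer(query: str, has_image: bool = False) -> str:
        if has_image:
            # The open models are text-only, so image-bearing queries are
            # routed to a closed hosted model (model id assumed).
            response = client.chat.completions.create(
                model="o4-mini",
                messages=[{"role": "user", "content": query}],
            )
            return response.choices[0].message.content
        return run_local_gpt_oss(query)

    print(answer("Summarise today's AI news.", has_image=False))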
New Open Models Built on MoE Architecture
The models rely on a Mixture-of-Experts (MoE) architecture, which allows them to activate only a small subset of parameters per token, around 5.1 billion for the 120b model, improving efficiency and responsiveness. A post-training process involving high-compute reinforcement learning enhances their reasoning abilities, aligning them with OpenAI’s o-series of frontier models.
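To illustrate the idea (not OpenAI’s actual implementation), the toy layer below routes each token to a small number of experts, so only a fraction of the layer’s parameters is used per token; all sizes are made up.

    # Toy top-k mixture-of-experts layer: a router scores the experts for each
    # token and only the best-scoring few are evaluated. Purely illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TopKMoE(nn.Module):
        def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
            super().__init__()
            self.router = nn.Linear(d_model, num_experts)
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
                for _ in range(num_experts)
            )
            self.top_k = top_k

        def forward(self, x):  # x: (num_tokens, d_model)
            scores = self.router(x)                          # (num_tokens, num_experts)
            weights, idx = scores.topk(self.top_k, dim=-1)   # best experts per token
            weights = F.softmax(weights, dim=-1)
            out = torch.zeros_like(x)
            for slot in range(self.top_k):
                for e, expert in enumerate(self.experts):
                    mask = idx[:, slot] == e                 # tokens sent to expert e
                    if mask.any():
                        out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
            return out

    tokens = torch.randn(10, 64)
    print(TopKMoE()(tokens).shape)  # torch.Size([10, 64])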
OpenAI claims its open models set a new benchmark in the open-weight category. On Codeforces, a widely used competitive programming benchmark, GPT-OSS-120b scored 2622 and the smaller 20b model scored 2516, both outperforming DeepSeek’s R1 but trailing OpenAI’s o3 and o4-mini.