Why OpenAI’s open-source models matter

Aug 7, 2025 - 17:56

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

Why OpenAI’s new open-weight models matter

OpenAI is opening up again.

The company’s release of two “open-weight” models—gpt-oss-120b and gpt-oss-20b—this month marks a major shift from its 2019 pivot away from transparency, when it began keeping its most advanced research under wraps after a breakthrough in model scaling and compute. Now, with GPT-5 on the horizon, OpenAI is signaling a return—at least in part—to its original ethos.

These new models come with all their internal weights exposed, meaning developers can inspect and fine-tune how they work. That doesn’t make them “open-source” in the strictest sense—the training data and source code remain closed—but it does make them more accessible and adaptable than anything OpenAI has offered in years.

The move matters, not just because of the models themselves, but because of who’s behind them. OpenAI is still the dominant force in generative AI, with ChatGPT as its flagship consumer product. When a leader of that stature starts releasing open models, it sends a signal across the industry. “Open models are here to stay,” says Anyscale cofounder Robert Nishihara. “Now that OpenAI is competing on this front, the landscape will get much more competitive and we can expect to see better open models.”

Enterprises—especially ones in regulated industries like healthcare or finance—like to build on open models so that they can tailor them to their needs and run them on in-house servers or in private clouds, rather than taking on the high cost and security risks of sending their (possibly sensitive or proprietary) data out to a third-party LLM such as OpenAI’s GPT-4.5, Anthropic’s Claude, or Google’s Gemini. OpenAI’s gpt-oss models are licensed under Apache 2.0, meaning developers can use, modify, and even commercialize them, as long as they credit OpenAI and waive any patent claims.

None of that would matter if the models weren’t state of the art, but they are. The larger gpt-oss-120b (120 billion parameters) model matches OpenAI’s o4-mini on core reasoning benchmarks while running on a single graphics processing unit (GPU), OpenAI says. The smaller gpt-oss-20b model performs on par with the company’s o3-mini, and is compact enough to run on edge devices with just 16 GB of memory (like a high-end laptop). 
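For developers who want to see what that means in practice, here is a minimal sketch of running the smaller model locally with standard open-source tooling. It assumes the weights are published on Hugging Face under the ID openai/gpt-oss-20b and that a recent version of the transformers library supports the architecture; treat the model ID and memory footprint as assumptions, not guarantees.

```python
# Minimal sketch: running the smaller open-weight model locally.
# Assumes the weights are published on Hugging Face as "openai/gpt-oss-20b"
# and that a recent transformers release supports the architecture.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed Hugging Face model ID
    torch_dtype="auto",          # let the library pick an appropriate precision
    device_map="auto",           # spread the weights across available GPU/CPU memory
)

messages = [
    {"role": "user", "content": "In two sentences, what is an open-weight model?"},
]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"])
```

The device_map setting is what lets a model of this size fit onto a single high-memory GPU or a well-equipped laptop, in line with OpenAI’s 16 GB claim; anything larger, like gpt-oss-120b, would need beefier hardware.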

That small size matters a lot. Many in the industry believe that small models running on personal devices could be the wave of the future. On-device models, after all, don’t have to connect to the cloud to process data, so they are more secure and can keep data private more easily. Small models are also often trained to do a relatively narrow task (like quality inspection in a factory or language translation on a phone).

The release could also accelerate the broader ecosystem of open AI infrastructure. “The more popular open models become, the more important open-source infrastructure for deploying those models becomes,” Nishihara says. “We’re seeing the rise of open models complemented by the emergence of high-quality open-source infrastructure for training and serving those models—this includes projects like Ray and vLLM.”

There’s also a geopolitical subtext.  The Trump administration has increasingly framed AI as a strategic asset in its rivalry with China, pushing American companies to shape global norms and infrastructure. Open-weight models from a top U.S. lab—built to run on Nvidia chips—could spread quickly across regions like Africa and the Middle East, countering the rise of free Chinese models tuned for Huawei hardware. It’s a soft-power play, not unlike the U.S. dollar’s dominance as a global currency.

Google’s new Genie 3 world model could enable wild new forms of gaming and entertainment

With the right prompt, AI models can generate words, voices, music, images, video, and more. And the quality of those generations continues to improve. Google DeepMind has pushed the boundaries even further with its “world models,” capable of generating live, interactive environments that users can navigate and modify in real time. Words alone don’t fully capture the capabilities of DeepMind’s new Genie 3 model. A demo video shows a number of lifelike worlds (a desert, a scuba diving scene, a living room, a canal city, and so on). At one point, the user adds a whimsical element to the “canal city” world by writing the prompt: “A man in a chicken suit emerges from the left of the shot and runs down the towpath hugging the wall.” And the man in the chicken suit immediately appears in the world. Then, the user drops a dinosaur into the nearby canal. Splash.

The most obvious application of this kind of AI is in gaming, where a model could generate an endless stream of environments and game scenarios for the gamer. It’s a natural focus for DeepMind, which centered much of its early AI research on video game environments. The potential for world modeling is enormous. Future versions of the Genie model could enable “choose your adventure” experiences in video or AR formats, where storytelling adapts dynamically to the viewer’s preferences, interests, and impulses. As Google notes, companies working on self-driving cars or robotics could also benefit, using these models to simulate real-world conditions that would be costly or impractical to recreate physically.

The AI industry responds to AI tool abuse by students

As the new school year approaches, educators and parents continue to worry that students are using AI tools to do their schoolwork for them. The danger is that students can rely heavily on AI to generate answers to questions, while failing to learn all the contextual stuff they would encounter during the process of finding answers on their own. A growing body of research suggests that relying on AI harms overall academic performance. Now OpenAI and Google have each responded to this worrisome situation by releasing special “study modes” inside their respective AI chatbots. 

OpenAI’s tool is called ChatGPT “study mode,” while Google offers a similar feature within its Gemini chatbot called Guided Learning. The tools’ format and features seem remarkably similar. Both break down complex problems into smaller chunks and then walk the student through them using a question-and-answer approach. Google says its questions are designed to teach students the “how” and “why” behind a topic, encouraging learning throughout the exchange. OpenAI says its tool uses “Socratic questioning, hints, and self-reflection prompts to guide understanding and promote active learning.” Both OpenAI and Google say that the teaching approach and format are based on research by learning experts. 

Still, the student is ultimately in control of what AI tools they use. OpenAI says that users can easily toggle between regular chatbot mode and the study mode. Google says it believes students need AI for both traditional question searches and for guided study. So these new learning tools may provide an alternative mode of learning using AI, but they’re not likely to significantly shift the argument around AI’s threat to real learning.
