Red Hat’s take on open-source AI: Pragmatism over utopian dreams

Open-source AI is changing everything people thought they knew about artificial intelligence. Just look at DeepSeek, the Chinese company whose open-source models blew the financial doors off the AI industry. Red Hat, the world’s leading Linux company, understands the power of open source and AI better than most.

Red Hat’s pragmatic approach to open-source AI reflects its decades-long commitment to open-source principles while grappling with the unique complexities of modern AI systems. Instead of chasing artificial general intelligence (AGI) dreams, Red Hat balances practical enterprise needs with what AI can deliver today. 

At the same time, Red Hat acknowledges the ambiguity surrounding “open-source AI.” At the Linux Foundation Members Summit in November 2024, Richard Fontana, Red Hat’s principal commercial counsel, highlighted that while traditional open-source software relies on accessible source code, AI introduces challenges with opaque training data and model weights. 

During a panel discussion, Fontana said, “What is the analog to [source code] for AI? That is not clear. Some people believe training data has to be open, but that’s highly impractical for LLMs [large language models]. It suggests open-source AI may be a utopian aim at this stage.” 

This tension is evident in models released under restrictive licenses yet labeled “open-source.” Meta’s Llama is one such faux open-source model, and Fontana criticizes the trend, noting that many of these licenses discriminate against fields of endeavor or groups of users while still claiming openness.

A core challenge is reconciling transparency with competitive and legal realities. While Red Hat advocates for openness, Fontana cautions against rigid definitions that require full disclosure of training data: detailed disclosure would make model creators targets in today’s litigious environment, and reliance on fair use of publicly available data further complicates transparency expectations.

Red Hat CTO Chris Wright emphasizes pragmatic steps toward reproducibility, advocating for open models like Granite LLMs and tools such as InstructLab, which enable community-driven fine-tuning. Wright writes: “InstructLab lets anyone contribute skills to models, making AI truly collaborative. It’s how open source won in software — now we’re doing it for AI.”

Wright frames this as an evolution of Red Hat’s Linux legacy: “Just as Linux standardized IT infrastructure, RHEL AI provides a foundation for enterprise AI — open, flexible, and hybrid by design.”

Red Hat envisions AI development mirroring open-source software’s collaborative ethos. Wright argues: “Models must be open-source artifacts. Sharing knowledge is Red Hat’s mission — this is how we avoid vendor lock-in and ensure AI benefits everyone.” 

That won’t be easy. Wright admits that “AI, especially the large language models driving generative AI, cannot be viewed in quite the same way as open source software. Unlike software, AI models principally consist of model weights, which are numerical parameters that determine how a model processes inputs, as well as the connections it makes between various data points. Trained model weights are the result of an extensive training process involving vast quantities of training data that are carefully prepared, mixed, and processed.”
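To make Wright’s point concrete, here is a minimal sketch of what a released model actually contains, assuming the Hugging Face transformers library and PyTorch; the Granite checkpoint id is illustrative, not an example Red Hat cites. Loading the weights reveals tensors of numbers rather than anything resembling readable source code.

```python
# Minimal sketch: what "model weights" look like in practice.
# Assumes the Hugging Face transformers library and PyTorch are installed;
# the checkpoint id below is an illustrative open Granite model.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "ibm-granite/granite-3.0-2b-instruct",  # illustrative checkpoint id
    torch_dtype=torch.float32,
)

# Unlike source code, the released artifact is billions of numerical parameters.
total_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total_params:,}")

# Peek at one weight tensor: there is nothing human-readable to inspect or edit.
name, tensor = next(iter(model.named_parameters()))
print(name, tuple(tensor.shape), tensor.flatten()[:5])
```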

Although models are not software, Wright continues:

“In some respects, they serve a similar function to code. It’s easy to draw the comparison that data is, or is analogous to, the source code of the model. Training data alone does not fit this role. The majority of improvements and enhancements to AI models now taking place in the community do not involve access to or manipulation of the original training data. Rather, they are the result of modifications to model weights or a process of fine-tuning, which can also serve to adjust model performance. Freedom to make those model improvements requires that the weights be released with all the permissions users receive under open-source licenses.”
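As a rough illustration of the weight-level improvement Wright describes, the sketch below fine-tunes an openly licensed model with LoRA adapters, assuming the transformers and peft libraries; the checkpoint id, target modules, and hyperparameters are illustrative, not Red Hat’s or InstructLab’s actual recipe.

```python
# Minimal sketch: improving a model by modifying its weights via fine-tuning,
# with no access to the original training data. Assumes the transformers and
# peft libraries; model id and hyperparameters are illustrative only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "ibm-granite/granite-3.0-2b-instruct"  # illustrative open checkpoint
)

# LoRA attaches small trainable matrices to the frozen base weights, so a
# contributor trains (and can share) only a tiny delta on top of the open model.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of the base model

# From here, a contributor would train on their own task data and publish the
# adapter weights under the permissions of the base model's open license.
```

The point is the asymmetry Wright highlights: meaningful community contributions flow through the released weights, not through re-running the original training.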

However, Fontana also warns against overreach in defining openness, advocating for minimal standards rather than utopian ideals. “The Open Source Definition (OSD) worked because it set a floor, not a ceiling. AI definitions should focus on licensing clarity first, not burden developers with impractical transparency mandates.”

This approach is similar to the Open Source Initiative (OSI)’s Open Source AI Definition (OSAID) 1.0, but it’s not the same thing. While the Mozilla Foundation, the OpenInfra Foundation, Bloomberg Engineering, and SUSE have endorsed the OSAID, Red Hat has yet to give the document its blessing. Instead, Wright says, “Our viewpoint to date is simply our take on what makes open-source AI achievable and accessible to the broadest set of communities, organizations, and vendors.” 

Wright concludes: “The future of AI is open, but it’s a journey. We’re tackling transparency, sustainability, and trust — one open-source project at a time.” Fontana’s cautionary perspective grounds that vision: open-source AI must respect competitive and legal realities, and the community should refine its definitions gradually rather than force-fit ideals onto immature technology.

The OSI, while focused on a definition, agrees that OSAID 1.0 is only a first, imperfect version, and the group is already working on its successor. In the meantime, Red Hat will continue shaping AI’s open future, building bridges between developer communities and enterprises while navigating the thorny ethics of AI transparency.
