According to a new report, the White House recognizes that open source is key to artificial intelligence (AI) development, much as many businesses using the technology already do.
On Tuesday, the National Telecommunications and Information Administration (NTIA) issued a report supporting open-source and open models to promote innovation in AI, while emphasizing the need for vigilant risk monitoring.
The report recommends that the US continue to support AI openness and build new capabilities to monitor potential AI risks, rather than restricting the availability of open model weights.
Also: Switzerland's federal government requires releasing its software as open source
According to the NTIA report, open AI models offer several key benefits:
- Broader accessibility: “Open-weight” models allow developers to build upon and adapt previous work, making AI tools more accessible to small companies, researchers, nonprofits, and individuals.
- Innovation promotion: The openness of AI systems affects competition and innovation in these revolutionary tools. By embracing openness, the report aims to provide a roadmap for responsible AI innovation and American leadership.
- Accelerated development: Open models may accelerate the diffusion of AI’s benefits and the pace of AI safety research.
- Democratization of AI: Open models broaden the availability of AI tools, potentially democratizing access to powerful AI capabilities across various sectors and user groups.
- Transparency and understanding: Open models can contribute to a broader understanding of AI systems, a crucial factor for effective and reliable development.
- Economic benefits: The wide availability of US-developed open foundation models can serve the national interest by promoting innovation and competitiveness.
- Research advancement: Open models facilitate academic research on the internals of AI models, enabling deeper study and improvement of the technology.
- Local deployment: Open weights allow users and organizations to run models locally on their edge devices, which can benefit certain applications and use cases (see the sketch after this list).
- Customization: Open models enable creative modifications to suit specific user needs and applications.
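To make the "local deployment" point concrete, here is a minimal sketch of what running an open-weight model on your own hardware can look like, using the widely used Hugging Face transformers library. The model name is illustrative, not taken from the NTIA report; any open-weight checkpoint the user is licensed to download would work the same way.

```python
# Minimal sketch: download open weights once, then run inference locally
# with no hosted API in the loop. Model name is a hypothetical example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative open-weight checkpoint

# Weights are fetched once and cached on disk; subsequent runs are fully local.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Summarize the benefits of open-weight AI models in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights live on the user's own machine, they can also be fine-tuned or modified, which is what the report's "customization" benefit refers to.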
These conclusions were informed by responses from government employees, industry leaders, and individuals to a Request for Comment on AI model issues. For example, the Electronic Privacy Information Center (EPIC) recommended the NTIA balance the advantages, disadvantages, and regulatory hurdles of AI models along the entire spectrum of openness.
GitHub agrees — the company believes in using open source and open weights for AI while considering the evidence of possible harm alongside the benefits.
As Open Source Initiative (OSI) Executive Director Stefano Maffulli, who is working on defining open-source AI, said in a comment to ZDNET, “It’s gratifying to see the input we provided during the comment period reflected in the report. OSI believes that marginal risk is the appropriate framework to evaluate the risks of open models.”
“Furthermore, we’ve encouraged a cautious regulatory hand, and the report adopts a monitoring framework to inform ongoing assessments and possible policy action. In short, address bad actors and bad behavior,” he added.
US Secretary of Commerce Gina Raimondo stated that the report provides a roadmap for responsible AI innovation and American leadership by embracing openness. She emphasized that the Biden-Harris Administration is leveraging all available resources to maximize AI’s potential while minimizing its risks.
Also: Meta inches toward open source AI with new LLaMA 3.1
In the release, Alan Davidson, NTIA administrator and assistant secretary of commerce for communications and information, highlighted the importance of open AI systems in affecting “competition, innovation, and risks in these revolutionary tools.” He stressed the government’s crucial role in supporting AI development while building capacity to understand and address new risks.
The report calls for the government to establish an ongoing program to collect evidence of AI risks and benefits, evaluate that evidence, and then act on it, including restricting the availability of model weights if warranted.
As the Federal Trade Commission (FTC) warned earlier this month, “certain open-weight models have already enabled concrete harms, particularly in the area of nonconsensual intimate imagery and child sexual abuse material (CSAM).” In short, AI has already demonstrated both its promise and its capacity for harm.
The NTIA’s recommendations aim to promote innovation and access to AI technology, while positioning the US government to respond quickly to risks that may arise from future models.