US Government Partners with Tech Giants for AI National Security Reviews
US announces deals with tech firms for national security review of AI models before release
The Guardian
The US government has formed partnerships with Google DeepMind, Microsoft, and xAI to review their AI models before public release. This initiative, led by the Center for AI Standards and Innovation (CAISI), aims to assess national security risks associated with advanced AI technologies, particularly in cybersecurity and biosecurity.
- The US government is collaborating with major tech firms to review AI models pre-release.
- The initiative is spearheaded by the Center for AI Standards and Innovation (CAISI).
- Focus areas include cybersecurity, biosecurity, and chemical weapons risks.
- Previous agreements with OpenAI and Anthropic covered similar evaluations.
- Concerns are rising about the potential dangers of powerful AI models.
The US government has announced agreements with Google DeepMind, Microsoft, and xAI to review early versions of their AI models before public release. The initiative, led by the Center for AI Standards and Innovation (CAISI) within the US Department of Commerce, aims to assess the capabilities of advanced AI technologies and their implications for national security. CAISI director Chris Fall emphasized the importance of rigorous measurement science in understanding frontier AI.

The agreements focus on identifying national security risks related to cybersecurity, biosecurity, and chemical weapons. CAISI has previously completed more than 40 evaluations of unreleased models; developers commonly share versions with reduced safety measures so that evaluators can assess risks more comprehensively.

As concerns grow over the potential dangers posed by powerful AI models, such as Anthropic's Mythos, these collaborations are seen as essential for ensuring public safety. Microsoft has also announced a similar agreement in the UK, reinforcing the need for collaborative efforts in AI testing for national security.
The initiative is intended to strengthen national security by mitigating risks from advanced AI technologies that could affect public safety and cybersecurity.


