US to launch AI Safety Institute
The United States is launching an AI Safety Institute to assess risks posed by advanced artificial intelligence models. Secretary of Commerce Gina Raimondo announced the plan during the AI Safety Summit in Britain, emphasizing the need to form a consortium with academia and industry and calling private-sector involvement crucial to the institute's success.
Partnership with UK Safety Institute
Secretary Raimondo also pledged to formalize a partnership between the US AI Safety Institute and the United Kingdom's AI Safety Institute. The new institute will operate under the National Institute of Standards and Technology (NIST) and lead the US government's efforts on AI safety, particularly the evaluation of advanced AI models.
The institute's primary responsibilities include developing safety, security, and testing standards for AI models, authenticating AI-generated content, and providing testing environments for researchers to assess emerging AI risks and known impacts.

This initiative aligns with President Joe Biden's recent executive order, which mandates that developers of AI systems posing potential risks to national security, the economy, public health, or safety share safety test results with the US government before public release. The order also instructs agencies to establish testing standards and address related risks, including chemical, biological, radiological, nuclear, and cybersecurity concerns.