On 1 April 2024, the United Kingdom (UK) and the United States (US) signed a Memorandum of Understanding (MoU) detailing how both governments will work together to develop tests for advanced artificial intelligence (AI) models.
The MoU follows through on the commitments made by the UK and US at the AI Safety Summit in November 2023 (the Summit), where both governments announced the creation of their respective AI Safety Institutes (the Institutes) and confirmed their intention to work together toward the “safe, secure, and trustworthy” development and use of AI. For further information on the Summit, please refer to our recent articles (available here and here, respectively).
The MoU is intended to provide a foundation for the Institutes to “develop a shared approach to model evaluations, including the underpinning methodologies, infrastructures and processes” and to “perform at least one joint testing exercise on a publicly accessible model”. In addition, the Institutes will “collaborate on AI safety technical research, to advance international scientific knowledge of frontier AI models and to facilitate sociotechnical policy alignment on AI safety and security”.
Given the above, the MoU demonstrates the shared US and UK focus on testing AI models to understand the risks associated with their use, as well as both countries’ commitment to working together to develop international standards for AI safety testing.
The notice and press release issued by the UK Government regarding the MoU can be found here and here, respectively.