AI Safety Partnership: UK and US Forge Alliance to Ensure Responsible AI Development

Written by: Molly-Anna MaQuirl | Posted: 05-04-2024

This is an AI-generated image created with Midjourney by Molly-Anna MaQuirl

On Monday, 1st April 2024, representatives from the UK and US governments made history when they signed a ‘Memorandum of Understanding’ (MOU), an agreement to work closely together on the emerging technology of Artificial Intelligence (AI). The landmark MOU solidifies a commitment to collaborate on developing tests for, and managing the risks posed by, AI. The partnership extends to advanced research, safety evaluations, and guidance for AI regulation that could inform future laws.

This latest AI news marks a significant advance on the commitments both governments made at the AI Safety Summit at Bletchley Park in November 2023. The deal is the first bilateral governmental agreement on AI safety and comes at a time when world leaders are seriously reflecting on the potential risks that come with AI. These span a wide spectrum of concerns, including the use of AI in cyber-attacks and deepfakes, copyright disputes, misinformation, and broader ethical risks.

What the Agreement Covers

With the goal of establishing a unified testing and safety framework, the agreement commits both countries to sharing the capabilities needed to analyze AI risks.

The organizations will perform joint testing exercises on publicly accessible AI models, drawing on the pooled expertise of the UK and US AI Safety Institutes, which will share knowledge and insights. The institutes also plan personnel exchanges to deepen collaboration.

The agreement takes effect immediately, wasting no time in the pursuit of a safe and secure approach to AI. Similar partnerships are under consideration around the globe as governments seek to promote safe practices across a fast-developing AI industry.

Upon signing the MOU, Michelle Donelan, the UK Secretary of State for Science, Innovation, and Technology, spoke about the deal's importance: “We have always been clear that ensuring the safe development of AI is a shared global issue. Only by working together can we address the technology’s risks head-on and harness its enormous potential to help us all live easier and healthier lives.”

AI governance is a complex issue, and while the EU has already introduced stringent laws, Donelan said the UK was reluctant to follow suit because of the technology's rapid, ever-changing evolution. Prime Minister Rishi Sunak has spoken of his desire for the UK to be at the forefront of the technology and to drive the new AI industry forward.

The AI Safety Organizations Involved

The UK and US governments have been quick to act on AI safety, most recently with the launch of the UK's AI Safety Institute (AISI) in late 2023 and the equivalent organization currently being established in the United States.

Both organizations are government-backed, and the AISI is chaired by tech entrepreneur Ian Hogarth. Researchers from the University of Oxford and Google DeepMind are also involved, and DeepMind is among the many groups providing their AI models for review by the safety organization. OpenAI, Microsoft, and Meta will also voluntarily share their models for analysis in the interest of public safety.

While some AI fears belong in the world of science fiction, this powerful technology does come with realistic dangers and potentially harmful uses, and this landmark memorandum showcases the commitment of two global superpowers to protecting their citizens.
