UK gov’t outlines AI risks in new report ahead of AI Safety Summit

Artificial intelligence poses a wide range of risks, including the possibility for AI tools to produce disinformation, disrupt certain sectors of the labor market, and magnify biases that exist within the data sets systems have been trained on, according to a new discussion paper from the UK government, published ahead of its AI Safety Summit next week.

The report doubles down on calls for a global consensus on tackling potential harms, and will be distributed to attendees of the summit with the aim of informing discussions and helping to build a shared global understanding of the risks posed by frontier AI, the government said in a statement released alongside the report.

The term frontier AI refers to highly capable foundation models that could exhibit dangerous capabilities.

The report consists of three parts, opening with an outline of the capabilities and risks of frontier AI, covering both the risks that are currently present and how those capabilities might advance in the future.

The second part looks at the safety and security risks of generative AI, while the last section focuses on what the Government Office for Science considers the key uncertainties in frontier AI, considering potential scenarios that could take place by 2030.

“There are a range of views in the scientific, expert and global communities about the risks in relation to the rapid progress in frontier AI, which is expected to continue to evolve at pace in the coming years,” the government said, adding that the document draws on various sources, including UK intelligence assessments.

On the more dramatic end of the scale, the report warned about the possibility of threat actors using AI to carry out cyberattacks, run disinformation campaigns, and design biological or chemical weapons.

“As time goes by and frontier AI gets used more extensively, new concerns and risks will inevitably emerge,” said Paul Henninger, head of connected technology at KPMG UK, adding that the more transformative the use of AI becomes, the more value it will create and the more real the risks become.

“We have time to prepare for the change to come but we need the work on frontier AI safety being done today to be suitable for tomorrow’s solutions. Next week’s AI Safety Summit will kick off this activity, but organizations will welcome regular updates to guidance as new use cases develop, and the technology evolves,” he said.

What can we expect from the upcoming AI Safety Summit?

The summit will see representatives from government and industry come together and attempt to develop a shared understanding of the technology’s risks and put forward a process for international collaboration, including how best to support national and international frameworks.

The UK government hopes attendees will be able to identify areas for potential collaboration on AI safety research and showcase how ensuring the safe development of AI will enable AI to be used for good globally.

Around 100 people are expected to be in attendance, including representatives from AI vendors such as OpenAI, Google DeepMind, and Anthropic, alongside officials from governments including those of France, Germany, Ireland, Italy, and the Netherlands. An invitation was extended to the Chinese government, and while it has been reported that the country intends to send a representative, further details have not yet been released.

Today, the US government confirmed Vice President Kamala Harris would be attending on behalf of President Joe Biden’s administration.

“AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought beyond us. But it also brings new dangers and new fears,” said Prime Minister Rishi Sunak in a speech this morning.

“The responsible thing for me to do is to address those fears head on, giving you the peace of mind that we will keep you safe, while making sure you and your children have all the opportunities for a better future that AI can bring,” he said.

Copyright © 2023 IDG Communications, Inc.
