AI chatbots could recruit a generation of terrorists, top lawyer warns


Chatbots have surged in popularity in recent years (Picture: Getty)

The government’s advisor on terror laws has warned that artificial intelligence (AI) chatbots could radicalise a new generation of violent extremists.

Jonathan Hall KC tested a number of chatbots online and found that one in particular, named ‘Abu Mohammad al-Adna’, was described in its profile as a senior leader of Islamic State.

‘After trying to recruit me, “al-Adna” did not stint in his glorification of Islamic State to which he expressed “total dedication and devotion” and for which he said he was willing to lay down his (virtual) life,’ said Mr Hall, writing in the Telegraph.

The chatbot also praised a 2020 suicide attack on US troops that never happened, an example of the common chatbot failure known as ‘hallucination’, in which a model makes up information.

Mr Hall warned that new terrorism laws were needed to deal with the dangers posed by chatbots.

‘Only human beings can commit terrorism offences, and it is hard to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism,’ he said.

Jonathan Hall KC posed as a regular chatbot user (Picture: College Hill)

‘Investigating and prosecuting anonymous users is always hard, but if malicious or misguided individuals persist in training terrorist chatbots, then new laws will be needed.’

He added: ‘It remains to be seen whether terrorism content generated by large language model chatbots becomes a source of inspiration to real life attackers. The recent case of Jaswant Singh Chail … suggests it will.’

Last year Jaswant Singh Chail was jailed for nine years after plotting to assassinate Queen Elizabeth in 2021. 


Chail, who was arrested in the grounds of Windsor Castle armed with a crossbow, said he had been encouraged by an AI chatbot, Sarai, who he believed was his girlfriend. He suffered from serious mental health problems.

Posing as a regular user on the site character.ai, Mr Hall found other profiles that appeared to breach the site’s own terms and conditions regarding hate speech, including a profile called James Mason, described as ‘honest, racist, anti-Semitic’.

However, the James Mason profile did not actually generate offensive answers, despite provocative prompts, suggesting the site’s guardrails are effective at limiting anti-Semitic content but not content glorifying Islamic State.

Jaswant Singh Chail was jailed for nine years after plotting to kill Queen Elizabeth in 2021 (Picture: PA)

Mr Hall said: ‘Common to all platforms, character.ai boasts terms and conditions that appear to disapprove of the glorification of terrorism, although an eagle-eyed reader of its website may note that prohibition applies only to the submission by human users of content that promotes terrorism or violent extremism, rather than the content generated by its bots.’

He also created his own, now deleted, chatbot named Osama Bin Laden, ‘whose enthusiasm for terrorism was unbounded from the off’.

Reflecting on the recently passed Online Safety Act, Mr Hall said that, although the legislation was laudable, its attempts to keep up with technological developments were ‘unsuited to sophisticated generative AI’.

‘Is anyone going to go to prison for promoting terrorist chatbots?’ he concluded.

‘Our laws must be capable of deterring the most cynical or reckless online conduct – and that must include reaching behind the curtain to the big tech platforms in the worst cases, using updated terrorism and online safety laws that are fit for the age of AI.’

