UK spies will need to master artificial intelligence to counter hostile actors who “will undoubtedly seek to use [it] to attack the UK,” according to a new report from the Government Communications Headquarters (GCHQ).
According to the report, such attacks could take many forms, including the use of deepfakes to generate disinformation that could influence the electoral process, and the disruption of national infrastructure through the growing number of Internet of Things (IoT) devices.
Deepfake technology, in which artificial intelligence maps the facial features of one person onto an image of another, can be used to spread disinformation. It has not yet affected the electoral process to any noticeable extent, but political figures such as former US President Barack Obama have been featured in deepfake videos.
The UK’s critical national infrastructure also remains vulnerable to cyber attacks. In 2018, Ciaran Martin, the head of the UK’s National Cyber Security Centre (NCSC), said a major attack on the UK was a matter of “when, not if.” IoT devices could be hacked en masse to provide the computing power needed to target larger systems, or used as gateways to more sensitive information. Malware could also use artificial intelligence to disguise itself, the report says.
With regard to privacy, the report says the use of artificial intelligence could mean that less data is reviewed by humans, but it also points out that “the degree of intrusion is equivalent regardless of whether data is processed by an algorithm or a human operator.” The report adds that artificial intelligence systems need to be developed in a way that ordinary people can understand, so that they are able to assess the risks, including the “margins of error and uncertainty associated with a calculation.”
In fields such as counter-terrorism, the report’s authors say there is likely “limited value” in “predictive intelligence”: terrorist attacks do not happen often enough to provide large datasets, and the backgrounds and ideologies behind them are so varied that it is difficult to construct a reliable model. Instead, AI will be used to sift through large amounts of data, allowing humans, in theory, to make better judgments.
“Adoption of AI is not just important to help intelligence agencies manage the technical challenge of information overload. It is highly likely that malicious actors will use AI to attack the UK in numerous ways, and the intelligence community will need to develop new AI-based defence measures,” Alexander Babuta, one of the authors, told the BBC.