There often isn’t much lag between the moment a new technology goes live and the moment hackers begin trying to exploit it. Artificial intelligence is no exception. The hype around AI, and its adoption, have grown at explosive speed, so it is only natural to expect a corresponding burst of attacks targeting AI-specific vulnerabilities.
Data poisoning is one of a family of techniques known as ‘adversarial machine learning’ that attackers are beginning to use against AI. In a data-poisoning attack, the attacker inserts crafted samples into a model’s training set to influence what the model learns, biasing its predictions toward a specific outcome.
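To make the idea concrete, here is a minimal, hypothetical sketch of a poisoning attack (the article describes no specific implementation; the toy nearest-centroid classifier, the Gaussian data, and all numbers below are illustrative assumptions). The attacker injects mislabelled points that drag one class’s centroid across the decision boundary, degrading accuracy on clean test data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian clusters: class 0 around (-2, -2), class 1 around (+2, +2).
X0 = rng.normal(loc=-2.0, scale=1.0, size=(200, 2))
X1 = rng.normal(loc=+2.0, scale=1.0, size=(200, 2))
X_train = np.vstack([X0, X1])
y_train = np.array([0] * 200 + [1] * 200)

# Held-out test set drawn from the same clean distributions.
T0 = rng.normal(loc=-2.0, scale=1.0, size=(100, 2))
T1 = rng.normal(loc=+2.0, scale=1.0, size=(100, 2))
X_test = np.vstack([T0, T1])
y_test = np.array([0] * 100 + [1] * 100)

def fit_centroids(X, y):
    """Toy nearest-centroid 'model': one mean vector per class."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    # Assign each point to the class with the nearest centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

clean = fit_centroids(X_train, y_train)
clean_acc = (predict(clean, X_test) == y_test).mean()

# Poisoning step: the attacker appends 150 points labelled class 1 but
# placed deep inside class 0's region, pulling class 1's centroid
# across the boundary so clean class-0 inputs get misclassified.
poison = rng.normal(loc=-4.0, scale=0.5, size=(150, 2))
X_poisoned = np.vstack([X_train, poison])
y_poisoned = np.concatenate([y_train, np.ones(150, dtype=int)])

poisoned = fit_centroids(X_poisoned, y_poisoned)
poisoned_acc = (predict(poisoned, X_test) == y_test).mean()

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

A real attack would be subtler (poisoned points are usually crafted to evade data-quality checks), but the mechanism is the same: the model faithfully learns from whatever is in its training set, including data an adversary put there.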
But that is likely just the beginning. A report by security firm Adversa found that each of the 60 most commonly used ML models in the industry is susceptible to at least one vulnerability.
Alex Polyakov, CEO of Adversa, said that “unfortunately, our investigation shows that the AI industry is alarmingly unready for the wave of coming real-world attacks against AI systems. Public perception of how trustworthy AI is will be a core criterion determining whether societies and businesses will adopt AI for good or face another AI winter.”
Oliver Rochford, researcher and former Gartner analyst, said that “building trust in the security and safety of machine learning is crucial. We are asking people to put their faith in what is essentially a black box, and for the AI revolution to succeed, we must build trust. And we can’t bolt security on this time. We won’t have many chances at getting it right. The risks are too high – but so are the benefits.”