
An artificial intelligence bot was able to dupe Medicaid.gov


Public feedback is a crucial element of shaping and carrying out state- and federal-level programs. Your responses, as a member of the public, inform whether these government agencies go forward with policy decisions. At least, that’s the idea public feedback is based on. But deepfake text manipulation — just like deepfake videos and photos — has the ability to dupe even the smartest of observers, Wired reports.

A Harvard medical student named Max Weiss proved this in 2019. At the time, Idaho was planning changes to its Medicaid program. It needed federal approval to do so, which required collecting public input through Medicaid.gov. The state’s call for public responses became Weiss’ science experiment: he used OpenAI’s GPT-2 language model to generate believable comments on the issue. Of the roughly 1,000 comments submitted to Medicaid.gov during that comment period, half came from Weiss’ bot. When he asked volunteers to differentiate between the real and the fake ones, Wired says the volunteers “did no better than random guessing.”
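Wired doesn’t publish Weiss’ exact pipeline, but the general technique is simple to reproduce. Here is a minimal sketch of GPT-2 text generation using the Hugging Face transformers library; the prompt is a hypothetical stand-in, not one of Weiss’ actual prompts.

```python
# Sketch of GPT-2 comment generation; NOT Weiss' actual pipeline.
# Requires: pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical prompt; the real experiment's prompts aren't public.
prompt = "I am writing to comment on Idaho's proposed Medicaid changes because"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample several continuations; sampling (rather than greedy decoding)
# keeps the generated comments varied enough to pass as distinct people.
outputs = model.generate(
    **inputs,
    max_length=80,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)

for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
    print("---")
```

Run repeatedly, a loop like this can produce hundreds of superficially distinct comments in minutes, which is what makes the technique so cheap to abuse.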

The nightmare of automated responses — With its sophisticated language model, Weiss’ bot created responses that had no problem sneaking under Medicaid.gov’s radar.

It’s not a particularly difficult undertaking. The model is trained on large amounts of human writing, absorbing its phrasing, grammar, and syntax. It then emulates that writing, generating its own text in real time. In response to being duped by artificial intelligence, the Centers for Medicare and Medicaid Services assured the public that the agency had implemented safeguards to block such manipulation.
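For illustration, here is what that train-then-emulate loop might look like in miniature, again with Hugging Face transformers. The sample comments, epoch count, and learning rate are all illustrative assumptions, not details from Weiss’ experiment.

```python
# Sketch of fine-tuning GPT-2 on example comments; hyperparameters
# and training data below are hypothetical.
# Requires: pip install transformers torch
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A handful of made-up "genuine" comments to fine-tune on.
samples = [
    "Please do not add work requirements to Medicaid; my family relies on it.",
    "I support expanding coverage so rural clinics can stay open.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):
    for text in samples:
        batch = tokenizer(text, return_tensors="pt")
        # Language-model objective: predict each next token of the comment.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

After a pass like this, generated text drifts toward the vocabulary and tone of the training comments, which is exactly what makes the output hard to screen.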

The need for manipulated text-detection tools — Image and text generation by artificial intelligence can be hit or miss. Sometimes the results are odd and creepy (like this bot that took captions and tried to create photos from them). Other times, these experiments lead to silly or cute results. But deepfake text manipulation opens up a host of security and privacy threats, not only for governments but for everyday internet users, too.

Automated text campaigns caused headaches for the federal government even before Weiss’ experiment. In 2017, the Federal Communications Commission found that more than a million comments submitted during its net neutrality proceeding weren’t real.

As these bots grow more advanced with intensive training, cybersecurity analysts will have to build manipulated text-detection tools that can tell real input from fake entries. In an era of political misinformation that has fueled mass polarization and conspiracy theories, these agencies can’t afford the pitfalls of failing to get out in front of this problem.
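One common heuristic for such tools is to score a comment’s perplexity under a language model: text generated by a similar model often reads as unusually predictable. Here is a minimal sketch, assuming GPT-2 as the scoring model; the cutoff value is a made-up assumption, not a tested threshold.

```python
# Sketch of a perplexity-based detection heuristic. Low perplexity
# means the text is "too predictable" to the model, a weak signal
# that it may be machine-generated. The threshold is hypothetical.
# Requires: pip install transformers torch
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    batch = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Cross-entropy of the model predicting its own input tokens.
        loss = model(**batch, labels=batch["input_ids"]).loss
    return float(torch.exp(loss))

comment = "I am writing to comment on Idaho's proposed Medicaid changes."
score = perplexity(comment)
print(f"perplexity: {score:.1f}")
if score < 20.0:  # hypothetical cutoff
    print("flag for human review: unusually model-like")
```

A heuristic like this is easy to evade and produces false positives, which is why detection remains an open research problem rather than a solved one.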


