FTC asks normal folks if they’d like AI impersonation scam protection, too

The FTC is moving to make the fraudulent AI impersonation of government agencies and businesses illegal, and is now asking the American public whether they'd like some protection too.

The US consumer watchdog announced as much on Thursday, alongside the introduction of a final rule that will give the Commission the ability to directly file federal lawsuits against AI impersonation scammers who target businesses and government agencies. The changes will also make it possible for the agency to target the makers of the code used in such scams more quickly.

The initial proposal doesn’t cover the impersonation of private individuals, however. So the FTC is releasing this [PDF] supplemental notice asking for public comment on whether they should be covered by the new rules as well.

“Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale,” said FTC chair Lina Khan. “With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever.” 

Beyond simply making it illegal to impersonate another individual to commit fraud, the proposal also includes a provision to hold businesses accountable for misuse of technology they create.  

The so-called "means and instrumentalities" provision in the proposal would let the FTC hold accountable companies that create AI tech capable of being used to impersonate people, if they "had reason to know that the goods and services they provided will be used for the purpose of impersonations," the FTC said.

Despite the provision to hold developers accountable for misuse of their tech, it's not clear who could be prosecuted, or to what extent.

According to the proposal, it would be illegal for a scammer to call or message a person while posing as another individual, send physical mail misrepresenting an affiliation, create a website, social media profile, or email address impersonating a person, or place ads that pose as a person or their affiliates.

Whether the orgs transmitting fraudulent messages could be held liable, along with companies that facilitate the creation of AI voices and video, isn't clear. We've asked the FTC for clarification, but haven't heard back.

The FCC made its own moves to combat AI impersonation earlier this month, deciding that it is illegal to use AI-generated voices in robocalls. Unlike the newly proposed FTC rule, the FCC simply clarified that existing telephone consumer protection laws cover the use of AI-generated voices. ®
