
Meta: If you’re in our house running AI-massaged political ads, you need to ‘fess up


Meta will require advertisers to disclose whether their political ads on its platforms contain any AI-generated or digitally altered content.

The Facebook giant’s President of Global Affairs Nick Clegg – yeah, the guy who was once UK Deputy Prime Minister – announced the requirements on Tuesday, and argued they’re an extension of Meta’s existing stance on software-aided adjustments to content, a stance aimed at tackling the rise of synthetic media.

“While much of our approach has remained consistent for some time, we’re continually adapting to ensure we are on top of new challenges, including the use of AI,” reads a public notice attributed to Clegg. “Starting in the new year, advertisers will also have to disclose when they use AI or other digital techniques to create or alter a political or social issue ad in certain cases.”

Those cases cover ads that include:

  • AI-generated images, video, or audio depicting photorealistic people, fabricated speech, or events that never happened;
  • Images or videos altered using AI or other technologies to make it appear as if a real person said or did things they did not actually say or do;
  • Images or videos manipulated to misrepresent a real event that actually occurred.

The policies were introduced after, but not necessarily in direct response to, a letter sent to Meta supremo Mark Zuckerberg by US Senator Amy Klobuchar (D-MN) and House Representative Yvette Clarke (D-NY) in which the pair expressed concerns over political deepfakes. The politicians criticized the Instagram developer’s lack of transparency around its handling of AI-generated electoral adverts, and warned of “election-related misinformation and disinformation.”

Meta’s policies also now require advertisers to disclose the source of funding for their ad campaigns, with the information to be added as a “paid for by” disclaimer. The adverts will also be stored in Meta’s public Ad Library for seven years.

Finally, new political, electoral, or social issue ads will be blocked from running on Meta’s platforms during the final week of the 2024 United States presidential election. Ads that began running before this period will continue to be served.

“Our rationale for this restriction period remains the same as it has since 2020: in the final days of an election, we recognize there may not be enough time to contest new claims made in ads,” according to Clegg.

But Meta won’t apply the same rule for other polls being staged next year, despite Clegg stating the AI ad policies were introduced ahead of elections scheduled for 2024 in India, Indonesia, Mexico and the European Union.

Meta and other social networks infamously carry enormous quantities of untrue information, much of it allowed in the name of free speech. Your ill-informed aunt or racist brother spouting off baseless conspiracy theories is one thing, though. Synthetic media – content dressed up to look like real situations that never actually happened – is arguably quite another: material potentially convincing enough to mislead or manipulate voters.

Yes, Photoshopping and other image and footage editing have been around for years and years. What’s different here is that AI enables the mass production of propaganda and convincing-looking fakes. Or so critics say; you personally might not be quite so alarmed.

While Meta has acted to flag ads that use AI, the mega-corp’s overall stance on ML-made content is evolving.

Last month Meta’s Oversight Board, an independent panel of experts scrutinizing the super-biz’s social media platforms, considered a complaint about a video that claimed President Joe Biden is “a sick pedophile” and that people who had voted for him were “mentally unwell.”

The video altered existing footage of Biden visiting a polling booth with one of his granddaughters to make it look as if the President had touched the child’s chest inappropriately.

Despite a user complaining, the dodgy video was not removed. The Oversight Board decided to make the vid a test case for whether Meta’s policies adequately cover altered videos that “could mislead people into believing politicians have taken actions, outside of speech, that they have not.”

The Register has asked Meta for comment. ®


