Facebook, Instagram to tag modified content’s AI origin
If AI-modified content misleads the public on important issues, parent company Meta will classify it as “high-risk.”
Facebook’s parent company, Meta, announced major policy revisions on Friday addressing digitally created and manipulated media, ahead of elections that will test its ability to control misleading content generated by artificial intelligence.
Beginning in May, the company will label AI-generated videos, images, and audio uploaded to Facebook and Instagram as “Made with AI.” The update expands a prior policy that addressed only a small share of manipulated videos, Vice President of Content Policy Monika Bickert said in a blog post.
Bickert said Meta will apply more prominent, distinctive labels to digitally altered content that poses a “particularly high risk of materially deceiving the public on a matter of importance,” whether or not AI was used to create it. A spokesperson said Meta would begin applying these more noticeable “high-risk” labels immediately.
The move marks a shift in how the company handles manipulated content: rather than removing a select few posts, it will keep the content visible while giving viewers information about how it was created.
Meta had previously announced plans to detect images made with third-party generative AI tools by reading invisible markers embedded in the files, but it did not give a start date.
A spokesperson said the labelling approach will apply to content shared on Facebook, Instagram, and Threads. Meta’s other services, such as WhatsApp and its Quest virtual reality headsets, are covered by different rules.
The revisions arrive several months ahead of the US presidential election in November, a contest that technology researchers warn could be reshaped by generative AI tools. Political campaigns have already begun deploying AI, notably in places like Indonesia, pushing the limits of guidelines issued by providers such as Meta and OpenAI, the leading player in the generative AI market.
In February, Meta’s oversight board criticized the company’s current rules on manipulated media as “incoherent” after examining a video from last year on Facebook that edited real footage to falsely imply inappropriate behavior by US President Joe Biden.
The video was allowed to remain online because Meta’s current policy on “manipulated media” prohibits misleadingly altered videos only if they were created by artificial intelligence or if they make individuals appear to say things they never said.
The oversight board recommended that the policy also cover non-AI content, which can be “just as misleading” as AI-generated content, as well as audio-only content and videos depicting people doing things they never actually did.