Meta labels AI-generated content to prevent misinformation ahead of the US Presidential election

Technology company Meta announced it will label artificial intelligence (AI)-generated media in an effort to make content more transparent amid the increasing risk of false information produced with deepfake technology.

Facebook and Instagram label content generated by artificial intelligence

Meta, the owner of Facebook and Instagram, has announced major changes to its policies on content created and altered using artificial intelligence technology.

Accordingly, from next May, videos, images and audio created with artificial intelligence and posted on Facebook and Instagram will be labeled “Made with AI”.

Ms. Monika Bickert, Meta’s Vice President of Content Policy, added that the company will go beyond a narrow set of edited videos: it will also apply “High-risk” labels to digitally altered content that poses “a particularly high risk of misleading the public about a material matter,” regardless of whether the content was created with artificial intelligence or other tools.

Meta said the changes were shaped through consultation with civil society organizations, academics, intergovernmental organizations and other experts, as well as public opinion research involving more than 23,000 people across 13 countries. The majority of participants (82%) supported warning labels for misleading content generated by artificial intelligence.

The new rules change the way Meta handles “manipulated” content, shifting from removing a limited set of posts to keeping content up while giving viewers information about how it was created.

This gives users more information about the content, helping them make better judgments and providing context if they encounter similar content elsewhere.

A company spokesperson said the labeling rules will apply to content posted on Facebook, Instagram and Threads. Other services, including WhatsApp and Quest virtual reality headsets, will be governed by separate rules.

Efforts to identify misleading content generated by artificial intelligence and address its potential risks

This is one of tech giant Meta’s efforts to address the increasingly worrying issue of the spread of artificial intelligence-generated content and the potential risks to the public.


The changes come as technology researchers warn that a great deal of content on social networks could be altered with artificial intelligence ahead of the US presidential election this coming November.

Earlier this year, a political consultant placed large-scale automated calls using an AI-generated voice simulating President Joe Biden, urging people in the state of New Hampshire (USA) not to vote in the primary election.

Previously, Meta announced plans to detect and mark images created with other companies’ generative artificial intelligence tools using invisible markers embedded in the files, but it has not yet announced when that plan will take effect.

Besides Meta, Google’s YouTube last month began requiring creators to disclose videos edited with artificial intelligence, such as realistic-looking footage of actual events or locations.

The music video and social network platform TikTok is also working to identify content produced with artificial intelligence. Last year, TikTok said it would launch a tool to help creators label edited content, while banning deepfakes: AI-generated videos, images or audio designed to mislead viewers about real events or people.
