The American giant Meta announced that it will identify “in the coming months” any image generated by artificial intelligence (AI) that appears on its social networks Facebook, Instagram and Threads.
“In the coming months we will label images that users post on Facebook, Instagram and Threads, as long as we can detect the indicators, in accordance with industry standards, that reveal that they are generated by AI,” announced Nick Clegg, Meta’s head of global affairs, in a blog post.
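The “indicators” Clegg refers to include provenance metadata that generators embed in image files, such as the IPTC `DigitalSourceType` value for AI-generated media. Meta’s actual detection pipeline is not public; as a rough illustration only, a naive check might scan a file’s raw bytes for that standard marker:

```python
# Illustrative sketch only -- not Meta's detector. Scans raw image bytes
# for the IPTC DigitalSourceType URI that marks fully AI-generated media.
AI_MARKERS = [
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
]

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if any known AI-provenance marker appears in the bytes."""
    return any(marker in image_bytes for marker in AI_MARKERS)

# Hypothetical example: a blob containing an XMP packet with the marker.
fake_xmp = (
    b"<x:xmpmeta><rdf:Description Iptc4xmpExt:DigitalSourceType="
    b"'http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia'"
    b"/></x:xmpmeta>"
)
print(looks_ai_generated(fake_xmp))        # True
print(looks_ai_generated(b"plain JPEG"))   # False
```

Such metadata is trivial to strip, which is part of why Clegg hedges below about “the limits of what technology currently allows.”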
The company already labels images generated with its own tool, Meta AI, which was launched in December.
From now on, “we want to be able to do the same with content created with tools from other companies,” such as Google, OpenAI, Microsoft, Adobe, Midjourney or Shutterstock, the Meta executive added.
“We are building this tool right now, and in the coming months we will begin applying labels in all the languages supported by each application,” he stressed.
The rise of generative AI raises fears that people will use these tools to sow political chaos, especially through misinformation.
Almost half of the world’s population will hold elections this year.
In addition to the political risk, many activists and regulators warn that generative AI programs could produce an uncontrollable flow of degrading content, such as pornographic images (“deepfakes”) of famous women, a phenomenon that also affects many anonymous people.
“Minimize”
For example, a fake image of the American star Taylor Swift was viewed 47 million times on the social network X in late January before being removed. According to the American press, the publication remained online for approximately 17 hours.
While Nick Clegg admits that this large-scale labeling, via invisible markers, “will not totally eliminate” the risk of false images, it “will certainly minimize” their proliferation “within the limits of what technology currently allows.”
“It’s not perfect, the technology is not yet fully developed, but of all the platforms it is the most advanced attempt yet to provide transparency in a meaningful way to billions of people around the world,” Clegg told AFP.
“I sincerely hope that by doing this and leading the way, we encourage the rest of the industry to work together and try to develop the common technical standards that we need,” added the Meta executive, who says he is willing to “share” the company’s open technology “as widely as possible.”

The Californian company OpenAI, creator of ChatGPT, also announced in mid-January the launch of tools to combat disinformation, and pledged not to allow its technology to be used for political purposes.
“We want to ensure that our technology is not used in a way that harms” the democratic process, explained OpenAI, noting that its DALL-E 3 image generator contains “safeguards” to prevent users from generating images of real people, especially candidates.
2024-02-15 07:53:32