The Glaze program, developed at the University of Chicago, adds markings to artists' works that confuse AI models attempting to reuse them.
Faced with wholesale data exploitation by some artificial intelligence (AI) developers, artists are deliberately booby-trapping their creations to render them unusable, with the help of university researchers. Paloma McClain is an American illustrator. Several generative AI tools can already create images in her style, even though she never gave her consent and derives no economic benefit from it. "It bothered me," explains the illustrator, based in Houston, Texas. "I'm not a famous artist, but I was uncomfortable with the idea of my work being used to train an artificial intelligence model." To remedy this, she ran her works through Glaze, a program that adds pixels to her illustrations, invisible to the human eye, to disrupt the work of the artificial intelligence.
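The idea of an invisible change can be sketched in a few lines. Glaze's real cloak is computed adversarially against the feature extractors of image models; the toy function below (names and parameters are illustrative, not Glaze's) only shows the underlying notion of a per-pixel perturbation too small for the eye but present in the data.

```python
import numpy as np

def cloak_image(pixels: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Add a small pseudo-random perturbation (at most `epsilon` per channel)
    to an 8-bit RGB image. A real style cloak is optimized against a model's
    features; this sketch only bounds the change so it stays imperceptible."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=pixels.shape)
    cloaked = np.clip(pixels.astype(np.float64) + noise, 0, 255)
    return cloaked.astype(np.uint8)

image = np.full((4, 4, 3), 128, dtype=np.uint8)  # a flat grey test image
cloaked = cloak_image(image)
# The perturbation is bounded: no channel moves by more than ~epsilon.
assert np.abs(cloaked.astype(int) - image.astype(int)).max() <= 2
```

A viewer comparing the two images would see no difference, but a model training on the cloaked pixels receives subtly shifted data.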
After this processing, the images the AI produces are blurry, with distorted faces, bearing no comparison to the originals. "We are trying to provide technological tools to protect human creators from the abuse of generative AI models," says Ben Zhao, a University of Chicago researcher whose team created Glaze.
“Serious problem”
Alerted in November 2022, the computer science professor developed the software in just four months, building on previous work intended to disrupt facial recognition. "We worked at full speed because we knew the problem was serious," Ben Zhao says. "A lot of people were suffering."
Generative AI giants have signed agreements to secure rights to certain content, but the vast majority of the data, images, text and sounds used to develop their models were collected without explicit consent. Since its launch, Glaze has been downloaded more than 1.6 million times, according to the researcher, whose team is preparing to launch a new program called Nightshade.
Nightshade targets the plain-language queries (prompts) that users of a generative AI model send to obtain a new image. The aim is to derail the algorithm so that it offers, for example, a picture of a cat when a dog was requested.
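The poisoning effect described here can be illustrated with a toy dataset. The structures below are hypothetical stand-ins (Nightshade actually perturbs the images themselves, not the captions); the sketch only shows how a fraction of mismatched caption-image pairs teaches a model the wrong association.

```python
# Toy illustration of data poisoning: a scraper pairs captions with the
# images it collects. A poisoned sample deliberately mismatches the two,
# so a model trained on enough such pairs starts linking the word "dog"
# to cat features. (Hypothetical structures, not Nightshade's pipeline.)
from dataclasses import dataclass

@dataclass
class TrainingSample:
    caption: str
    image_label: str  # stands in for real pixel data

clean = [TrainingSample("a photo of a dog", "dog") for _ in range(90)]
poisoned = [TrainingSample("a photo of a dog", "cat") for _ in range(10)]
dataset = clean + poisoned

mismatched = sum(1 for s in dataset if s.image_label not in s.caption)
print(f"{mismatched}/{len(dataset)} samples teach the wrong association")
```

Because the scraper cannot tell poisoned pairs from clean ones, the corruption only surfaces later, as degraded or swapped outputs.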
Another initiative comes from the start-up Spawning, which has developed Kudurru, software that detects mass-collection attempts on image platforms. The artist can then choose to block access to their work or to send an image other than the one requested, "which amounts to poisoning the AI model being developed and affecting its reliability," explains Jordan Meyer, co-founder of Spawning. More than a thousand websites are already integrated into the Kudurru network.
Spawning has also created Have I Been Trained? (haveibeentrained.com), a site that checks whether images have been fed into an AI model and gives their owner the ability to protect them from future unauthorized use. Beyond images, researchers at Washington University in St. Louis (Missouri) have turned to sound, developing AntiFake.
AntiFake enriches an audio file with additional noise, imperceptible to the human ear, that makes a believable imitation of a human voice impossible, explains Zhiyuan Yu, the doctoral student behind the project. The program aims in particular to prevent "deepfakes," hyper-realistic audio or video montages that exploit the appearance or voice of a person, often a celebrity, to make them appear to do or say something.
According to Zhiyuan Yu, the team, supervised by Professor Ning Zhang, was recently contacted by the producers of a successful podcast who wanted to protect it from misappropriation. Although it has so far only been used on spoken language, AntiFake could also protect singers' voices, believes the researcher, whose software is freely available.
Ben Zhao's unit has been approached by "several companies looking to use Nightshade to preserve their images and intellectual property," according to the Chicago academic. He has no objection to large companies also using his program.
"The goal is for people to be able to protect their content, whether they are individual artists or companies with a lot of intellectual property," says Ben Zhao. In Spawning's case, the idea is not only to hinder collection but also, in a second step, to "allow people to organize themselves to sell their data for a fee," specifies Jordan Meyer, who announced the launch of a platform for early 2024. "The best solution," in his opinion, "would be a world where all data used for AI is subject to consent and payment. We hope to push developers in this direction."