Stable Attribution: A New Tool Could Ruin Generative AI, Or It Could Save It

Jon Stokes takes a closer look at how artists have responded to having their work used to train an AI that can then produce output in their style without their involvement, consent, or compensation.

What is to be done? Many people have thoughts and feelings, and some people are now writing software to “fix” the problem. With big changes, Stable Attribution might conceivably play a role in an eventual path forward. But for now, it’s an all-out war.

There’s so much to talk about just with this latest development that, in order to bring you up to speed on the history of this story, I’ll have to content myself with this very brief summary of events:

1. Stable Diffusion launches and goes viral after having been trained on the LAION-5B dataset, which contains many images scraped from ArtStation and other online art venues.

2. Many artists are really, really mad about this because nobody asked them if an AI could be trained on their work. Also, the AI is shockingly good at producing new works in their style.

3. The artists protest in various ways. One of the protests was hilariously ineffective and premised on a complete misunderstanding of how ML actually works. (They thought that by replacing their art on ArtStation with special protest art, Stable Diffusion users would suddenly start getting those protest images out of the model. I am not making this up.)

4. Online art services tried various ways to calm the situation down.

5. Stability AI announces that Stable Diffusion 3 will include an artist opt-out function: artists can go to a website and tell Stability that they don’t want their work used to train future models.

6. There’s a lawsuit against Stability AI that (falsely) characterizes the software as a digital “collage” tool and argues that every image Stable Diffusion outputs is legally a derivative work of the images in the training dataset. If the suit succeeds, anyone using SD images commercially would owe a lot of people a lot of money.

7. Stable Attribution launches with the shocking claim that if you upload an SD-generated image to it, then it can tell you which images in the dataset the model used to make that image.


Read the rest at