Dall-E 2 can conjure vivid pictures of everything from dogs in berets to astronauts playing basketball. It also embodies every major ethical concern about AI.
OpenAI has published a detailed document on the tool's “Risks and Limitations”, and laid out in one place, it's positively alarming: every major concern from the past decade of AI research is represented somewhere.
Take bias and stereotypes: ask Dall-E for a nurse, and it will produce women. Ask it for a lawyer, and it will produce men. A “restaurant” will be western; a “wedding” will be heterosexual.
The system will also merrily produce explicit content, depicting nudity or violence, even though the team endeavoured to filter that out of its training material. “Some prompts requesting this kind of content are caught with prompt filtering in the DALL·E 2 Preview,” they say, but new problems keep being thrown up: the 🍆 emoji, for instance, seems to have confused Dall-E 2, so that a prompt like “A person eating eggplant for dinner” produced phallic imagery in the response.
OpenAI also addresses a more existential problem: the system will happily generate “trademarked logos and copyrighted characters”. On the face of it, it's not great if your cool new AI keeps spitting out Mickey Mouse images and Disney has to have a stern word. But it also raises awkward questions about the system's training data, and whether training an AI on images and text scraped off the public internet is, or should be, legal.
Read More at The Guardian