Brin’s “We definitely messed up”, at an AI “hackathon” event on 2 March, followed a slew of social media posts showing Gemini’s image generation tool depicting a variety of historical figures – including popes, founding fathers of the US and, most excruciatingly, German second world war soldiers – as people of colour.
The problem is that the training data is biased and these AIs pick up on biases extremely well and reinforce them.
For example, people of color tend to post fewer pictures of themselves on the internet, mostly because remaining anonymous is preferable to experiencing racism.
So, if you then have a journalistic picture, like from the food banks mentioned in the article, suddenly there will be relatively many people of color in it, compared to what the AI has seen in the rest of its training data.
As a result, it will store that one of the defining features of what a food bank looks like is that there are people of color there.
To try to combat these biases, the band-aid fix is to prefix your query with instructions to generate diverse pictures. As in, literally prefix. They’re simply putting words in your mouth (which is industry standard).
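To make that concrete, here’s a minimal sketch of that kind of prompt prefixing. The prefix wording and function name are made up for illustration; nobody outside Google knows what their actual injected instructions look like:

```python
# Hypothetical sketch of silent prompt prefixing. The prefix wording and
# names are invented; this is not Google's actual code.

DIVERSITY_PREFIX = (
    "Depict people of a diverse range of ethnicities, genders and ages. "
)

def build_image_prompt(user_prompt: str) -> str:
    """Prepend hidden instructions to the user's prompt before it reaches
    the image model, i.e. literally putting words in the user's mouth."""
    return DIVERSITY_PREFIX + user_prompt

print(build_image_prompt("a banker at their desk"))
# Depict people of a diverse range of ethnicities, genders and ages. a banker at their desk
```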
That is quite the bold statement. Source?
I don’t think I came up with that myself, but yeah, I’ve got nothing. It’s been multiple years since I read about it.
Maybe strike the “mostly”, but it seemed logical enough to me that this would be a factor, similar to how some women avoid revealing their gender (in certain contexts on the internet) to steer clear of sexual harassment.
For that last part, I can refer you to a woman who told me first-hand that she avoids voice chat in games because of that.
It fits their internal narrative
Just a guess
Is that really such a wild narrative? It felt rather benign to me.
Asserting that a particular behavior exists within a certain group, without evidence, and then asserting why that behavior exists, again without evidence, in support of an argument that those comments, freshly extracted from a colon, don’t actually relate to or prove?
Yeah, pretty fucking bold.
Sometimes you do want something specific. I can understand if someone just asks for a person x, y, z and then gets a broader selection of men, women, young, old, black or white. But if one asks for a middle-aged white man, I would not expect it to respond with a young Black woman just to have variety. I’d expect the other, non-stated variables to be varied. It’s like asking for a scene of specifically leafy green trees: I would not expect to see a whole lot of leafless trees.
Yeah, the problem with that is that there’s no logic behind it. To the AI, “white person” is just as white as “banker”. It only knows what a white person looks like because it’s been shown lots of pictures of white people that were labeled “white person”. Similarly, it’s been shown lots of pictures of white people that were labeled “banker”.
There is a way to fix that, which is to introduce some logic before the query is sent to the AI. It needs to be detected whether your query contains an explicit reference to skin color (or similar), and if so, the query prefix needs to be left out.
Where it gets wild is that you can ask the AI itself whether your query contains such explicit references to skin color, and it will genuinely do quite well at answering that correctly, because text processing is its core competence.
But then it will answer you with “Yes.” or “No.” or “Potato chips.”, and you still have to program the condition that leaves out the query prefix.
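Roughly, a minimal sketch of that gating logic could look like the following. The classifier prompt, the prefix, and `ask_llm` are stand-ins I made up; plug in whatever text-completion call you actually have:

```python
# Hypothetical gating logic: ask the model whether the prompt already
# specifies skin color/ethnicity, and only inject the diversity prefix
# when it doesn't. Everything here is an illustrative stand-in.
from typing import Callable

CLASSIFIER_PROMPT = (
    "Does the following image prompt explicitly mention skin color, "
    "ethnicity or race? Answer with exactly 'Yes' or 'No'.\n\nPrompt: {}"
)
DIVERSITY_PREFIX = (
    "Depict people of a diverse range of ethnicities, genders and ages. "
)

def build_image_prompt(user_prompt: str, ask_llm: Callable[[str], str]) -> str:
    answer = ask_llm(CLASSIFIER_PROMPT.format(user_prompt)).strip().lower()
    # The model may answer "Yes.", "No." or "Potato chips.", so only skip
    # the prefix on an unambiguous "yes".
    if answer.startswith("yes"):
        return user_prompt
    return DIVERSITY_PREFIX + user_prompt

# Dummy keyword "classifier" standing in for a real LLM call, just so the
# sketch runs end to end:
dummy_llm = lambda p: "Yes" if any(w in p.lower() for w in ("white", "black", "asian")) else "No"
print(build_image_prompt("a middle-aged white man", dummy_llm))  # left untouched
print(build_image_prompt("a banker at their desk", dummy_llm))   # prefix injected
```

In practice you’d use a real classification call instead of the keyword dummy, but the shape of the condition stays the same.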
Yes, it could be that, which may explain why the Nazi images came out like they did. But it sounded more to me like Google was deliberately forcing diversity into the images. Sometimes that does not make sense, though. For general requests, yes. Otherwise they might as well decide that grass should not always be green or brown, but sometimes blue or purple, just for variety.
Nah, in this case I think it’s a classic case of overcorrection and prompt manipulation. The bias you’re talking about is real, so to try to combat that, they and other AI companies manipulate your prompt before feeding it to the LLM. I’m very sure they are stripping out “white male” and/or subbing in different ethnicities to try to cover the bias.
TFW you accidentally leave the hidden diversity LoRA weight at 1.00.
And it is a band-aid fix, to be sure, because the festering band-aid will cause endless problems. It only took Gemini a few minutes to make a stink.
This whole problem is down to the laziness of refusing to properly curate the training data, and it opens the door for some seriously nefarious manipulation of the user.
At this point, I’d support legislation mandating that the training sets and the “system” data added to queries be open and auditable for commercial AI products.