Cross-posted from: https://feddit.de/post/5357539
Original link: https://www.theguardian.com/technology/2023/nov/02/whatsapps-ai-palestine-kids-gun-gaza-bias-israel
Plenty of actual photographs exist of Palestinian children wielding rifles and wearing Hamas headbands. Perhaps the AI was just trained on those images as well?
By that logic, I demand stickers portraying obesity, respiratory issues and heart disease when I search “American”. Preferably with each character having a fat hamburger shoved in their face.
“American” can also be interpreted as the adjective, not just the people. So you mostly find flags, eagles and the Statue of Liberty.
You have to search for “average American” to get what you’re looking for.
Why would you demand a negative thing for another group to counter a negative thing for one group? That makes no sense.
But also, “American children” has plenty of cultural material to build an image from. Some of it probably involves obesity and junk food, but a good portion most likely shows something else. In contrast, the only public photo material of Palestinian children is either of adults carrying them away from some atrocity or of adults giving them assault rifles and parading them for the cameras. In short, they seem to exist only as propaganda material.
They’re not actually asking for it; they’re making a point about the problem. The person they’re responding to is basically going “those images exist, tough shit.”
Why does it matter what the excuse is?
You shouldn’t get a stereotype (or in this case I suppose propaganda?) when you give a neutral prompt.
Actually… you kind of should. A neutral prompt should return the most commonly appearing match from the training set… which is basically what a stereotype is: an abstraction of the most commonly appearing match from a person’s experience.
Should, would, could. AI is trained on what it scrapes off the internet. It is only feeding the Augmented Idiocy, which is already a problem.
To me, it should only “matter” for technical reasons - to help find the root of the problem and fix it at the source. If your roof is leaking, then fix the roof. Don’t become an expert on where to place the buckets.
You’re right, though. It doesn’t matter in terms of excusing or justifying anything. It shouldn’t have been allowed to happen in the first place.
I do agree that technical mistakes are interesting, but with AI the answer always seems to be creator bias. Whether it’s incomplete training sets or (one-sidedly) moderated results doesn’t really matter. It pushes the narrative in a certain direction, and people trust AIs to be impartial because they presume it’s just a machine that interprets reality, when it never is.
…as seen by the machine.
It’s amazing how easily people seem to forget that last part; they wouldn’t trust a person to be perfectly impartial, but somehow they expect an AI to be.
It’s amazing how easily people seem to forget that a machine uses the tools its creator provides. You can’t trust AI to be impartial because it never is: it is a collection of choices made by people.
This is such a bore, having the same conversation over and over. The same thing happened with NFTs and whatever else is currently at the height of its tech hype cycle. Don’t buy into the hype, and recognize both AI’s potential and its shortcomings.
Here’s what daily Palestinian kids TV programming looks like: https://youtu.be/KXcQ892cKso
Here’s a Palestinian youth summer camp: https://youtu.be/vCWMBvxWKL0