@kevincox

An AI trained on racist data will mirror the racism of its input dataset.

Imagine that you create an AI to determine whether someone is lying based on a video. If that dataset is human-curated and labeled with racist tendencies (for example, people who look a certain way are labeled as lying more often even when they aren't), then the AI will learn that.

But even a perfectly true dataset can train a racist AI. Imagine that the previous dataset only has lying examples for people who look a certain way (or the vast majority of those examples are lying), whereas another group of people is lying only 10% of the time. The AI will probably extrapolate that everyone in the first group is lying, because it has seen no (or few) counterexamples.
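That second failure mode can be shown in a few lines. This is a minimal toy sketch, not anyone's real system: the group names, sampling probabilities, and the trivial majority-label "model" are all made up for illustration. Even though every individual label here could be accurate, the skewed sampling alone is enough to make the model condemn an entire group.

```python
import random

random.seed(0)

# Hypothetical toy dataset: each example is (group, is_lying).
# Group "A" clips were sampled so that ~95% show lying;
# group "B" clips show lying only ~10% of the time.
# Every label can be individually true, but the sampling is skewed.
data = [("A", random.random() < 0.95) for _ in range(200)] + \
       [("B", random.random() < 0.10) for _ in range(200)]

# A deliberately simple "model": predict the majority label
# observed for each group in the training data.
counts = {}
for group, lying in data:
    yes, total = counts.get(group, (0, 0))
    counts[group] = (yes + int(lying), total + 1)

def predict(group):
    """Return True if the model predicts 'lying' for this group."""
    yes, total = counts[group]
    return yes / total >= 0.5

print(predict("A"))  # the model now labels everyone in group A a liar
print(predict("B"))
```

A real classifier would look at more than group membership, but the mechanism is the same: with almost no counterexamples for group A, any feature correlated with that group gets learned as a predictor of lying.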
