I found this while browsing Reddit, and looking past the first reaction of “this is terrible”, I think it can spark an interesting discussion about machine learning and how our own societal problems can end up creating bad habits and entrenching those same issues in the systems we build.

So, what do you guys think about this?

  • joojmachineOP · 5 points · 4 years ago

    I’m not that knowledgeable about ML, but from what I’ve seen, I wholeheartedly agree. It shouldn’t be used for tasks where any bias is an issue, unless it can be developed in a way that properly accounts for those biases. Failing to do so always ends up reinforcing the issues you mentioned.