I found this while browsing Reddit, and looking past my first reaction of “this is terrible”, I think it can spark an interesting discussion about machine learning and how our own societal problems can create bad habits and entrench those issues in the systems we build.

So, what do you guys think about this?

  • @k_o_t
    3 years ago

    in the case of google translate (or any translation tool, for that matter) it’s not even an issue with the ml algorithm itself: separate translations can be generated specifically for languages that have non-gendered pronouns, so the output says something like “they” or “he/she” or whatever; for other, less concrete cases it’s a different issue of course
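    the per-language workaround described above could look something like this crude post-editing sketch in python (everything here — the word list, the function name — is invented for illustration; a real system would need morphology-aware handling, and "her" is genuinely ambiguous between object and possessive):

    ```python
    # Hypothetical post-processing step for translations out of languages whose
    # third-person pronoun is gender-neutral (e.g. Turkish "o", Finnish "hän").
    # Instead of silently guessing a gender, surface both options.
    import re

    # English pronouns a model might have guessed, mapped to a combined form.
    # NOTE: "her" can also be possessive ("his/her"); this toy version ignores that.
    NEUTRAL_FORMS = {
        "he": "he/she",
        "she": "he/she",
        "him": "him/her",
        "her": "him/her",
        "his": "his/her",
    }

    def neutralize_pronouns(translation: str) -> str:
        """Replace gendered third-person pronouns with a combined 'he/she' form."""
        def repl(match: re.Match) -> str:
            word = match.group(0)
            combined = NEUTRAL_FORMS[word.lower()]
            # Preserve sentence-initial capitalization.
            return combined.capitalize() if word[0].isupper() else combined

        pattern = r"\b(" + "|".join(NEUTRAL_FORMS) + r")\b"
        return re.sub(pattern, repl, translation, flags=re.IGNORECASE)

    print(neutralize_pronouns("She is a doctor."))  # He/she is a doctor.
    print(neutralize_pronouns("He is a nurse."))    # He/she is a nurse.
    ```

    the point is that this sits entirely outside the model: no retraining needed, just refusing to commit to a gender the source sentence never specified.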

    i’m actually against using ml everywhere it remotely seems to make sense: imo the entire movement has been made worse by the hype around it, which has skewed its applications away from areas where it could genuinely help society (things like science and medicine) toward things that are easy to monetize, and now we have people with phds in ml trying to discover new ways to keep users on youtube longer so they watch more ads

    my point being that, if you threw away all the unnecessary applications of ml where gender/race/ethnicity bias could be a problem (like automated job hiring, crime profiling, information gathering for monetization purposes), there wouldn’t be that many left, and for the ones that remain the easy fix would just be getting more non-standard data, where [semi-]supervised learning is concerned of course
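    the “get more non-standard data” fix can be sketched as plain dataset rebalancing — here a toy oversampling pass (all names and the 8/2 split are invented for illustration; real fixes would involve actually collecting more diverse data, not just duplicating what you have):

    ```python
    # Toy sketch of rebalancing a labelled dataset by oversampling
    # under-represented groups before training.
    import random
    from collections import Counter

    def oversample(examples, group_of, seed=0):
        """Duplicate examples from minority groups until all groups match the largest."""
        rng = random.Random(seed)  # seeded for reproducibility
        by_group = {}
        for ex in examples:
            by_group.setdefault(group_of(ex), []).append(ex)
        target = max(len(g) for g in by_group.values())
        balanced = []
        for group in by_group.values():
            balanced.extend(group)
            # Top up with random duplicates until this group reaches the target size.
            balanced.extend(rng.choice(group) for _ in range(target - len(group)))
        return balanced

    data = [("resume A", "group1")] * 8 + [("resume B", "group2")] * 2
    balanced = oversample(data, group_of=lambda ex: ex[1])
    print(Counter(ex[1] for ex in balanced))  # both groups end up with 8 examples
    ```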

    but maybe i’m wrong, i’m curious what you think

    • @joojmachineOP
      3 years ago

      I’m not that knowledgeable about ML, but from what I’ve seen, I wholeheartedly agree. It shouldn’t be used for tasks where any bias is an issue, unless it can be developed in a way that properly deals with those biases. Failing to do so always ends up reinforcing the issues you mentioned.