• P03 Locke@lemmy.dbzer0.com · 1 year ago

    Discrimination is the wrong word. Technology has no morals or sense of justice. It is bias in the data that developers should have accounted for.

    • steltek@lemm.ee · 1 year ago

      It’s totally accurate though. It’s basically the definition of systemic racism. Think about housing or financial policies that disproportionately fail minorities: they aren’t some Klan manifesto, they just include banal qualifications and exemptions that end up producing the same result.

    • HardlightCereal@lemmy.world · 1 year ago

      You need to learn some critical race theory. Racist systems turn innocent intentions into racist actions. If a PhD student trains an AI model on only white people because the university only has white students, then that AI model is going to fail black people because black people were already failed by university admissions. Innocent intention plus racist system equals racist action.
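      For what it’s worth, that failure mode is easy to demonstrate. Below is a minimal sketch (purely synthetic data and hypothetical group labels, not from any real study) where a model trained on a dataset that is almost entirely one group reports fine overall accuracy while quietly failing the under-represented group:

      ```python
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(0)

      # Training data drawn almost entirely from group "A", mimicking a
      # dataset collected only from the university's white students.
      n = 1000
      group = rng.choice(["A", "B"], size=n, p=[0.98, 0.02])
      x = rng.normal(size=(n, 2))
      # The underlying relationship differs by group, so a model fit
      # mostly on A generalizes poorly to B.
      y = (x[:, 0] + np.where(group == "B", x[:, 1], -x[:, 1]) > 0).astype(int)

      model = LogisticRegression().fit(x, y)

      # Evaluate on a balanced test set and report accuracy per group.
      m = 2000
      g_test = rng.choice(["A", "B"], size=m, p=[0.5, 0.5])
      x_test = rng.normal(size=(m, 2))
      y_test = (x_test[:, 0] + np.where(g_test == "B", x_test[:, 1], -x_test[:, 1]) > 0).astype(int)
      pred = model.predict(x_test)
      for g in ("A", "B"):
          mask = g_test == g
          print(g, round(accuracy_score(y_test[mask], pred[mask]), 2))
      # Aggregate accuracy looks acceptable, but group B's accuracy is far
      # worse: innocent intention plus skewed data equals biased output.
      ```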

    • slumberlust@lemmy.world · 1 year ago

      This seems shortsighted. You are basically asking people to police their own biases. That’s a tall ask for something no one can claim immunity from.

      • P03 Locke@lemmy.dbzer0.com · 1 year ago

        I am asking a group of scientists, who should be well-versed in statistics and in weights (one of the biggest components of a machine learning model), to account for how biased their data is when engineering that model.

        It’s really not a hard ask.
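        To be concrete, here’s a minimal sketch of one standard way to do it: reweighting samples inversely to their group’s frequency via scikit-learn’s sample_weight hook. The group labels and data are hypothetical, and this is one common technique among several, not the only correct approach:

        ```python
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def balanced_sample_weights(groups):
            """Upweight under-represented groups so the majority
            doesn't drown them out during training."""
            values, counts = np.unique(groups, return_counts=True)
            freq = dict(zip(values, counts / len(groups)))
            return np.array([1.0 / freq[g] for g in groups])

        # Toy skewed dataset: 95% group "A", 5% group "B".
        rng = np.random.default_rng(1)
        groups = rng.choice(["A", "B"], size=500, p=[0.95, 0.05])
        x = rng.normal(size=(500, 3))
        y = (x[:, 0] > 0).astype(int)

        weights = balanced_sample_weights(groups)
        model = LogisticRegression().fit(x, y, sample_weight=weights)
        ```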

    • Cortell@lemmy.world · 1 year ago

      Ask the people who create the data sets that machine learning models train on how they feel about racism, then get back to us.

    • Corhen@lemmy.world · 1 year ago

      It can be an imported bias/discrimination. I still think that word’s fair.

      Do you have a more accurate word?

      • P03 Locke@lemmy.dbzer0.com · 1 year ago

        I already said it: bias. It’s a common problem with LLMs and other machine learning models that model engineers need to watch out for.
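        And “watch out for” can be as simple as measuring it before shipping. Here’s a minimal sketch of one common check, demographic parity, hand-rolled with hypothetical numbers rather than tied to any particular fairness library:

        ```python
        import numpy as np

        def demographic_parity_gap(pred, groups):
            """Largest difference in positive-prediction rate between
            groups; zero means every group is flagged at the same rate."""
            rates = [pred[groups == g].mean() for g in np.unique(groups)]
            return max(rates) - min(rates)

        # E.g. a loan model approving 70% of group A but 40% of group B:
        pred = np.array([1]*7 + [0]*3 + [1]*4 + [0]*6)
        grps = np.array(["A"]*10 + ["B"]*10)
        print(demographic_parity_gap(pred, grps))  # 0.3 gap -> audit before shipping
        ```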