• NuXCOM_90Percent@lemmy.zip

    Rhetorical question (because we clearly can infer the answer) but… have you ever seen a black person?

    A bit of melanin does not make you into some giant void that breaks all cameras. Black folk aren’t doing long exposure shots for selfies or group photos. Believe it or not, RDCWorld doesn’t need to use night-vision cameras to film a skit.

    • conciselyverbose@sh.itjust.works

      You can keep hand-waving away the statement of fact that lower precision input is lower precision input.

      And yes, for actual photography (where people are deliberately still for long enough to offset the longer exposure required), you do actually need different lighting and different camera settings to get the same quality results. But real cameras are also capable of capturing far more dynamic range without leaning heavily on postprocessing guesswork.
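
      As a rough back-of-envelope illustration of that exposure gap (the reflectance values below are illustrative assumptions, not measurements of anything):

      ```python
      # Back-of-envelope sketch: how much extra exposure a darker subject needs
      # to land at the same sensor signal level. Reflectance values are assumed
      # for illustration only.
      import math

      lighter_subject_reflectance = 0.35  # assumed reflectance of a lighter surface
      darker_subject_reflectance = 0.07   # assumed reflectance of a darker surface

      # At fixed settings the sensor signal scales roughly linearly with reflected
      # light, so the exposure gap in photographic "stops" is a log2 ratio.
      stops = math.log2(lighter_subject_reflectance / darker_subject_reflectance)
      print(f"~{stops:.1f} extra stops of exposure needed")  # ~2.3 stops

      # Each stop means doubling shutter time, doubling aperture area, or doubling
      # ISO - and raising ISO amplifies noise instead of adding real signal.
      ```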

      • xor@lemmy.blahaj.zone

        And you can keep hand-waving away the fact that lower precision because of less light is not the primary cause of racial bias in facial recognition systems - it’s the fact that the datasets used for training are racially biased.
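
        A minimal sketch of that dataset effect, using synthetic data and a logistic-regression stand-in rather than any real face-recognition model (all numbers here are assumptions):

        ```python
        # Toy illustration: a group that is underrepresented in the training data
        # tends to get worse accuracy, even when the model "works" overall.
        # Synthetic data only, not a real face-recognition pipeline.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        def make_group(n_per_class, shift):
            # Two classes per group; `shift` moves the group's feature distribution
            # so its ideal decision boundary sits in a slightly different place.
            X0 = rng.normal(0.0 + shift, 1.0, size=(n_per_class, 5))
            X1 = rng.normal(1.0 + shift, 1.0, size=(n_per_class, 5))
            return np.vstack([X0, X1]), np.array([0] * n_per_class + [1] * n_per_class)

        # Imbalanced training set: 2000 samples from group A, only 100 from group B.
        Xa, ya = make_group(1000, shift=0.0)
        Xb, yb = make_group(50, shift=0.8)
        model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

        # Balanced test sets expose the per-group accuracy gap.
        Xa_test, ya_test = make_group(2000, shift=0.0)
        Xb_test, yb_test = make_group(2000, shift=0.8)
        print("group A accuracy:", round(model.score(Xa_test, ya_test), 3))
        print("group B accuracy:", round(model.score(Xb_test, yb_test), 3))
        ```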

        • conciselyverbose@sh.itjust.works

          Yes, it is. The idea that giant corporations “aren’t trying” is laughable, and it’s a literal guarantee that massively lower-quality, noisier inputs will result in a lower-quality model with lower-quality outputs.

          Fewer photons hitting the sensor matters. A lot.
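
          The photon point is simple shot-noise arithmetic (idealized sensor, illustrative photon counts):

          ```python
          # Idealized shot-noise sketch: photon arrivals are Poisson, so a pixel's
          # signal-to-noise ratio scales as the square root of the photon count.
          # The photon counts below are illustrative assumptions.
          import math

          photons_bright = 10_000  # assumed count for a well-lit pixel
          photons_dim = 1_000      # assumed count for a pixel getting 10x less light

          snr_bright = photons_bright / math.sqrt(photons_bright)  # sqrt(10_000) = 100
          snr_dim = photons_dim / math.sqrt(photons_dim)           # sqrt(1_000) ~ 31.6

          print(f"SNR bright: {snr_bright:.0f}, SNR dim: {snr_dim:.1f}")
          # 10x less light already costs ~3.2x in SNR before read noise, quantization,
          # or any denoising guesswork even enters the picture.
          ```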