• Communist · 7 months ago

    How could this even happen by accident?

    • kromem@lemmy.world · 7 months ago

      Because it has five billion images?

      The images potentially at issue comprise less than one percent of one percent of one percent of the total.
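
      Just to make that fraction concrete, using the five-billion figure from this same comment (a sanity check, not a figure from the report itself):

      \[ \left(\tfrac{1}{100}\right)^{3} \times 5\times10^{9} = 10^{-6} \times 5\times10^{9} = 5{,}000 \]

      i.e. "less than one percent of one percent of one percent" means fewer than roughly 5,000 images out of 5 billion.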

      • Communist · 7 months ago

        Don’t they need to label the data?

        • kromem@lemmy.world · 7 months ago

          No, it’s not manually labeled. The text is paired with each image automatically, based on things like alt text or the caption next to it in a social media post; those pairs were then run through a different AI (CLIP), which rated how well the text description matched the image, and pairs with a low score were filtered out.
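
          A rough sketch of what that CLIP-score filtering looks like in practice. The checkpoint name and the 0.28 cutoff are illustrative assumptions, not the exact values used for the dataset:

          ```python
          # Sketch: keep a scraped (image, alt-text) pair only if CLIP thinks
          # the text actually describes the image.
          import torch
          from PIL import Image
          from transformers import CLIPModel, CLIPProcessor

          MODEL_NAME = "openai/clip-vit-base-patch32"  # assumption: any CLIP checkpoint works for the sketch
          THRESHOLD = 0.28                             # assumption: illustrative similarity cutoff

          model = CLIPModel.from_pretrained(MODEL_NAME).eval()
          processor = CLIPProcessor.from_pretrained(MODEL_NAME)

          def keep_pair(image_path: str, alt_text: str) -> bool:
              """True if the alt text matches the image well enough to keep the pair."""
              image = Image.open(image_path).convert("RGB")
              inputs = processor(text=[alt_text], images=image,
                                 return_tensors="pt", padding=True, truncation=True)
              with torch.no_grad():
                  img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
                  txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                                    attention_mask=inputs["attention_mask"])
              # Cosine similarity between the image and its own caption.
              img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
              txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
              return (img_emb @ txt_emb.T).item() >= THRESHOLD
          ```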

          The point of the OP research is that they should add another step that checks CSAM databases rather than relying on social media curation to have avoided illegal material (which they should, even though it’s a very, very small portion of the overall dataset).
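
          A minimal sketch of what that extra screening step could look like, assuming a local list of known-bad perceptual hashes. In reality this would go through a dedicated hash set and service (e.g. PhotoDNA or PDQ via the relevant clearinghouses); `imagehash` and `known_bad_hashes.txt` here are just stand-ins:

          ```python
          # Sketch: flag scraped images whose perceptual hash is close to a
          # known-bad hash before they ever enter the training set.
          from PIL import Image
          import imagehash

          MAX_DISTANCE = 4  # assumption: small Hamming distance counts as a match

          def load_known_bad(path: str) -> list[imagehash.ImageHash]:
              # Hypothetical file of hex-encoded perceptual hashes, one per line.
              with open(path) as f:
                  return [imagehash.hex_to_hash(line.strip()) for line in f if line.strip()]

          def is_flagged(image_path: str, known_bad: list[imagehash.ImageHash]) -> bool:
              """True if the image is within MAX_DISTANCE of any known-bad hash."""
              h = imagehash.phash(Image.open(image_path))
              return any(h - bad <= MAX_DISTANCE for bad in known_bad)

          known_bad = load_known_bad("known_bad_hashes.txt")   # hypothetical hash list
          scraped = ["scraped/0001.jpg", "scraped/0002.jpg"]   # hypothetical scraped files
          clean = [p for p in scraped if not is_flagged(p, known_bad)]
          ```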

          But at no time was a human reviewing CSAM, labeling it, and including it in the data.

    • sir_reginald@lemmy.world · 7 months ago

      Removing these images from the open web has been a headache for webmasters and admins for years on sites that host user-uploaded images.

      If the billions of images in the training data were automatically scraped from the internet, I don’t find it surprising that there was CSAM in there.