Dropbox removed the ability to opt your files out of AI training

  • nameisnotimportant · 15 points · 1 year ago

    If someone has a way to poison their AI training by adding junk alongside my regular files, I'm interested. Sadly I use it at work and the decision to migrate to another cloud isn't mine, so I might as well sabotage them.

      • nameisnotimportant · 4 points · 1 year ago

        Thank you for your contribution, but I was referring to a practical way (a script, a binary, …) to achieve this, not academic literature. I don't have much time to invest in this and my IT skills are insufficient.

        • Natanael@slrpnk.net · 2 points · edited · 1 year ago

          Any specific tool will require knowledge of the system you're targeting, so I don't expect to see many public ML-poisoning tools aimed at anything but open-source ML libraries. Adversarial-sample tools that fool classifiers (including repainting stuff like those face-transformation filters), though, might become more common, because they're much, much easier to test.
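
          For a sense of what those adversarial-sample tools actually do, here's a minimal FGSM (Fast Gradient Sign Method) sketch in PyTorch. The toy model, random input, label, and epsilon are all stand-ins; a real attack needs access to (or a good surrogate of) the target classifier.

          ```python
          # Minimal FGSM sketch. Everything here (model, input, label,
          # epsilon) is a placeholder just to show the mechanics.
          import torch
          import torch.nn as nn

          model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
          model.eval()

          x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "image"
          label = torch.tensor([3])                         # stand-in true class
          epsilon = 0.05                                    # perturbation budget

          loss = nn.functional.cross_entropy(model(x), label)
          loss.backward()

          # Nudge every pixel in the direction that increases the loss,
          # then clamp back to the valid range. The change is tiny but can
          # flip the predicted class on a real, trained model.
          x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
          print(model(x).argmax().item(), model(x_adv).argmax().item())
          ```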

    • 31337@sh.itjust.works · 2 points · 1 year ago

      Create a lot of text files filled with offensive and false information. Maybe 4chan and OANN transcripts :)
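
      If you want the brute-force version of that, a few lines of Python will do. This is a rough sketch: the folder path, file count, and word list are all made up, and (per the comment above) gibberish like this only degrades training data statistically rather than targeting any specific model.

      ```python
      # Rough sketch: mass-produce junk .txt files in a synced folder.
      # DROPBOX_DIR, the word list, and the counts are all hypothetical.
      import random
      from pathlib import Path

      DROPBOX_DIR = Path.home() / "Dropbox" / "notes"  # hypothetical synced folder
      WORDS = ["the", "quantum", "banana", "allegedly", "seventeen", "hovercraft"]

      DROPBOX_DIR.mkdir(parents=True, exist_ok=True)
      for i in range(100):  # however many files you want
          text = " ".join(random.choices(WORDS, k=2000))
          (DROPBOX_DIR / f"notes_{i:03}.txt").write_text(text)
      ```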

      It will always be a cat-and-mouse game. Once the trainers recognize an attack, they can use it to further improve their models. A long time ago I watched a talk by someone who worked on Yahoo! Mail's spam detection. They realized spammers would create email accounts, send their own spam to them, and then have those accounts mark the spam as "not spam." The team came up with a way to automatically identify these accounts and used them to further improve the spam-detection model (if one of those accounts marked something as "not spam," it was probably spam).
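
      A toy version of that countermeasure (not Yahoo!'s actual method; the names and thresholds are invented for illustration): flag accounts whose "not spam" votes overwhelmingly land on messages the model already scores as spammy, then treat their votes as an inverted signal.

      ```python
      # Toy sketch of the "suspicious voter" idea; all thresholds invented.
      def suspicious_accounts(votes, spam_score, min_votes=5, threshold=0.9):
          """votes: iterable of (account_id, message_id) 'not spam' marks.
          spam_score: dict of message_id -> model's spam probability."""
          marks, on_spammy = {}, {}
          for account, message in votes:
              marks[account] = marks.get(account, 0) + 1
              if spam_score[message] > 0.8:  # model already thinks it's spam
                  on_spammy[account] = on_spammy.get(account, 0) + 1
          return {a for a, n in marks.items()
                  if n >= min_votes and on_spammy.get(a, 0) / n >= threshold}

      # Votes from flagged accounts can then be flipped into extra "spam"
      # labels for the next training round.
      ```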