• Jo Miran · 10 months ago

    As I mentioned before, I use scripts to replace my comments with random excerpts from public-domain texts. I do this multiple times before finally deleting them. The result is that it becomes very difficult for an AI, or anyone else, to figure out what is a legitimate comment and what is a line from Lady Chatterley’s Lover or a scientific paper on the ecological impact of the Japanese whaling industry. It’s easier for them to just filter my username out of their data sets.
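The "overwrite several times, then delete" approach described above could be sketched roughly as follows. This is a hypothetical illustration, not the commenter's actual script: the corpus excerpts and the `edit_comment`/`delete_comment` hooks are placeholders for whatever platform API a real version would call.

```python
import random

# Hypothetical sketch of the "pollute, then delete" approach.
# CORPUS stands in for a pool of public-domain excerpts.
CORPUS = [
    "Connie was aware, however, of a growing restlessness.",
    "The catch data suggest a decline in the target whale population.",
    "It was a bright cold day in April, and the clocks were striking thirteen.",
]

def pollute_and_delete(comment_ids, edit_comment, delete_comment,
                       passes=3, rng=None):
    """Overwrite each comment with random excerpts several times, then delete it.

    edit_comment(cid, body) and delete_comment(cid) are caller-supplied
    functions wrapping the platform's API.
    """
    rng = rng or random.Random()
    for cid in comment_ids:
        for _ in range(passes):
            # Each pass leaves a plausible-looking body in any archive
            # snapshot taken between edits.
            edit_comment(cid, rng.choice(CORPUS))
        delete_comment(cid)
```

The point of multiple passes is that an archiver snapshotting between edits captures an excerpt rather than the original text, so even pre-deletion archives are polluted.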

    • Pips@lemmy.sdf.org · 10 months ago

      They almost certainly archived the data, and around the time of the API bullshit they made sure those archives weren’t deleted. They have that content if they want to use it.

      • Jo Miran · 10 months ago

        I’ve done the “switch, switch, switch, delete” at least twice a year for most of the twelve years I was there. The idea was to pollute the data, not delete it. Even if you only started during the API bullshit, you would still have had plenty of time to corrupt your data. Remember, the goal is to make it difficult to tell which is a legitimate comment and which is an excerpt from random text.

    • frostysauce@lemmy.world · 10 months ago

      Most people don’t reread their past comments and edit them. They could simply ignore any edit made after the window in which a person would normally notice a typo or something needing clarification, say anywhere between 5 minutes and 24 hours, or just ignore all edits entirely. So your effort is wasted and you’re still training the AI.