I have experience running servers, but I'd like to know if it's actually possible to do. I just need a private, GPT-3.5-like LLM running.
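
A minimal sketch of what a self-hosted setup can look like, using the Hugging Face transformers library; the model ID and prompt below are placeholder assumptions, not recommendations, and quantized models (or llama.cpp) are the usual route on CPU-only servers:

```python
# Minimal sketch: running a local, GPT-3.5-ish model with Hugging Face transformers.
# The model ID is only an example; pick an open-weights model that fits your hardware.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # placeholder, not a recommendation
    device_map="auto",                           # GPU if available, otherwise CPU
)

prompt = "Question: Can I run a private LLM on my own server?\nAnswer:"
out = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```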

  • TheBigBrother@lemmy.world (OP) · 5 months ago

    I was talking about that with a friend a few days ago, and they ran an experiment: they had the AI correct only the punctuation errors in a text document, no words at all, which you could easily fix by hand, and the anti-AI system still flagged it as 99% AI-generated. I don't know how to explain that. Maybe the text was AI-generated in the first place, IDK, or there's a watermark somewhere, a pattern or something.

    Edit: your point is that there's no way to fool the anti-AI systems by running a private LLM?

    • entropicdrift@lemmy.sdf.org · 5 months ago

      Just that they're no easier to use for fooling an anti-AI system than ChatGPT, Gemini, Bing, or Claude. Those AI detectors also give false positives on work written by humans; they're unreliable in the first place.

      Basically, they’re “boring text detectors” more than anything else.

      • TheBigBrother@lemmy.world (OP) · 5 months ago

        I have a friend who runs a homework-on-demand business, and he uses AI to do the work. He recently had a job sent back because AI-generated content was detected in it. He used to hire real people, but they sometimes used AI too. He knows I'm a "hacker" LMAO and asked me if I knew any way to fool the anti-AI systems. I thought about running a private LLM and training it on real, human-written content like ebooks, chosen to match the subject of the assignment. Do you think these things could be fooled with that method?

        • entropicdrift@lemmy.sdf.org · 5 months ago

          So first of all, you shouldn’t involve yourself in your friend’s business. Fraud is generally frowned upon.

          But secondly, you know that ChatGPT was trained on the entire internet, right? Like, every book. I don’t think “more books” is gonna help.

          I hope you take your computer skills and make something of yourself. Try not to get any more involved in this scheme, seriously. You don’t need this crap marring your reputation.

          Besides, there are better reasons/ways to fight the system than helping other people avoid learning.

          • TheBigBrother@lemmy.world (OP) · 5 months ago

            TBH I'm going down the rabbit hole hard; it's the way I am. If I get an idea, I'm not happy until it starts making money. As I see it, it's not completely bad: education is a fucking shitty mess, just a way to take people's money (making them pay off a loan for 30 years) while perpetuating the fake idea of social status. If we grab some of those bucks along the way, I don't see what's wrong with that. These dumb people will do their thing one way or another anyway.

            • LengAwaits@lemmy.world · 5 months ago

              This is some top tier mental gymnastics. Holy shit, I hope you’re a troll. You’re literally on the internet discussing your plans to commit fraud. Mensa-level shit, here.

              People are going to buy CP one way or another… that means you should make it and sell it to them, right?

              Grow the fuck up, and maybe train an LLM on ethics; you're going to need some education on the subject if you hope to stay out of prison.

        • hperrin@lemmy.world · 5 months ago

          Your “friend’s” business is very unethical. Maybe your friend should think about what they’re doing with their life, and quit doing this.

    • al4s@feddit.de · 5 months ago

      LLMs work by predicting the most likely next token, and LLM detection works by checking how often the most likely next token was actually chosen. You can tell the LLM to pick less likely tokens more often (turn up the temperature parameter), but push it far enough and you only get gibberish out. So no, there isn't.
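
      A toy sketch of what turning up the temperature does to next-token sampling; the vocabulary and logits below are made up for illustration, not taken from any real model:

      ```python
      # Toy illustration of temperature-scaled next-token sampling.
      # The vocabulary and logits are invented for the example.
      import numpy as np

      rng = np.random.default_rng(0)

      def sample_next_token(logits, temperature):
          scaled = np.asarray(logits, dtype=float) / temperature
          probs = np.exp(scaled - scaled.max())   # numerically stable softmax
          probs /= probs.sum()
          return rng.choice(len(probs), p=probs), probs

      vocab = ["the", "a", "dog", "xylophone"]    # toy vocabulary
      logits = [4.0, 2.5, 1.0, -2.0]              # the model strongly prefers "the"

      for t in (0.2, 1.0, 2.0):
          idx, probs = sample_next_token(logits, temperature=t)
          print(f"T={t}: picked {vocab[idx]!r}, probs={np.round(probs, 3)}")
      # Low temperature concentrates probability on the most likely token (the pattern
      # detectors look for); high temperature flattens the distribution, which is why
      # the output degrades into gibberish well before it looks "human".
      ```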

      • TheBigBrother@lemmy.world (OP) · 5 months ago

        I think hosting my own LLM wouldn't work. As someone said, the big models are already trained on everything on the internet, so there's no point in feeding them more material like ebooks. I'd have to find a way to make the AI write dumber, or have it analyze how a particular author writes and then emulate that author.