• cmnybo@discuss.tchncs.de · 4 months ago

    It’s rather hard to open source the model when you trained it off a bunch of copyrighted content that you didn’t have permission to use.

    • chebra@mstdn.io · 4 months ago

      @cmnybo @marvelous_coyote That’s… not how it works. You wouldn’t see any copyrighted works inside the model. We are already pretty sure even the closed models were trained on copyrighted works, based on what they sometimes produce. But even then, the AI companies aren’t denying it. They are just claiming it was all “fair use”, relying on a legal loophole, and they might win this. Basically the only way they could be punished on copyright grounds is if the models reproduce some copyrighted content verbatim.

        • chebra@mstdn.io · 4 months ago

          @ReakDuck Yup, and that’s a much better avenue to fight the AI companies, because reproducing copyrighted content is fundamentally almost impossible for ML models to avoid. We should stop complaining about how they scraped copyrighted content; that complaint won’t succeed until the fair-use loophole is removed. But when the models reproduce copyrighted content, that could be fatal. And this also applies to Copilot reproducing GPL code samples, for example.

          • ReakDuck · 4 months ago

            Yeah, you just summarized the thoughts I had before ChatGPT came to light.

            Ok, not really. My thought was: could I store an illegally obtained picture in an LLM and later ask it to show it again? Because I never stored it as a file, and an LLM doesn’t seem to count as storage.

            I could store pictures I would not otherwise be allowed to keep.

    • flamingmongoose@lemmy.blahaj.zone · 4 months ago

      BERT and early versions of GPT were trained on copyright-free datasets like Wikipedia and out-of-copyright books. Unsure whether those would be big enough for the modern ChatGPT types.