• juliebean@lemm.ee · 1 year ago

    ‘Reading my book infringes on my copyright,’ say confused writers.

          • TheBurlapBandit@beehaw.org · 1 year ago

            LLMs are forcing us to take a look at ourselves and see if we’re really that special.

            I don’t think we are.

            • Dominic@beehaw.org · 1 year ago

              For now, we’re special.

              LLMs are far more training data-intensive, hardware-intensive, and energy-intensive than a human brain. They’re still very much a brute-force method of getting computers to work with language.

          • monobot · 1 year ago

            So is suing, but they managed to sue AI, so why not let it read too?

            • 133arc585 · 1 year ago

              What are you trying to say?

              So is suing, but they managed to sue AI, so why not let it read too?

              How are they suing “AI”? They aren’t suing a concept. They are in fact suing a company whose product is an AI product. And it is of course the responsibility of the company who created it if they broke any laws in the process.

              • monobot · 1 year ago

                Yes, I just don’t particularly like copyright law. While I do think there are forms of it that can be useful, in its current shape it might even do more harm than good.

                And one library card could solve this case.

                • 133arc585 · 1 year ago

                  And that’s fine, I don’t like it either. But your comment was nonsense, which is why I asked what you were even trying to say.

                  • monobot · 1 year ago

                    Yeah, I tried to say too much in not enough words.

                    I guess I commented on stuff I don’t really care about.

    • HughJanus · 1 year ago

      This is what I never understood about the whole AI-training debate.

      When a human creates an artwork, they don’t do it in a vacuum. They’ve had a lifetime of inspiration from artwork they’ve discovered that inspires them to create something wholly new. AI does the same thing.

      • luciole (he/him)@beehaw.org · 1 year ago

        The AIs we are talking about are large language models. They take human work as input and produce facsimiles. They are owned by individuals or companies that have no permission to exploit, in this way, intellectual property tied to other people’s livelihoods in order to copy it.

        LLMs are not sentient, they don’t have inspiration, they are not creative and therefore do not create in the sense an artist would. They are an elaborate mathematical equation.

        “Training” an AI has nothing to do with training an actual living being. It’s just tuning: adjusting an algorithm incrementally until the operator is satisfied with the result. I think it’s defensible to call this form of extraction plagiarism.
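
That “incremental tuning” picture can be sketched as a toy loop. This is purely illustrative: the one-weight “model” and all names are invented for the example, not taken from any real training code.

```python
# Toy illustration of "training as tuning": repeatedly nudge a single
# weight until the model's output is close enough to a target value.
# Each step is a gradient update for a squared-error loss on a trivial
# "model" whose output is just the weight itself.

def tune(target, learning_rate=0.1, steps=100):
    w = 0.0                          # the single adjustable "parameter"
    for _ in range(steps):
        error = w - target           # how far off the "model" is
        w -= learning_rate * error   # nudge w to shrink the error
    return w

# After enough steps, w sits very close to the target.
tuned = tune(3.0)
```

Real LLM training does the same thing with billions of weights and a loss measured against human-written text, which is exactly why the question of whose text gets consumed matters.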

        • SinAdjetivos@beehaw.org · 1 year ago

          I partially agree with you, but I think you’re missing the end goal of Facebook et al.

          As HughJanus pointed out, it’s not really any different from a person reading a book, and by that reasoning using copyrighted material to train models like these falls well within the existing framework of “fair use”.

          However, that depends entirely on “the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes.” I agree completely with you that OpenAI’s products/business (the most blatant violator) easily violate “fair use” due to that clause. However, they’re doing it, at least partially, to “force the issue” on the open question of “how much can public information be privatized?”, with the goal of further privatizing and increasing commercial applications of raw data.

          As you pointed out LLMs can only create facsimiles and not the original work, and by that logic they can’t exactly replicate the inputs either.

          No I don’t think artists can claim that they own any and all “cheap facsimiles” of their works, but by that same reasoning none of these models produced should be allowed to be the enforceable property of any individual/company either.

          For further reading check out:

          • Kelly v. Arriba Soft Corporation on why “thumbnails” (and by extension LLMs, “eigen-images”, etc.) are inherently transformative and constitute fair use.
          • Bridgeport Music, Inc. v. Dimension Films for the negative impacts that ruling has had and how it still doesn’t protect artists from their work being used to train an LLM.
          • “Variational auto-encoders” for understanding how the latest LLMs actually achieve a significant amount of “originality” and, I would argue, are able to be minimally creative.
        • i_am_not_a_robot@discuss.tchncs.de · 1 year ago

          Most likely, if you ask ChatGPT to summarize a famous book, it does not need to have ever trained on the book itself. The easiest way for an LLM to create a summary of something is to base its summary off existing summaries created by humans. If it’s ruled in court that ChatGPT is infringing on the copyright of a book’s author only by repeating information it acquired from other summaries created by humans, what implications does that have for the humans who wrote the other summaries?

          • monobot · 1 year ago

            Same as everybody else… “standing on shoulders of giants”.

          • HughJanus · 1 year ago

            Yes because creating the AI didn’t require any work and only 1 person is allowed to “cash in”.

      • Dominic@beehaw.org · 1 year ago

        AIs are trained for the equivalent of thousands of human lifetimes (if not more). There’s no precedent for anything like this.

    • SinJab0n@mujico.org · 1 year ago

      Dude, tell me, why do you think they have been doing this only with books and art but not music?

      That’s because music really has people protecting their assets. You can have your opinion about it, but that’s the only reason they haven’t ABUSED companies’ and people’s work in music.

      It’s not reading; it’s the equivalent of me taking a movie, making a function of it, charging for it, and then being displeased when the creators demand an explanation.

      • Dominic@beehaw.org · 1 year ago

        There are a few reasons why music models haven’t exploded the way that large-language models and generative image models have. Maybe the strength of the copyright-holders is part of it, but I think that the technical issues are a bigger obstacle right now.

        • Generative models are extremely data-inefficient. The Internet is loaded with text and images, but there isn’t as much music.

        • Language and vision are the two problems that machine learning researchers have been obsessed with for decades. They built up “good” datasets for these problems and “good” benchmarks for models. They also did a lot of work on figuring out how to encode these types of data to make them easier for machine learning models. (I’m particularly thinking of all of the research done on word embeddings, which are still pivotal to large language models.)
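
The word-embedding idea mentioned above can be shown with a toy example. The vectors here are made up for the illustration; real embeddings are learned from large corpora.

```python
import math

# Toy word embeddings: each word is a vector, and similarity between
# words is the cosine of the angle between their vectors.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words end up closer together than unrelated ones:
# cosine(king, queen) is larger than cosine(king, apple).
```

This kind of representation is part of what made text such a well-prepared domain for machine learning compared to raw audio.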

        Even still, there are fairly impressive models for generative music.