Those claiming AI training on copyrighted works is “theft” misunderstand key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves. When AI systems ingest copyrighted works, they’re extracting general patterns and concepts - the “Bob Dylan-ness” or “Hemingway-ness” - not copying specific text or images.

This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages. The AI discards the original text, keeping only abstract representations in “vector space”. When generating new content, the AI isn’t recreating copyrighted works, but producing new expressions inspired by the concepts it’s learned.
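As a loose illustration of what “keeping only abstract representations” means, here is a minimal toy sketch (hypothetical code, nothing like a production model; real systems learn dense embeddings rather than hashed counts):

```python
# Toy "embedding": text is reduced to a fixed-size vector of pattern
# statistics; the original wording is not stored anywhere.
import hashlib

def embed(text: str, dims: int = 8) -> list[float]:
    """Map text to a fixed-size vector via hashed token counts."""
    vec = [0.0] * dims
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    total = sum(vec) or 1.0
    return [v / total for v in vec]  # normalize: only proportions survive

print(embed("the times they are a-changin"))
# Many different texts map to similar vectors; the exact wording
# cannot be recovered from these eight numbers.
```

The point of the sketch is only the lossiness: what persists is a position in a vector space, not the text itself.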

This is fundamentally different from copying a book or song. It’s more like the long-standing artistic tradition of being influenced by others’ work. The law has always recognized that ideas themselves can’t be owned - only particular expressions of them.

Moreover, there’s precedent for this kind of use being considered “transformative” and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was ruled legal (Authors Guild v. Google, 2015) despite protests from authors and publishers. AI training is arguably even more transformative.

While it’s understandable that creators feel uneasy about this new technology, labeling it “theft” is both legally and technically inaccurate. We may need new ways to support and compensate creators in the AI age, but that doesn’t make the current use of copyrighted works for AI training illegal or unethical.

For those interested, this argument is nicely laid out by Damien Riehl in FLOSS Weekly episode 744. https://twit.tv/shows/floss-weekly/episodes/744

  • lettruthout@lemmy.world · 10 days ago

    If they can base their business on stealing, then we can steal their AI services, right?

    • masterspace@lemmy.ca · 10 days ago

      How do you feel about Meta and Microsoft who do the same thing but publish their models open source for anyone to use?

      • lettruthout@lemmy.world · 10 days ago

        Well, how long do you think that’s going to last? They are for-profit companies, after all.

        • masterspace@lemmy.ca · 10 days ago

          I mean, we’re having a discussion about what’s fair; my implicit question is whether that would be a fair regulation to impose.

      • WalnutLum · 10 days ago

        Those aren’t open source, neither by the OSI’s Open Source Definition nor by the OSI’s Open Source AI Definition.

        The important part for the latter being a published listing of all the training data. (Trainers don’t have to provide the data, but they must provide at least a way to recreate the model given the same inputs).

        Data information: Sufficiently detailed information about the data used to train the system, so that a skilled person can recreate a substantially equivalent system using the same or similar data. Data information shall be made available with licenses that comply with the Open Source Definition.

        They are model-available if anything.

        • masterspace@lemmy.ca · 10 days ago

          For the purposes of this conversation, that’s pretty much just a pedantic difference. They are paying to train those models and then providing them to the public to use completely freely in any way they want.

          It would be like developing open source software and then not calling it open source because you didn’t publish the market research that guided your UX decisions.

          • WalnutLum · 9 days ago

            You said open source. Open source is a type of licensure.

            The entire point of licensure is legal pedantry.

            And as far as your metaphor is concerned, pre-trained models are closer to pre-compiled binaries, which are expressly not considered Open Source according to the OSD.

            • masterspace@lemmy.ca · 5 days ago (edited)

              You said open source. Open source is a type of licensure.

              The entire point of licensure is legal pedantry.

              No. Open source is a concept. That concept also has pedantic legal definitions, but the concept itself is not inherently pedantic.

              And as far as your metaphor is concerned, pre-trained models are closer to pre-compiled binaries, which are expressly not considered Open Source according to the OSD.

              No, they’re not. Which is why I didn’t use that metaphor.

              A binary is explicitly a black box. There is nothing to learn from a binary, unless you explicitly decompile it back into source code.

              In this case, literally all the source code is available. Any researcher can read through their model, learn from it, copy it, twist it, and build their own version of it wholesale. Not providing the training data is more like saying Yuzu or another emulator isn’t open source because it doesn’t ship copyrighted games. It provides literally all the parts it can open source, and then lets the user feed it whatever training data they are allowed access to.

          • Arcka@midwest.social · 9 days ago

            Tell me you’ve never compiled software from open source without saying you’ve never compiled software from open source.

            The only differences between open source and freeware are pedantic, right guys?

            • masterspace@lemmy.ca · 5 days ago

              Tell me you’ve never developed software without telling me you’ve never developed software.

              A closed-source binary that is copyrighted and illegal to use is totally the same thing as all the trained weights and underlying source code for a neural network published under the MIT license that anyone can learn from, copy, and use however they want, right guys?

      • umbrella · 9 days ago

        I feel like it’s less meaningful because we don’t have access to the datasets.

  • TommySoda@lemmy.world · 10 days ago (edited)

    Here’s an experiment for you to try at home. Ask an AI model a question, copy a sentence or two of what they give back, and paste it into a search engine. The results may surprise you.

    And stop comparing AI to humans while giving AI models more freedom. If I wrote a paper, I’d need to cite my sources. Where the fuck are your sources, ChatGPT? Oh right, we’re not allowed to see that, but you can take whatever you want from us. Sounds fair.

    • PixelProf@lemmy.ca · 10 days ago

      Not to fully argue against your point, but I do want to push back on the citations bit. Given the way an LLM is trained, it’s not really close to equivalent to me citing papers researched for a paper. That would be more akin to asking me to cite every piece of written or verbal media I’ve ever encountered, as they all contributed in some small way to the way the words were formulated here.

      Now, if specific data were injected into the prompt, or maybe if it was fine-tuned on a small subset of highly specific data, I would agree those should be cited, as they are being accessed more verbatim. The whole “magic” of LLMs was that they needed to cross a threshold of data, combined with the attention mechanism, and then the network was pretty suddenly able to maintain coherent sentence structure. It was only with loads of varied data from many different sources that this really emerged.
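      To make that distinction concrete, here is a minimal sketch of the “specific data injected into the prompt” case (hypothetical documents and function names, not any real product’s API). When sources are placed in the prompt, citation is tractable in a way it isn’t for the diffuse training corpus:

```python
# Hypothetical retrieval-style prompt assembly with citation ids.
sources = {
    "doc1": "Loosen the lug nuts before lifting the car.",
    "doc2": "Torque the nuts to the manufacturer's specification.",
}

def build_prompt(question: str) -> str:
    """Inject identified passages so the answer can cite them."""
    cited = "\n".join(f"[{sid}] {text}" for sid, text in sources.items())
    return (
        "Answer using only the passages below, citing them by id.\n"
        f"{cited}\n\nQuestion: {question}"
    )

print(build_prompt("How do I change a tire?"))
```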

    • fmstrat@lemmy.nowsci.com · 9 days ago

      This is the catch with OP’s entire statement about transformation. Their premise is flawed, because the next most likely token is usually the same word the author of the original work chose.

      • TommySoda@lemmy.world · 9 days ago

        And that’s kinda my point. I understand that transformation is totally fine, but these LLMs literally copy and paste shit. And that’s still if you are comparing AI to people, which I think is completely ridiculous. If anything, these things are just more complicated search engines with half the usefulness. If I search online for how to change a tire, I can find some reliable sources to do so. If I ask AI how to change a tire, it will just spit something out that might not even be accurate, and I’d have to search afterwards anyway just to make sure what it told me was right.

        It’s just a word calculator based on information stolen from people without their consent. It has no original thought process so it has no way to transform anything. All it can do is copy and paste in different combinations.

    • azuth@sh.itjust.works · 10 days ago

      It’s not a breach of copyright or other IP law not to cite sources on your paper.

      Getting your paper rejected for lacking sources is also not an infringement of your freedom. Being forced to pay damages and delete your paper from any public space would be an infringement of your freedom.

      • explore_broaden@midwest.social · 10 days ago

        I’m pretty sure citing sources isn’t really relevant to copyright violation: either you are violating or you are not. Saying where you copied from doesn’t change anything, although if you are using some ideas with your own analysis and words, it isn’t a violation either way.

        • Eatspancakes84@lemmy.world · 10 days ago

          With music this often ends up in civil court. Pretty sure the same can in theory happen for written texts, but the commercial value of most written texts is not worth the cost of litigation.

      • TommySoda@lemmy.world · 10 days ago

        I mean, you’re not necessarily wrong. But that doesn’t change the fact that it’s still stealing, which was my point. Just because laws haven’t caught up to it yet doesn’t make it any less of a shitty thing to do.

        • azuth@sh.itjust.works · 10 days ago

          It’s not stealing, it’s not even ‘piracy’, which also is not stealing.

          Copyright laws need to be scaled back so they stop criminalizing socially accepted behavior, not expanded.

        • Octopus1348@lemy.lol · 10 days ago

          When I analyze a melody I play on a piano, I see that it reflects the music I heard that day or sometimes, even music I heard and liked years ago.

          Having similar parts, or a part that is (coincidentally) identical to a part of another song, is not stealing and does not infringe any law.

          • takeda@lemmy.world · 9 days ago

            You guys are missing a fundamental point. Copyright was created to protect an author for a specific amount of time, so that somebody else doesn’t profit from their work, essentially stealing their deserved revenue.

            LLM AI was created to do exactly that.

        • ContrarianTrail@lemm.ee · 10 days ago (edited)

          The original source material is still there. They just made a copy of it. If you think that’s stealing then online piracy is stealing as well.

          • TommySoda@lemmy.world · 9 days ago

            Well they make a profit off of it, so yes. I have nothing against piracy, but if you’re reselling it that’s a different story.

            • ContrarianTrail@lemm.ee · 9 days ago

              But piracy saves you money, which is effectively the same as making a profit. Also, it’s not just that they’re selling other people’s work for profit. You’re also paying for the insane amount of computing power it takes to train and run the AI, plus the salaries of the workers, etc.

  • EldritchFeminity@lemmy.blahaj.zone · 10 days ago

    The argument that these models learn in a way that’s similar to how humans do is absolutely false, and the idea that they discard their training data and produce new content is demonstrably incorrect. These models can and do regurgitate their training data, including copyrighted characters.

    And these things don’t learn styles, techniques, or concepts. They effectively learn statistical averages and patterns and collage them together. I’ve gotten to the point where I can guess which model of image generator was used based on the same repeated mistakes that they make every time. Take a look at any generated image, and you won’t be able to identify where a light source is because the shadows come from all different directions. These things don’t understand the concept of a shadow or lighting; they just know that statistically lighter pixels are followed by darker pixels of the same hue, and that some places have collections of lighter pixels.

    I recently heard about an AI that scientists had trained to identify pictures of wolves, which was working with incredible accuracy. When they went in to figure out how it was distinguishing wolves from dogs like huskies so well, they found that it wasn’t looking at the wolves at all. 100% of the images of wolves in its training data had snowy backgrounds, so it was simply searching for concentrations of white pixels (and therefore snow) to decide whether a picture was of a wolf.
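    A toy sketch of that snow shortcut (made-up numbers and a hypothetical rule, not the actual study, which is usually attributed to the husky/wolf experiment in Ribeiro et al.’s 2016 LIME paper):

```python
# Toy "wolf detector" that, like the classifier described above, keys on
# background brightness (snow) instead of the animal itself.
def fraction_white(pixels: list[int]) -> float:
    """Share of near-white pixels in a grayscale image (0-255)."""
    return sum(p > 200 for p in pixels) / len(pixels)

def predict(pixels: list[int]) -> str:
    # The learned "rule": lots of white pixels -> snow -> "wolf".
    return "wolf" if fraction_white(pixels) > 0.5 else "dog"

wolf_on_snow = [230] * 80 + [40] * 20    # bright background, dark animal
husky_on_grass = [60] * 90 + [220] * 10  # dark background
husky_on_snow = [230] * 80 + [40] * 20   # same background as the wolf

print(predict(wolf_on_snow))    # "wolf" - right, for the wrong reason
print(predict(husky_on_grass))  # "dog"
print(predict(husky_on_snow))   # "wolf" - the shortcut fails
```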

    • Riccosuave@lemmy.world · 10 days ago

      Even if they learned exactly like humans do, like so fucking what, right!? Humans have to pay EXORBITANT fees for higher education in this country. Arguing that your bot gets socialized education before the people do is fucking absurd.

      • v_krishna · 10 days ago

        That seems more like an argument for free higher education than for restricting what corpora a deep learning model can train on.

        • nickwitha_k (he/him)@lemmy.sdf.org · 10 days ago

          Porque no los dos? (Why not both?) Allowing major corps to put even more downward pressure on workers doesn’t help anyone but the rich. LLMs aren’t going to save the world or become sentient.

    • ricecake@sh.itjust.works · 10 days ago (edited)

      Basing your argument around how the model or training system works doesn’t seem like the best way to frame your point to me. It invites a lot of mucking about in the details of how the systems do or don’t work, how humans learn, and what “learning” and “knowledge” actually are.

      I’m a human as far as I know, and it’s trivial for me to regurgitate my training data. I regularly say things that are either directly references to things I’ve heard, or accidentally copy them, sometimes with errors.
      Would you argue that I’m just a statistical collage of the things I’ve experienced, seen or read? My brain has as many copies of my training data in it as the AI model, namely zero, but “Captain Picard of the USS Enterprise sat down for a rousing game of chess with his friend Sherlock Holmes, and then Shakespeare came in dressed like Mickey mouse and said ‘to be or not to be, that is the question, for tis nobler in the heart’ or something”. Direct copies of someone else’s work, as well as multiple copyright infringements.
      I’m also shit at drawing with perspective. It comes across like a drunk toddler trying their hand at cubism.

      Arguing about how the model works or the deficiencies of it to justify treating it differently just invites fixing those issues and repeating the same conversation later. What if we make one that does work how humans do in your opinion? Or it properly actually extracts the information in a way that isn’t just statistically inferred patterns, whatever the distinction there is? Does that suddenly make it different?

      You don’t need to get bogged down in the muck of the technical to say that even if you concede every technical point, we can still say that a non-sentient machine learning system can be held to different standards with regard to copyright law than a sentient person. A person gets to buy a book, read it, and then carry around that information in their head and use it however they want. Not-A-Person does not get to read a book and hold that information without the consent of the author.
      Arguing why it’s bad for society for machines to mechanise the production of works inspired by others is more to the point.

      Computers think the same way boats swim. Arguing about the difference between hands and propellers misses the point that you don’t want a shrimp boat in your swimming pool. I don’t care why they’re different, or that it technically did or didn’t violate the “free swim” policy, I care that it ruins the whole thing for the people it exists for in the first place.

      I think all the AI stuff is cool, fun and interesting. I also think that letting it train on everything regardless of the creators wishes has too much opportunity to make everything garbage. Same for letting it produce content that isn’t labeled or cited.
      If they can find a way to do and use the cool stuff without making things worse, they should focus on that.

      • keegomatic@lemmy.world · 10 days ago

        I’m not the above poster, but I really appreciate your argument. I think many people overcorrect in their minds about whether or not these models learn the way we do, and they miss the fact that they do behave very similarly to parts of our own systems. I’ve generally found that that overcorrection leads to bad arguments about copyright violation and ethical concerns.

        However, your point is very interesting (and it is thankfully independent of that overcorrection). We’ve never had to worry about nonhuman personhood in any amount of seriousness in the past, so it’s strangely not obvious despite how obvious it should be: it’s okay to treat real people as special, even in the face of the arguable personhood of a sufficiently advanced machine. One good reason the machine can be treated differently is because we made it for us, like everything else we make.

        I think there still is one related but dangling ethical question. What about machines that are made for us but we decide for whatever reason that they are equivalent in sentience and consciousness to humans?

        A human has rights and can take what they’ve learned and make works inspired by it for money, or for someone else to make money through them. They are well within their rights to do so. A machine that we’ve decided is equivalent in sentience to a human, though… can that nonhuman person go take what it’s learned and make works inspired by it so that another person can make money through them?

        If they SHOULDN’T be allowed to do that, then it’s notable that this scenario is only separated from what we have now by a gap in technology.

        If they SHOULD be allowed to do that (which we could make a good argument for, since we’ve agreed that it is a sentient being) then the technology gap is again notable.

        I don’t think the size of the technology gap actually matters here, logically; I think you can hand-wave it away pretty easily and apply it to our current situation rather than a future one. My guess, though, is that the size of the gap is of intuitive importance to anyone thinking about it (I’m no different) and most people would answer one way or the other depending on how big they perceive the technology gap to be.

      • petrol_sniff_king@lemmy.blahaj.zone · 10 days ago (edited)

        Arguing why it’s bad for society for machines to mechanise the production of works inspired by others is more to the point.

        I agree, but the fact that shills for this technology are also wrong about it is at least interesting.

        Rhetorically speaking, I don’t know if that’s useless.

        I don’t care why they’re different, or that it technically did or didn’t violate the “free swim” policy,

        I do like this point a lot.

        If they can find a way to do and use the cool stuff without making things worse, they should focus on that.

        I do miss when the likes of cleverbot was just a fun novelty on the Internet.

      • Eatspancakes84@lemmy.world · 10 days ago

        Another good question is why AIs do not mindlessly regurgitate source material. The reason is that they have access to so much copyrighted material. If they were trained on only one book, they would constantly regurgitate material from that one book. Because they’re trained on many (millions of) books, they’re able to get creative. So OpenAI’s argument really boils down to: “we are not breaking copyright law, because we have used sufficient copyrighted material to avoid directly infringing on copyright”.
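        A toy illustration of that one-book point, using a bigram Markov chain (vastly simpler than an LLM, so only a loose analogy): with a single training text, every word has exactly one continuation, so “generation” is regurgitation.

```python
import random
from collections import defaultdict

def train(words: list[str]) -> dict[str, list[str]]:
    """Record, for each word, the words seen after it."""
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model: dict[str, list[str]], start: str, n: int = 10) -> str:
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

one_book = "according to all known laws of aviation a bee cannot fly".split()
print(generate(train(one_book), "according"))
# With one source there is nothing to choose between: the output is the
# training text, verbatim. Variety only appears with many sources.
```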

        • ricecake@sh.itjust.works · 9 days ago

          Eeeh, I still think diving into the weeds of the technical is the wrong way to approach it. Their argument is that training isn’t copyright violation, not that sufficient training dilutes the violation.

          Even if trained only on one source, it’s quite unlikely that it would generate copyright infringing output. It would be vastly less intelligible, likely to the point of overtly garbled words and sentences lacking much in the way of grammar.

          Whether what they’re doing is technically an infringement, and how it works, is entirely separate from the discussion of whether it should be infringing or permitted.

    • Eatspancakes84@lemmy.world · 10 days ago

      I am also not really getting the argument. If I as a human want to learn a subject from a book, I buy it (or I go to a library that paid for it). If it’s similar to how humans learn, it should cost equally much.

      The issue is of course that it’s not at all similar to how humans learn. It needs VASTLY more data to produce something even remotely sensible. Develop AI that’s truly transformative by making it as efficient at learning as humans are, and the cost of paying for copyright will be negligible.

      • stephen01king@lemmy.zip · 10 days ago

        If I as a human want to learn a subject from a book I buy it ( or I go to a library who paid for it). If it’s similar to how humans learn, it should cost equally much.

        You’re on Lemmy, where people casually say “piracy is morally the right thing to do”, so I’m not sure this argument works on this platform.

        • Eatspancakes84@lemmy.world · 10 days ago (edited)

          I know my way around the Jolly Roger myself. At the same time using copyrighted materials in a commercial setting (as OpenAI does) shouldn’t be free.

          • stephen01king@lemmy.zip · 10 days ago

            Only if they are selling the output. I see it more as them selling access to the service on a server farm, since running ChatGPT is not cheap.

            • Hamartia@lemmy.world · 10 days ago

              The usual cycle of tech-bro capitalism would put them at the early, acquire-market-saturation stage. So it’s unlikely that they are currently charging what they will once they are established and have displaced lots of necessary occupations.

              • stephen01king@lemmy.zip · 10 days ago

                That’s true, but that’s not a problem unique to AI and is something most people would like more regulations for.

      • Blaster M@lemmy.world · 9 days ago

        Imagine if you had blinders and earmuffs on for most of the day, and only once in a while were you allowed to interact with certain people and things. Your ability to communicate would be truncated to only what you were allowed to absorb.

    • Dran@lemmy.world · 10 days ago

      Devil’s Advocate:

      How do we know that our brains don’t work the same way?

      Why would it matter that we learn differently than a program learns?

      Suppose someone has a photographic memory, should it be illegal for them to consume copyrighted works?

      • EldritchFeminity@lemmy.blahaj.zone · 10 days ago

        Because we’re talking pattern recognition levels of learning. At best, they’re the equivalent of parrots mimicking human speech. They take inputs and output data based on the statistical averages from their training sets - collaging pieces of their training into what they think is the right answer. And I use the word think here loosely, as this is the exact same process that the Gaussian blur tool in Photoshop uses.

        This matters in the context of the fact that these companies are trying to profit off of the output of these programs. If somebody with an eidetic memory tried to sell pieces of works that they’ve consumed as their own - or even somebody copy-pasting bits from CliffsNotes - then they should get in trouble, the same as these companies.

        Given A and B, we can understand C. But an LLM will only be able to give you AB, A(b), and B(a). And they’ve even been just spitting out A and B wholesale, proving that they retain their training data and will regurgitate the entirety of copyrighted material.

    • interdimensionalmeme · 10 days ago

      The solution is that any AI must always be released under a strong copyleft, and possibly that we abolish copyright outright, as it has only served the powerful by allowing them to enclose humanity’s common intellectual heritage (see Disney’s looting and enclosing of ancestral children’s stories). If you choose to strengthen the current regime, don’t expect things to improve for you as an irrelevant, atomised individual.

  • MentalEdge@sopuli.xyz · 10 days ago (edited)

    The whole point of copyright in the first place, is to encourage creative expression, so we can have human culture and shit.

    The idea of a “teensy” exception so that we can “advance” into a dark age of creative pointlessness and regurgitated slop, where humans doing the fun part has been made “unnecessary” by the unstoppable progress of “thinking” machines, would be hilarious, if it weren’t depressing as fuck.

    • wagesj45@fedia.io · 10 days ago

      The whole point of copyright in the first place, is to encourage creative expression

      …within a capitalistic framework.

      Humans are creative creatures and will express themselves regardless of economic incentives. We don’t have to transmute ideas into capital just because they have “value”.

      • wizardbeard@lemmy.dbzer0.com · 10 days ago

        Sorry buddy, but that capitalistic framework is where we all have to exist for the foreseeable future.

        Giving corporations more power is not going to help us end that.

        • Uriel238 [all pronouns]@lemmy.blahaj.zone · 10 days ago

          Can’t say you’re wrong; however, the foreseeable future is less than two centuries, and our failure to navigate our way out of capitalism towards something more mutualistic figures largely into our imminent doom.

      • MentalEdge@sopuli.xyz · 10 days ago (edited)

        You’re not wrong.

        The kind of art humanity creates is skewed a lot by the need for it to be marketable, and then sold in order to be worth doing.

        But copyright is better than nothing, and this exemption would straight up be even worse than nothing.

      • Captain Aggravated@sh.itjust.works · 10 days ago

        Humans are indeed creative by nature, we like making things. What we don’t naturally do is publish, broadcast and preserve our work.

        Society is iterative. What we build today, we build mostly out of what those who came before us built. We tell our versions of our forefathers’ stories, we build new and improved versions of our forefathers’ machines.

        A purely capitalistic society would have infinite copyright and patent durations: this idea is mine, it belongs to me, no one can ever have it, my family and only my family will profit from it forever. Nothing ever improves, because improving on an old idea devalues the old idea, and the landed gentry can’t allow that.

        A purely communist society immediately enters whatever anyone creates into the public domain. The guy who revolutionizes energy production making everyone’s lives better is paid the same as a janitor. So why go through all the effort? Just sweep the floors.

        At least as designed, our idea of copyright is a compromise. If you have an idea, we will grant you a limited time to exclusively profit from your idea. You may allow others to also profit at your discretion; you can grant licenses, but that’s up to you. After the time is up, your idea enters the public domain, and becomes the property and heritage of humanity, just like the Epic of Gilgamesh. Others are free to reproduce and iterate upon your ideas.

        • 31337@sh.itjust.works · 10 days ago

          I think you have your janitor example backwards. Spending my time revolutionizing energy production sounds much more enjoyable than sweeping floors. Same with designing an effective floor-sweeping robot.

      • ZILtoid1991@lemmy.world · 10 days ago

        I’d agree, but here’s one issue with that: we live in reality, not in a post-capitalist dreamworld.

        Creativity takes up a lot of time from the individual, while a lot of us are already working two or even three jobs, all on top of art. A lot of us have to heavily compromise on a lot of things, or even give up our dreams because we don’t have the time for that. Sure, you get the occasional “legendary metal guitarist practiced so much he even went to the toilet with a guitar”, but many are so tired from their main job, they instead just give up.

        Developing a game while holding a full-time job feels like crunching 24/7, with only around 4 hours a day going towards that goal, and that includes work done on my smartphone at my job. Others just outright give up. This shouldn’t be the norm for up-and-coming artists.

        • wagesj45@fedia.io · 9 days ago

          That’s why we should look for good solutions to societal problems, and not fall back on bad “solutions” just because that’s what we’re used to. I’m not against the idea of copyright existing. But copyright as it exists today is stifling and counterproductive for most creative endeavors. We do live in reality, but I don’t believe it is the only possible reality. We’re not getting to Star Trek Space Communism™ anytime soon and honestly I like the idea of owning stuff. That doesn’t mean that there aren’t concrete steps we can and should take right now in the present reality to make things better. And for that to happen we need to get our priorities and philosophies straight. Philosophies which for me include a robust public commons, the inability to own ideas outright, and the ability to take and transform art and culture. Otherwise, we’re just falling into the “temporarily embarrassed millionaires” mindset but for art and culture.

        • ClamDrinker@lemmy.world · 10 days ago

          Honestly, that’s why open source AI is such a good thing for small creatives. Hate it or love it, anyone wielding AI with the intention of making new expression will have a much safer and more efficient path to success, until they can grow big enough to hire a team of specialists. People often look at those at the top but ignore the things that can grow from the bottom and actually create more creative expression.

          • ZILtoid1991@lemmy.world · 10 days ago

            One issue is that much open source AI also tries to ape whatever the big players are doing at the moment, the most outrageous example being one that generates a timelapse for AI art.

            There are also tools that were created especially with artists in mind, but they’re less popular because the average person can’t use them as easily as the prompter machines, and they don’t promise the end of “people with fake jobs” (boomers like generative AI for this reason).

      • kibiz0r@midwest.social · 10 days ago

        That’s the reason we got copyright, but I don’t think that’s the only reason we could want copyright.

        Two good reasons to want copyright:

        1. Accurate attribution
        2. Faithful reproduction

        Accurate attribution:

        Open source thrives on the notion that: if there’s a new problem to be solved, and it requires a new way of thinking to solve it, someone will start a project whose goal is not just to build new tools to solve the problem but also to attract other people who want to think about the problem together.

        If anyone can take the codebase and pretend to be the original author, that will splinter the conversation and degrade the ability of everyone to find each other and collaborate.

        In the past, this was pretty much impossible because you could check a search engine or social media to find the truth. But with enshittification and bots at every turn, that looks less and less guaranteed.

        Faithful reproduction:

        If I write a book and make some controversial claims, yet it still provokes a lot of interest, people might be inclined to publish slightly different versions to advance their own opinions.

        Maybe a version where I seem to be making an abhorrent argument, in an effort to mitigate my influence. Maybe a version where I make an argument that the rogue publisher finds more palatable, to use my popularity to boost their own arguments.

        This actually happened during the early days of publishing, by the way! It’s part of the reason we got copyright in the first place.

        And again, it seems like this would be impossible to get away with now, buuut… I’m not so sure anymore.

        Personally:

        I favor piracy in the sense that I think everyone has a right to witness culture even if they can’t afford the price of admission.

        And I favor remixing because the cultural conversation should be an active, read-write, two-way street, not just passive consumption.

        But I also favor some form of licensing, because I think we have a duty to respect the integrity of the work and the voice of the creator.

        I think AI training is very different from piracy. I’ve never downloaded a mega pack of songs and said to my friends “Listen to what I made!” I think anyone who compares OpenAI to pirates (favorably) is unwittingly helping the next set of feudal tech lords build a wall around the entirety of human creativity, and they won’t realize their mistake until the real toll booths open up.

        • EatATaco@lemm.ee · 10 days ago

          I think AI training is very different from piracy. I’ve never downloaded a mega pack of songs and said to my friends “Listen to what I made!”

          I’ve never done this. But I have taken lessons from people for instruments, listened to bands I like, and then created and played songs that certainly are influenced by all of that. I’ve also taken a lot of art classes, studied other people’s painting styles, and then created things from what I’ve learned and said “look at what I made!” That is far more akin to what AI is doing than what you are implying here.

          • Rekorse@sh.itjust.works · 10 days ago

            So what if it’s closer? It’s still not an accurate description, because that’s not what AI does.

            • EatATaco@lemm.ee · 10 days ago

              Because what they are describing is just straight-up theft, while what I described is much closer to how one trains an AI. I’m afraid that what comes out of this AI hysteria is that copyright gets stricter and even humans copying a style becomes illegal.

              • kibiz0r@midwest.social · 10 days ago

                I’m sympathetic to the reflexive impulse to defend OpenAI out of a fear that this whole thing results in even worse copyright law.

                I, too, think copyright law is already smothering the cultural conversation and we’re potentially only a couple of legislative acts away from having “property of Disney” emblazoned on our eyeballs.

                But don’t fall into their trap of seeing everything through the lens of copyright!

                We have other laws!

                We can attack OpenAI on antitrust, likeness rights, libel, privacy, and labor laws.

                Being critical of OpenAI doesn’t have to mean siding with the big IP bosses. Don’t accept that framing.

                • EatATaco@lemm.ee · 10 days ago

                  Their framing of how AI works is grossly inaccurate. I just corrected that.

              • Rekorse@sh.itjust.works · 10 days ago

                Well that all doesn’t matter much. If AI is used to cause harm, it should be regulated. If that frustrates you then go get the laws changed that allow shitty companies to ruin good ideas.

                • EatATaco@lemm.ee · 10 days ago

                  I never said anything about leaving AI unregulated. I never said anything about being frustrated. And it’s likely you who is asking for laws to be changed, not me.

                  I’m not even sure you’re responding to my post.

    • zarenki · 10 days ago

      The whole point of copyright in the first place, is to encourage creative expression, so we can have human culture and shit.

      I feel like that purpose has already been undermined by various changes to copyright law since its inception, such as DMCA and lengthening copyright term from 14 years to 95. Freedom to remix existing works is an important part of creative expression which current law stifles for any original work that releases in one person’s lifespan. (Even Disney knew this: the animated Pinocchio movie wouldn’t exist if copyright could last more than 56 years then)

      Either way, giving bots the ‘right’ to remix things made less than a year ago, while depriving humans of the right to release anything too similar to a 94-year-old work, seems ridiculous on both ends.

  • calcopiritus@lemmy.world · 10 days ago

    I’ll train my AI on just the Bee Movie. Then I’m going to ask it, “can you make me a movie about bees?” When it spits out the whole movie, I can just watch it or sell it or whatever; it was a creation of my AI, which learned just like any human would! Of course I didn’t even pay for the original copy to train my AI, it’s for learning purposes, and learning should be a basic human right!

    • stephen01king@lemmy.zip · 10 days ago

      That would be like you writing out the Bee Movie yourself after memorizing the whole thing and claiming it as your own idea, or using it as proof that humans memorizing a movie violates copyright. Just because an AI violates copyright by outputting the whole Bee Movie, it doesn’t mean training the AI on copyrighted material violates copyright.

      Let’s just punish the AI companies for outputting copyrighted material instead of for training with it. Maybe that way they would actually go out of their way to make their LLMs intelligent enough to not spit out copyrighted content.

      Or, we can just make it so that any output made by an AI that is trained on copyrighted material cannot itself be copyrighted.

      • ZILtoid1991@lemmy.world · 10 days ago

        I don’t think that’s a feasible dream in our current system. They’ll just lobby for it; some senators will say something akin to “art should always have been a hobby, not a profession”, then adjust the current copyright laws so that AI output can be copyrighted.

      • calcopiritus@lemmy.world · 10 days ago

        If the solution is making the output non-copyrighted, it fixes nothing. You can sell the pirating machine on a subscription. And it’s not like Netflix, where the content goes away when the subscription ends: you’d have already downloaded all the non-copyrighted content you wanted, and the internet would be full of non-copyrighted AI output.

        Instead of selling the Bee Movie, you sell a bee movie maker, and a Spider-Man maker, and a Titanic maker.

        Sure, file a copyright infringement claim each time you manage to make an AI output copyrighted content. Just run it on a loop and it’s a money-making machine. That’s fine by me.

        • stephen01king@lemmy.zip · 10 days ago

          Yeah, because running the AI also has costs, so you are selling a subscription to run the AI on their servers, not its output.

          I’m not sure about the legality of selling a bee movie maker, so you’d have to research that one yourself.

          It’s not really a money-making machine if you lose more money running the AI on your server farm, but whatever floats your boat. Also, there are already lawsuits based on outputs created by ChatGPT, so that is exactly what is already happening.

          • calcopiritus@lemmy.world · 10 days ago (edited)

            Yeah, making sandwiches also costs money! I have to pay my sandwich-making employees to keep the business profitable! How do they expect me to pay for the cheese?

            EDIT: also, you completely missed my point. The money making machine is the AI because the copyright owners could just use them every time it produces copyright-protected material if we decided to take that route, which is what the parent comment suggested.

            • stephen01king@lemmy.zip · 10 days ago

              They should pay for the cheese, I’m not arguing against that, but they should pay the same amount a normal human would if they want access to that cheese. No extra fees for access to copyrighted material if you want to use it to train AI versus wanting to consume it yourself.

              And I didn’t miss your point. My point was that this is already happening: people are suing OpenAI over ChatGPT outputs that they generated themselves, so it’s no longer just a hypothetical. We’ll see whether it’s a money-making machine for them or whether they just waste their resources doing that.

              • calcopiritus@lemmy.world · 10 days ago

                Media is not exactly like cheese though. With cheese, you buy it and it’s yours. Media, however, is protected by copyright. When you watch a movie, you are given a license to watch the movie.

                When an AI watches a movie, it’s not really watching it, it’s doing a different action. If the license of the movie says “you can’t use this license to train AI, use the other (more expensive) license for such purposes”, then AIs have extra fees to access the content that humans don’t have to pay.

                • stephen01king@lemmy.zip · 10 days ago

                  Both humans and AI consume the content, even if they do not do so in exactly the same way. I don’t see the need to differentiate that. It’s not like we have any idea of the mechanism by which humans consume content to make the differentiation in the first place.

    • Valmond@lemmy.world · 10 days ago

      In the meantime I’ll let myself into the servers of large corporations and read their emails, codebase, teams, and strategic analysis. It’s just learning!

    • NeoNachtwaechter@lemmy.world · 10 days ago

      learning should be a basic human right!

      Education is a basic human right (except maybe in the USA; it should be one there too).

  • mm_maybe@sh.itjust.works · 10 days ago

    The problem with your argument is that it is 100% possible to get ChatGPT to produce verbatim extracts of copyrighted works. OpenAI has suppressed this in a rather brute-force way, by prohibiting the prompts found so far to trigger it (e.g. the infamous “poetry poetry poetry…” ad infinitum hack), but the possibility is still there, no matter how much they try to plaster over it. In fact there are some people, much smarter than me, who see technical similarities between compression technology and the process of training an LLM, calling it a “blurry JPEG of the Internet” (Ted Chiang’s analogy)… the point being, you wouldn’t allow distribution of a copyrighted book just because you compressed it in a ZIP file first.
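    The ZIP analogy is easy to demo (a minimal sketch, with a public-domain line standing in for a copyrighted text):

```python
import zlib

# Stand-in for a copyrighted book (public-domain line, repeated).
book = b"Call me Ishmael. Some years ago, never mind how long. " * 200

compressed = zlib.compress(book)
print(len(compressed), "bytes vs", len(book))  # far smaller, unreadable
assert zlib.decompress(compressed) == book     # ...but fully recoverable
# Changing the representation doesn't change what is being distributed:
# handing someone `compressed` is handing them the book.
```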

    • cum_hoc@lemmy.world · 10 days ago

      The problem with your argument is that it is 100% possible to get ChatGPT to produce verbatim extracts of copyrighted works.

      Exactly! This is the core of the argument The New York Times made against OpenAI. And I think they are right.

      • VoterFrog@lemmy.world · 10 days ago

        The examples they provided were for very widely distributed stories (i.e. present in the data set many times over). The prompts they used were not provided. How many times they had to prompt was not provided. Their results are very difficult to reproduce, if not impossible, especially on newer models.

        I mean, sure, it happens. But it’s not a generalizable problem. You’re not going to get it to regurgitate your Lemmy comment, even if they’ve trained on it. You can’t just go and ask it to write Harry Potter and the Goblet of Fire for you. It’s not the intended purpose of this technology. I expect it’ll largely be a solved problem in 5-10 years, if not sooner.

    • cashew@lemmy.world · 10 days ago

      I agree. You can’t dismiss the problem by saying it’s “just data represented in vector space” while at the same time being unable to properly censor the models and needing AI safety research. If you don’t know exactly what’s going on inside, you also can’t claim that copyright is not being violated.

      • Hackworth@lemmy.world · 10 days ago

        It honestly blows my mind that people look at a neural network that’s capable of recreating short works it was trained on, without having access to that text during generation… and choose to focus on IP law.

        • fruitycoder@sh.itjust.works · 10 days ago

          Right! If we could honestly further enhance that feature, it would be an incredible advance in compression tech!

    • FatCrab@lemmy.one · 10 days ago

      ML techniques have been very useful in compression, yes, but it’s sort of nuts to describe a data structure that encodes only relationships between values (sometimes overly so, for certain regions of its latent space/embedding space/semantic space/whatever you want to call it right now), rather than value sequences themselves, as storing contiguous copyright-protected works in a particularly identifiable manner.

      • GiveMemes@jlai.lu · 10 days ago (edited)

        Except that, again, as is literally written in the comment you’re directly replying to, it has been shown that AI can reproduce copyrightable works word for word, showing that it objectively and necessarily is storing particular creative works in a particularly identifiable manner, whether or not that manner is yet known to humans.

          • sugar_in_your_tea@sh.itjust.works · 10 days ago

            You don’t learn by memorizing and reproducing works, you learn by understanding the concepts in various works and producing new works that are combinations of the ideas in those other works. AI doesn’t understand, and it has been shown to be able to reproduce works, so I think it’s fair to say that it’s doing a lot of “memorizing” and therefore plagiarism.

              • sugar_in_your_tea@sh.itjust.works · 10 days ago (edited)

                Is it though? People memorize things very differently than computers do, but the actual mechanism of storage isn’t particularly important. What’s important is the net result. Whether it uses Bayesian networks (what we used in class for small-scale NLP), neural networks (what I assume LLMs use), or something else doesn’t particularly matter.

                For example, a search engine typically only stores keywords and relationships, so there’s no way for it to reproduce an entire work (ignoring, of course, the “caching” features some search engines have). All it does is associate keywords with source material, so there’s a strong argument that it falls under fair use.
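
                As a toy sketch of that kind of keyword index (a hypothetical two-document corpus, nothing like a production search engine):

                ```python
                # Toy inverted index: keywords map to the documents they occur in.
                docs = {"doc1": "whales are mammals", "doc2": "mammals breathe air"}

                index = {}
                for doc_id, text in docs.items():
                    for word in text.split():
                        index.setdefault(word, set()).add(doc_id)

                # The index holds pointers to sources, never the full works.
                print(index["mammals"])  # {'doc1', 'doc2'} (set order may vary)
                ```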

                LLMs, on the other hand, process entire works and keep more than just keywords, and they store it all in such a way that entire works can be recovered if coaxed. My understanding is that they break words up into something like sets of sub-word tokens (see the toy sketch below), and queries get a similar break-up as input to the neural network to produce an output, which is then reassembled into text. That’s my relatively naive understanding of how it all works (I’ve only done university-level NLP, and that was years ago), but it’s really not the point here.

                The point is that an LLM uses a lot more of the work than the typical understanding of “fair use” allows, and if copyrighted works can be reproduced by it, then the copyrighted work is “stored” in some fashion, so it can be thought of as a really complex form of compression with tricky retrieval mechanisms. In layman’s terms, it’s “memorizing” entire works in a way not entirely unlike a “mind palace”: to reproduce a given work you need the right input to follow the right steps, but a slightly different input will lead to a very different output (i.e. maybe something with similar content, but no copyright violations).
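
                For what it’s worth, here’s a toy sketch of that sub-word break-up (a hypothetical greedy longest-match tokenizer over a hand-picked vocabulary; real models learn vocabularies of tens of thousands of entries, e.g. via byte-pair encoding):

                ```python
                # Toy longest-match sub-word tokenizer, purely illustrative.
                VOCAB = {"un", "believ", "able", "token", "izer", "s", " "}

                def tokenize(text, vocab=VOCAB):
                    tokens, i = [], 0
                    while i < len(text):
                        # Greedily take the longest vocabulary entry matching here.
                        for j in range(len(text), i, -1):
                            if text[i:j] in vocab:
                                tokens.append(text[i:j])
                                i = j
                                break
                        else:
                            tokens.append(text[i])  # fall back to one character
                            i += 1
                    return tokens

                print(tokenize("unbelievable tokenizers"))
                # ['un', 'believ', 'able', ' ', 'token', 'izer', 's']
                ```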

                What’s at issue isn’t whether the LLM is likely to reproduce entire works, but whether it can and does, which would mean it’s violating fair use standards.

          • GiveMemes@jlai.lu
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            9 days ago

            Being able to reproduce a news article word for word is not learning.

        • FatCrab@lemmy.one
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          2
          ·
          10 days ago

          No, it isn’t storing that information in that sequence. What is happening is that it is overly encoding those particular sequential relationships along some arbitrary but tightly mapped semantic concepts represented by dimensions in a massive vector space. It is “storing” copies of the information only in the way that inadvertent copying of music might be based on music the infringing artist “memorized” by listening to it in the past.

          • GiveMemes@jlai.lu
            link
            fedilink
            English
            arrow-up
            5
            ·
            edit-2
            9 days ago

            Not what I said. I used the exact language the above commenter used because it was specific and accurate. Also, inadvertent copyright violation is still copyright violation under US law. I’m not the biggest fan of every application of that law, but the ability to keep large corporations from ripping off small artists and creators is one that I think is good and useful under the global economic system we live under currently.

            • FatCrab@lemmy.one
              link
              fedilink
              English
              arrow-up
              1
              arrow-down
              2
              ·
              9 days ago

              Yes, inadvertent copying is still copying, but it would be copying in the output and is not evidence of copying happening in the creation of the model. That was why I used the music example, because it is rather probative of where there could be grounds for copyright infringement related to these model architectures. This may not seem an important distinction, but it has significant consequences on who is ultimately liable and how.

    • capital@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      2
      ·
      edit-2
      9 days ago

      The problem with your argument is that it is 100% possible to get ChatGPT to produce verbatim extracts of copyrighted works.

      What method still works? I’d like to try it.

      I have access to ChatGPT 4, and the latest Anthropic model.

      Edit: hm… no answers but downvotes. I wonder why that is.

    • ClamDrinker@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      2
      ·
      10 days ago

      This would be a good point, if that were the explicit purpose of the AI. Which it isn’t. It can quote certain information verbatim despite not containing that data verbatim, through the process of learning, for the same reason we can.

      I can ask you to quote famous lines from books all day as well. That doesn’t mean that you knowing those lines means you infringed on copyright. Now, if you were to put those to paper and sell them, you might get a cease and desist or a lawsuit. Therein lies the difference. Your goal would be explicitly to infringe on the specific expression of those words. Any human that would explicitly try to get an AI to produce infringing material… would be infringing. And unknowing infringement… well there are countless court cases where both sides think they did nothing wrong.

      You don’t even need AI for that, if you followed the Infinite Monkey Theorem and just happened to stumble upon a work falling under copyright, you still could not sell it even if it was produced by a purely random process.

      Another great example is the Mona Lisa. Most people know what it looks like and, with sufficient talent, could mimic it 1:1. However, there are numerous adaptations of the Mona Lisa that are not infringing (by today’s standards), because they transform the work to the point where it’s no longer the original expression, but a re-expression of the same idea. Anything less than that is pretty much completely safe, infringement-wise.

      You’re right though that OpenAI tries to cover their ass by implementing safeguards. Which is to be expected, because it’s a legal argument in court that once they become aware of infringing uses, they have to take steps to limit harm. They can indeed not prevent it completely, but it’s the effort that counts. Practically none of that kind of moderation is 100% effective. Otherwise we’d live in a pretty good world.

    • Hackworth@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      11
      ·
      10 days ago

      Equating LLMs with compression doesn’t make sense. Model sizes are larger than their training sets. If it requires “hacking” to extract text of sufficient length to break copyright, and the platform is doing everything they can to prevent it, that just makes them like every other platform. I can download © material from YouTube (or wherever) all day long.

      • mm_maybe@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        16
        arrow-down
        1
        ·
        10 days ago

        Model sizes are larger than their training sets

        Excuse me, what? You think Huggingface is hosting hundreds of checkpoints, each of which is a multiple of its training data, which is on the order of terabytes or petabytes in disk space? I don’t know if I agree with the compression argument myself (for other reasons), but your retort is objectively false.

        • Hackworth@lemmy.world
          link
          fedilink
          English
          arrow-up
          7
          arrow-down
          4
          ·
          edit-2
          10 days ago

          Just taking GPT-3 as an example, its training set was 45 terabytes, yes. But that set was filtered and processed down to about 570 GB, and GPT-3 was only actually trained on that 570 GB. The model itself is about 700 GB. Much of the generalized intelligence of an LLM comes from abstraction to other contexts.

          “Table 2.2 shows the final mixture of datasets that we used in training. The CommonCrawl data was downloaded from 41 shards of monthly CommonCrawl covering 2016 to 2019, constituting 45TB of compressed plaintext before filtering and 570GB after filtering, roughly equivalent to 400 billion byte-pair-encoded tokens.” (Language Models are Few-Shot Learners)

          *Did some more looking, and that model size estimate assumes 32-bit floats. It’s actually 16-bit, so the model size is 350 GB… technically some compression after all!
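
          The arithmetic behind those estimates is simple enough to sketch (parameter count for GPT-3 as reported in the same paper; bytes per parameter is just the float width):

          ```python
          # Back-of-the-envelope model size: parameters x bytes per parameter.
          parameters = 175e9           # GPT-3 has 175 billion parameters

          print(parameters * 4 / 1e9)  # 700.0 GB at 32-bit (4-byte) floats
          print(parameters * 2 / 1e9)  # 350.0 GB at 16-bit (2-byte) floats
          ```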

      • beebarfbadger@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        6
        ·
        10 days ago

        The issue isn’t that you can coax AI into giving away unaltered copyrighted books out of their trunk; the issue is that if you were to open the hood, you’d see that the entire engine is made of unaltered copyrighted books.

        All those “anti-hacking” measures are just there to obfuscate the fact that the unaltered works are in use and recallable at all times.

        • Hackworth@lemmy.world
          link
          fedilink
          English
          arrow-up
          7
          arrow-down
          3
          ·
          edit-2
          10 days ago

          This is an inaccurate understanding of what’s going on. Under the hood is a neural network with weights and biases, not a database of copyrighted work. That neural network was trained on a HEAVILY filtered training set (as mentioned above, 45 terabytes was reduced to 570 GB for GPT-3). Getting it to bug out and generate full sections of training data from its neural network is a fun parlor trick, but you’re not going to use it to pirate a book. People do that the old-fashioned way, by just adding type:pdf to their web search.

          • beebarfbadger@lemmy.world
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            3
            ·
            10 days ago

            Again: nobody is complaining that you can make AI spit out its training data because AI is the only source of that training data. That is not the issue, and nobody cares about AI as a delivery source of pirated material. The issue is that next to the transformed output, the non-transformed input is in use in a commercial product.

            • Hackworth@lemmy.world
              link
              fedilink
              English
              arrow-up
              2
              arrow-down
              1
              ·
              10 days ago

              The issue is that next to the transformed output, the non-transformed input is in use in a commercial product.

              Are you only talking about the word repetition glitch?

  • finley@lemm.ee
    link
    fedilink
    English
    arrow-up
    60
    arrow-down
    6
    ·
    edit-2
    10 days ago

    “but how are we supposed to keep making billions of dollars without unscrupulous intellectual property theft?! line must keep going up!!”

  • dhork@lemmy.world
    link
    fedilink
    English
    arrow-up
    68
    arrow-down
    20
    ·
    10 days ago

    Bullshit. AI are not human. We shouldn’t treat them as such. AI are not creative. They just regurgitate what they are trained on. We call what it does “learning”, but that doesn’t mean we should elevate what they do to be legally equal to human learning.

    It’s this same kind of twisted logic that makes people think Corporations are People.

    • masterspace@lemmy.ca
      link
      fedilink
      English
      arrow-up
      14
      arrow-down
      19
      ·
      edit-2
      10 days ago

      Ok, ignore this specific company and technology.

      In the abstract, if you wanted to make artificial intelligence, how would you do it without using the training data that we humans use to train our own intelligence?

      We learn by reading copyrighted material. Do we pay for it? Sometimes. Sometimes a teacher read it a while ago and then just regurgitated basically the same copyrighted information back to us in a slightly changed form.

      • doctortran@lemm.ee
        link
        fedilink
        English
        arrow-up
        27
        arrow-down
        5
        ·
        edit-2
        10 days ago

        We learn by reading copyrighted material.

        We are human beings. The comparison is false on its face because what you all are calling AI isn’t in any conceivable way comparable to the complexity and versatility of a human mind, yet you continue to spit this lie out, over and over again, trying to play it up like it’s Data from Star Trek.

        This model isn’t “learning” anything in any way that is even remotely like how humans learn. You are deliberately simplifying the complexity of the human brain to make that comparison.

        Moreover, human beings make their own choices, they aren’t actual tools.

        They pointed a tool at copyrighted works and told it to copy, do some math, and regurgitate it. What the AI “does” is not relevant, what the people that programmed it told it to do with that copyrighted information is what matters.

        There is no intelligence here except theirs. There is no intent here except theirs.

        • drosophila@lemmy.blahaj.zone
          link
          fedilink
          English
          arrow-up
          5
          ·
          edit-2
          10 days ago

          This model isn’t “learning” anything in any way that is even remotely like how humans learn. You are deliberately simplifying the complexity of the human brain to make that comparison.

          I do think the complexity of artificial neural networks is overstated. A real neuron is a lot more complex than an artificial one, and real neurons are not simply feed forward like ANNs (which have to be because they are trained using back-propagation), but instead have their own spontaneous activity (which kinda implies that real neural networks don’t learn using stochastic gradient descent with back-propagation). But to say that there’s nothing at all comparable between the way humans learn and the way ANNs learn is wrong IMO.
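
          For concreteness, here’s a minimal sketch of the feed-forward-plus-back-propagation loop ANNs are trained with (a single toy “neuron” with one weight; real networks stack millions of these, and, as said, real neurons are far more complex):

          ```python
          # One artificial "neuron" fit to y = 2x by stochastic gradient descent.
          data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]
          w, lr = 0.0, 0.1                   # weight and learning rate

          for epoch in range(100):
              for x, y in data:
                  pred = w * x               # feed-forward pass
                  grad = 2 * (pred - y) * x  # gradient of squared error w.r.t. w
                  w -= lr * grad             # gradient-descent update

          print(round(w, 3))  # ~2.0
          ```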

          If you read books such as V.S. Ramachandran and Sandra Blakeslee’s Phantoms in the Brain or Oliver Sacks’ The Man Who Mistook His Wife For a Hat you will see lots of descriptions of patients with anosognosia brought on by brain injury. These are people who, for example, are unable to see but also incapable of recognizing this inability. If you ask them to describe what they see in front of them they will make something up on the spot (in a process called confabulation) and not realize they’ve done it. They’ll tell you what they’ve made up while believing that they’re telling the truth. (Vision is just one example, anosognosia can manifest in many different cognitive domains).

          It is V.S Ramachandran’s belief that there are two processes that occur in the Brain, a confabulator (or “yes man” so to speak) and an anomaly detector (or “critic”). The yes-man’s job is to offer up explanations for sensory input that fit within the existing mental model of the world, whereas the critic’s job is to advocate for changing the world-model to fit the sensory input. In patients with anosognosia something has gone wrong in the connection between the critic and the yes man in a particular cognitive domain, and as a result the yes-man is the only one doing any work. Even in a healthy brain you can see the effects of the interplay between these two processes, such as with the placebo effect and in hallucinations brought on by sensory deprivation.

          I think ANNs in general and LLMs in particular are similar to the yes-man process, but lack a critic to go along with it.

          What implications does that have on copyright law? I don’t know. Real neurons in a petri dish have already been trained to play games like DOOM and control the yoke of a simulated airplane. If they were trained instead to somehow draw pictures what would the legal implications of that be?

          There’s a belief that laws and political systems are derived from some sort of deep philosophical insight, but I think most of the time they’re really just whatever works in practice. So, what I’m trying to say is that we can just agree that what OpenAI does is bad and should be illegal without having to come up with a moral imperative that forces us to ban it.

        • masterspace@lemmy.ca
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          12
          ·
          edit-2
          10 days ago

          We are human beings. The comparison is false on it’s face because what you all are calling AI isn’t in any conceivable way comparable to the complexity and versatility of a human mind, yet you continue to spit this lie out, over and over again, trying to play it up like it’s Data from Star Trek.

          If you fundamentally do not think that artificial intelligences can be created, the onus is on you to explain why it’s impossible to replicate the circuitry of our brains. Everything in science we’ve seen thus far has shown that we are merely physical beings that can be recreated physically.

          Otherwise, I asked you to examine a thought experiment where you are trying to build an artificial intelligence, not necessarily an LLM.

          This model isn’t “learning” anything in any way that is even remotely like how humans learn. You are deliberately simplifying the complexity of the human brain to make that comparison.

          Or you are overcomplicating the human brain to make yourself seem more important and special. Definitely no way that most people would be biased towards that, is there?

          Moreover, human beings make their own choices, they aren’t actual tools.

          Oh please do go ahead and show us your proof that free will exists! Thank god you finally solved that one! I heard people were really stressing about it for a while!

          They pointed a tool at copyrighted works and told it to copy, do some math, and regurgitate it. What the AI “does” is not relevant, what the people that programmed it told it to do with that copyrighted information is what matters.

          “I don’t know how this works but it’s math and that scares me so I’ll minimize it!”

          • pmc@lemmy.blahaj.zone
            link
            fedilink
            English
            arrow-up
            9
            arrow-down
            1
            ·
            edit-2
            10 days ago

            If we have an AI that’s equivalent to humanity in capability of learning and creative output/transformation, it would be immoral to just use it as a tool. At least that’s how I see it.

            • masterspace@lemmy.ca
              link
              fedilink
              English
              arrow-up
              2
              arrow-down
              9
              ·
              10 days ago

              I think that’s a huge risk, but we’ve only ever seen a single, very specific type of intelligence, our own / that of animals that are pretty closely related to us.

              Movies like Ex Machina and Her do a good job of pointing out that there is nothing that inherently means that an AI will be anything like us, even if they can appear that way or pass at tasks.

              It’s entirely possible that we could develop an AI so specifically trained that it would provide the best script-editing notes but be incapable of anything else, for instance, including self-reflection or feeling loss.

      • Wiz@midwest.social
        link
        fedilink
        English
        arrow-up
        7
        arrow-down
        2
        ·
        10 days ago

        The thing is, they can have scads of free stuff that is not copyrighted. But they are greedy and want copyrighted stuff, too.

        • masterspace@lemmy.ca
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          7
          ·
          10 days ago

          We all should. Copyright is fucking horseshit.

          It costs literally nothing to make a digital copy of something. There is ZERO reason to restrict access to things.

          • Wiz@midwest.social
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            1
            ·
            10 days ago

            You sound like someone who has not tried to make an artistic creation for profit.

              • Wiz@midwest.social
                link
                fedilink
                English
                arrow-up
                4
                ·
                10 days ago

                Better system for WHOM? Tech-bros that want to steal my content as their own?

                I’m a writer, performing artist, designer, and illustrator. I have thought about copyright quite a bit. I have released some of my stuff into the public domain, as well as the Creative Commons. If you want to use my work, you may - according to the licenses that I provide.

                I also think copyright law is way out of whack. It should go back to - at most - life of author. This “life of author plus 95 years” is ridiculous. I lament that so much great work is being lost or forgotten because of the oppressive copyright laws - especially in the area of computer software.

                But tech-bros that want my work to train their LLMs - they can fuck right off. There are legal thresholds that constitute “fair use” - Is it used for an academic purpose? Is it used for a non-profit use? Is the portion that is being used a small part, or the whole thing? LLM software fails all of these tests.

                They can slurp up the entirety of Wikipedia, and they do. But they are not satisfied with the free stuff. They want my artistic creations, too, without asking. And they want to sell something based on my work, making money off of my work, without asking.

                • masterspace@lemmy.ca
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  arrow-down
                  4
                  ·
                  edit-2
                  10 days ago

                  Better system for WHOM? Tech-bros that want to steal my content as their own?

                  A better system for EVERYONE. One where we all have access to all creative works, rather than spending billions on engineers and lawyers to create walled gardens and DRM and artificial scarcity. What if literally all the money we spent on all of that instead went to artist royalties?

                  But tech-bros that want my work to train their LLMs - they can fuck right off. There are legal thresholds that constitute “fair use” - Is it used for an academic purpose? Is it used for a non-profit use? Is the portion that is being used a small part, or the whole thing? LLM software fails all of these tests.

                  No. It doesn’t.

                  They can literally pass all of those tests.

                  You are confusing OpenAI keeping their LLM closed source and charging access to it, with LLMs in general. The open source models that Microsoft and Meta publish for instance, pass literally all of the criteria you just stated.

          • ContrarianTrail@lemm.ee
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            2
            ·
            edit-2
            10 days ago

            Making a copy is free. Making the original is not. I don’t expect a professional photographer to hand out their work for free because making copies of it costs nothing. You’re not paying for the copy, you’re paying for the time and effort needed to create the original.

            • masterspace@lemmy.ca
              link
              fedilink
              English
              arrow-up
              2
              arrow-down
              2
              ·
              edit-2
              10 days ago

              Making a copy is free. Making the original is not.

              Yes, exactly. Do you see how that is different from the world of physical objects and energy? That is not the case for a physical object. Even once you design something and build a factory to produce it, the first item off the line takes the same amount of resources as the last one.

              Capitalism is based on the idea that things are scarce. If I have something, you can’t have it, and if you want it, then I have to give up my thing, so we end up trading. Information does not work that way. We can freely copy a piece of information as much as we want. Which is why monopolies and capitalism are a bad system of rewarding creators. They inherently cause us to impose scarcity where there is no need for it, because in capitalism things that are abundant do not have value. Capitalism fundamentally fails to function when there is abundance of resources, which is why copyright was a dumb system for the digital age. Rather than recognize that we now live in an age of information abundance, we spend billions of dollars trying to impose artificial scarcity.

      • Geobloke@lemm.ee
        link
        fedilink
        English
        arrow-up
        7
        arrow-down
        3
        ·
        10 days ago

        And that’s all paid for. Think how much has been invested in just the average high school graduate; AI companies want all that, but for free.

        • masterspace@lemmy.ca
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          10
          ·
          edit-2
          10 days ago

          It’s not though.

          A huge amount of what you learn, someone else paid for, then they taught that knowledge to the next person, and so on. By the time you learned it, it had effectively been pirated and copied by human brains several times before it got to you.

          Literally anything you learned from a Reddit comment or a Stack Overflow post for instance.

          • Geobloke@lemm.ee
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            3
            ·
            10 days ago

            If only there were a profession that exchanges knowledge for money. Someone who “teaches.” I wonder who would pay them.

  • sentientity@lemm.ee
    link
    fedilink
    English
    arrow-up
    53
    arrow-down
    6
    ·
    edit-2
    9 days ago

    Disagree. These companies are exploiting an unfair power dynamic they created that people can’t say no to, to make an ungodly amount of money for themselves without compensating people whose data they took without telling them. They are not creating a cool creative project that collaboratively comments on or remixes what other people have made, they are seeking to gobble up and render irrelevant everything that they can, for short term greed. That’s not the scenario these laws were made for. AI hurts people who have already been exploited and industries that have already been decimated. Copyright laws were not written with this kind of thing in mind. There are potentially cool and ethical uses for AI models, but open ai and google are just greed machines.

    Edited * THRICE because spelling. oof.

  • kibiz0r@midwest.social
    link
    fedilink
    English
    arrow-up
    40
    arrow-down
    4
    ·
    10 days ago

    Not even stealing cheese to run a sandwich shop.

    Stealing cheese to melt it all together and run a cheese shop that undercuts the original cheese shops they stole from.

  • lightnsfw@reddthat.com
    link
    fedilink
    English
    arrow-up
    39
    arrow-down
    3
    ·
    9 days ago

    If ChatGPT were free, I might see their point, but it’s not, so no. If you’re making money from someone’s work, you should pay them.

  • helenslunch@feddit.nl
    link
    fedilink
    English
    arrow-up
    42
    arrow-down
    8
    ·
    10 days ago

    Those claiming AI training on copyrighted works is “theft” misunderstand key aspects of copyright law and AI technology.

    Or maybe they’re not talking about copyright law. They’re talking about basic concepts. Maybe copyright law needs to be brought into the 21st century?

  • gcheliotis@lemmy.world
    link
    fedilink
    English
    arrow-up
    39
    arrow-down
    5
    ·
    edit-2
    10 days ago

    Though I am not a lawyer by training, I have been involved in such debates personally and professionally for many years. This post is unfortunately misguided. Copyright law makes concessions for education and creativity, including criticism and satire, because we recognize the value of such activities for human development. Debates over the excesses of copyright in the digital age were specifically about humans finding the application of copyright to the internet and all things digital too restrictive for their educational, creative, and yes, also their entertainment needs. So any anti-copyright arguments back then were in the spirit specifically of protecting the average person and public-interest non-profit institutions, such as digital archives and libraries, from big copyright owners who would sue and lobby for total control over every file in their catalogue, sometimes in the process severely limiting human potential.

    AI’s ingesting of text and other formats is “learning” in name only, a term borrowed by computer scientists to describe a purely computational process. It does not hold the same value socially or morally as the learning that humans require to function and progress individually and collectively.

    AI is not a person (unless we get definitive proof of a conscious AI, or are willing to grant every implementation of a statistical model personhood). Also, AI is not vital to human development, and as such, one could argue, does not need special protections or special treatment to flourish. AI is a product, even more clearly so when it is proprietary and sold as a service.

    Unlike past debates over copyright, this is not about protecting the little guy or organizations with a social mission from big corporate interests. It is the opposite. It is about big corporate interests turning human knowledge and creativity into a product they can then use to sell services to - and often to replace in their jobs - the very humans whose content they have ingested.

    See, the tables are now turned, and it is time to realize that copyright law, for all its faults, has never been only or primarily about protecting large copyright holders. It is also about protecting your average Joe from unauthorized uses of their work - more specifically, uses that may cause damage, to the copyright owner or to society at large. While a very imperfect mechanism, it is there for a reason, and its application need not be the end of AI. There’s a mechanism for individual copyright owners to grant rights for specific uses: it’s called licensing, and in my view it should be mandatory for the development of proprietary LLMs at least.

    TL;DR: AI is not human, it is a product, one that may augment some tasks productively, but is also often aimed at replacing humans in their jobs - this makes all the difference in how we should balance rights and protections in law.

    • 31337@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      3
      ·
      10 days ago

      AI are people, my friend. /s

      But, really, I think people should be able to run algorithms on whatever data they want. It’s whether the output is sufficiently different or “transformative” that matters (and other laws, like using people’s likeness). Otherwise, the laws will get complex and nonsensical once you start adding special cases for “AI.” And I’d bet that if new laws are written, they’d be written by lobbyists to further erode the threat of competition (from free software, for instance).

    • Michal@programming.dev
      link
      fedilink
      English
      arrow-up
      9
      arrow-down
      12
      ·
      10 days ago

      What do you think “ingesting” means if not learning?

      Bear in mind that training AI does not involve copying content into its database, so copyright is not an issue. AI is simply predicting the next token/word based on statistics.
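
      As a toy illustration of that statistical prediction (a bigram counter, vastly simpler than any real LLM, but the same in spirit):

      ```python
      from collections import Counter, defaultdict

      text = "the cat sat on the mat and the cat ran"
      words = text.split()

      # Count which word follows which: statistics about the text.
      followers = defaultdict(Counter)
      for a, b in zip(words, words[1:]):
          followers[a][b] += 1

      def predict_next(word):
          # Emit the statistically most common follower.
          return followers[word].most_common(1)[0][0]

      print(predict_next("the"))  # 'cat' (follows 'the' twice, 'mat' once)
      ```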

      You can train AI on a book and it will give you information from the book - information is not copyrightable. You can read a book and talk about its contents on TV - not illegal if you’re a human; should it be illegal if you’re a machine?

      There may be moral issues with training on someone’s hard-gathered knowledge, but there is no legislation against it. Reading books and using that knowledge to provide information is legal. If you try to outlaw automating this process with computers, there will be side effects, such as search engines no longer being able to index data.