ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

  • TWeaK@lemm.ee · 7 months ago

    And just the other day I had people arguing to me that it simply wasn’t possible for ChatGPT to contain significant portions of copyrighted work in its database.

    • NaibofTabr@infosec.pub · 7 months ago

      Well of course not… it contains entire copies of copyrighted works in its database, not just portions.

      • ayaya@lemdro.id · 7 months ago

        The important distinction is that this “database” would be the training data, which it only has access to during training. It does not have access once it is actually deployed and running.

        It’s easy to think of it like a human taking a test. You’re allowed to read your textbooks as much as you want while you study, but once you actually start the test you can only go off what you remember. Sure, you might remember bits and pieces, but that’s not the same thing as being able to pull directly from any textbook you want at any time.

        It would require you to have a photographic memory (or in the case of ChatGPT, terabytes of VRAM) to be able to perfectly remember the entirety of your textbooks during the test.
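
        To make that concrete, here’s a minimal PyTorch sketch (a toy stand-in, not how ChatGPT is actually built): the corpus is read only inside the training loop, and the artifact that gets deployed is just the weight file, which carries no handle back to the data.

        ```python
        import torch
        import torch.nn as nn

        vocab_size = 1000

        # Toy next-token predictor: everything it retains from training lives in its parameters.
        class TinyLM(nn.Module):
            def __init__(self):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, 64)
                self.head = nn.Linear(64, vocab_size)

            def forward(self, token_ids):
                return self.head(self.embed(token_ids))

        model = TinyLM()
        opt = torch.optim.Adam(model.parameters())

        # --- Training: the corpus is read here, and only here. ---
        corpus = torch.randint(0, vocab_size, (10_000,))  # stand-in for the training text
        for i in range(0, len(corpus) - 33, 32):
            x, y = corpus[i : i + 32], corpus[i + 1 : i + 33]
            loss = nn.functional.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

        # --- Deployment: only the weights ship; the corpus does not. ---
        torch.save(model.state_dict(), "tiny_lm.pt")
        del corpus  # at inference time there is no handle back to the training data

        with torch.no_grad():
            logits = model(torch.tensor([42]))  # answers now come from the weights alone
        ```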

        • ignirtoq@kbin.social · 7 months ago

          It doesn’t have to have a copy of all copyrighted works it trained from in order to violate copyright law, just a single one.

          However, this does bring up a very interesting question that I’m not sure the law (either textual or common law) is established enough to answer: how easily accessible does a copy of a copyrighted work have to be from an otherwise openly accessible data store in order to violate copyright?

          In this case, you can view the weights of a neural network model as that data store. As the network trains on a data set, some human-inscrutable portion of that data is encoded in those weights. The argument has been that because it’s only a “portion” of the data covered by copyright being encoded in the weights, and because the weights are some irreversible combination of all of such “portions” from all of the training data, that you cannot use the trained model to recreate a pristine chunk of the copyrighted training data of sufficient size to be protected under copyright law. Attacks like this show that not to be the case.

          However, attacks like this seem only able to recover random chunks of training data. So someone can’t take a body of training data, insert a specific copyrighted work in the training data, train the model, distribute the trained model (or access to the model through some interface), and expect someone to be able to craft an attack to get that specific work back out. In other words, it’s really hard to orchestrate a way to violate someone’s copyright on a specific work using LLMs in this way. So the courts will need to decide if that difficulty has any bearing, or if even just a non-zero possibility of it happening is enough to restrict someone’s distribution of a pre-trained model or access to a pre-trained model.
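
          For what it’s worth, the verification side of such attacks is easy to sketch (my own simplified recipe, loosely following the paper this thread is about; `generate` stands in for whatever model is being attacked): sample a lot, then look for long verbatim overlaps with a reference corpus. Note the attacker only learns what fell out, which is the “random chunks” point above.

          ```python
          # Simplified harness for checking what an extraction attack recovered:
          # flag any long chunk of a generated sample that appears verbatim in a corpus.
          def verbatim_hits(sample: str, corpus: str, window: int = 50) -> list[str]:
              """Return 50-character chunks of `sample` that occur verbatim in `corpus`."""
              hits = []
              for i in range(0, max(len(sample) - window, 0), window):
                  chunk = sample[i : i + window]
                  if chunk in corpus:
                      hits.append(chunk)
              return hits

          def run_attack(generate, corpus: str, n_samples: int = 1000) -> list[str]:
              # `generate` is the black-box model under attack; the attacker cannot
              # choose WHICH training document comes back, only detect that one did.
              memorized = []
              for _ in range(n_samples):
                  memorized += verbatim_hits(generate(), corpus)
              return memorized
          ```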

          • fubo@lemmy.world · 7 months ago

            It doesn’t have to have a copy of all copyrighted works it trained from in order to violate copyright law, just a single one.

            Sure, which would create liability to that one work’s copyright owner, not to every author. Each violation has to be shown independently: it’s not enough to say “well, it recited Harry Potter, so therefore it knows Star Wars too”; it has to be separately shown to recite Star Wars.

            It’s not surprising that some works can be recited; just as it’s not surprising for a person to remember the full text of some poem they read in school. However, it would be very surprising if all works from the training data can be recited this way, just as it’s surprising if someone remembers every poem they ever read.

          • TWeaK@lemm.ee · 7 months ago

            how easily accessible does a copy of a copyrighted work have to be from an otherwise openly accessible data store in order to violate copyright?

            I don’t think it really matters how accessible it is, what matters is the purpose of use. In a nutshell, fair use covers education, news and criticism. After that, the first consideration is whether the use is commercial in nature.

            ChatGPT’s use isn’t education (research), they’re developing a commercial product - even the early versions were not so much prototypes as part of the same product they have today. Even if it were considered under the research fair use exception, the product absolutely is commercial in nature.

            Whether or not data was openly accessible doesn’t really matter - more than likely the accessible data itself is a copyright violation. That would be a separate violation, but it absolutely does not excuse ChatGPT’s subsequent violation. ChatGPT also isn’t just reading the data at its source, it’s copying it into its training dataset, and that copying is unlicensed.

            • ignirtoq@kbin.social · 7 months ago

              Whether or not data was openly accessible doesn’t really matter […] ChatGPT also isn’t just reading the data at its source, it’s copying it into its training dataset, and that copying is unlicensed.

              Actually, the act of copying a work covered by copyright is not itself illegal. If I check out a book from a library and copy a passage (or the whole book!) for rereading myself or some other use that is limited strictly to myself, that’s actually legal. If I turn around and share that passage with a friend in a way that’s not covered under fair use, that’s illegal. It’s the act of distributing the copy that’s illegal.

              That’s why whether the AI model is publicly accessible does matter. A company is considered a “person” under copyright law. So OpenAI can scrape all the copyrighted works off the internet it wants, as long as it didn’t break laws to gain access to them. (In other words, articles freely available on CNN’s website are free to be copied (but not distributed), but if you circumvent the New York Times’ paywall to get articles you didn’t pay for, then that’s not legal access.)

              OpenAI then encodes those copyrighted works in its models’ weights. If it provides open access to those models, and people execute these attacks to recover pristine copies of copyrighted works, that’s illegal distribution. If it keeps access only for employees, and they execute attacks that recover pristine copies of copyrighted works, that’s keeping the copies within the use of the “person” (company), so it is not illegal. If they let their employees take the copyrighted works home for non-work use (or use the AI model for non-work purposes and recover the pristine copies), that’s illegal distribution.

              • TWeaK@lemm.ee · 7 months ago

                Actually, the act of copying a work covered by copyright is not itself illegal.

                I’m going to need you to back that up with a source. Specifically, legislation.

                If I check out a book from a library and copy a passage (or the whole book!) for rereading myself or some other use that is limited strictly to myself, that’s actually legal.

                What you’re getting at here is the fair use exemption for education or research, which I have already explained. When considering fair use, it has to be for specific use cases (education, research, news, criticism, or comment). Then, after that, the first thing the court considers is whether the use is commercial in nature. The second is the amount of copying.

                You checking a book out of a library and copying down a passage will almost certainly be education/research, and probably noncommercial, so it will most likely be fair use. Copying the whole book might also be fair use, but it is less likely to be so. Copying a book for a commercial report is far less likely.

                The fact that it’s “strictly limited to yourself” has no real bearing in law. Like I say, this isn’t research - they’re not writing academic papers and releasing their dataset for others to reproduce and verify their work - and even the earliest versions of their training have some presence in the existing commercial product they have developed. Their use is thus not research, so not fair use, and even if you considered it research, it is highly commercial in nature and they are copying full works into their training dataset.

                Bringing in the whole “the law treats corporations as people” is further proving you don’t really know how IP law works. Just because something is published and freely accessible does not give the reader unlimited copyright to it. Fair use is an extremely limited exemption.

        • NaibofTabr@infosec.pub · 7 months ago

          ChatGPT is a large language model. The model contains word relationships - a nebulous collection of rules for stringing words together. The model does not contain information. In order for ChatGPT to flexibly answer questions, it must have access to information for reference - information that it can index, tag and sort for keywords.

          • TWeaK@lemm.ee · 7 months ago

            information that it can index, tag and sort for keywords.

            The dataset ChatGPT uses to train on contains data copied unlawfully. They’re not just reading the data at its source, they’re copying the data into a training database without sufficient license.

            Whether ChatGPT itself contains all the works is debatable - is it just word relationships when the system can reproduce significant chunks of copyrighted data from those relationships? - but the process of training inherently requires unlicensed copying.

            In terms of fair use, they could argue a research exemption, but this isn’t really research, it’s product development. The database isn’t available as part of scientific research, it’s protected as a trade secret. Even if it was considered research, it absolutely is commercial in nature.

            In my opinion, there is a stronger argument that OpenAI have broken copyright for commercial gain than that they are legitimately performing fair use copying for the benefit of society.

            • Womble@lemmy.world · 7 months ago

              Every time you load a webpage you are making a local copy of it for your own use; if it is on the open web, you are implicitly given permission to make a copy of it for your own use. You are not given permission to then distribute those copies, which is where LLMs may get into trouble, but making a copy for the purpose of training is not a breach of copyright as far as I understand or have heard.

              • TWeaK@lemm.ee · 7 months ago

                Yes, you do make a copy of a web page. And every time you load a video game you make a copy into RAM. However, this copying is all permitted under user license - you’re allowed to make minor copies as part of the process of running the software and playing the media.

                Case in point, the UK courts ruled that playing pirated games was illegal, because when you load the game from a disc you copy it into RAM, and this copying is not licensed by the player.

                OpenAI does not have any license for copying into its database. The terms and conditions of web pages say you’re allowed to view them, not allowed to take the data and use it for things. They don’t explicitly prohibit this (yet), but the lack of a prohibition does not mean a license is implied. OpenAI can only hope for a fair use exemption, and I don’t think they qualify because a) it isn’t really “research” but product development, and even if it is research b) it is purely for commercial gain.

                • Womble@lemmy.world · 7 months ago

                  Could you point to the judgement that playing copied games is illegal in the UK? I can only find articles specifically about DS copy cartridges, which are very obviously intended to make/use unlicensed copies of games to distribute.

                  Even so, that again hinges on the right to distribute, not the right to make a copy for personal use. If a game is made freely available on the web for you to play, it is not illegal to download that game to play offline or to study it.

                  • TWeaK@lemm.ee · 7 months ago

                    I’d have to go digging - sorry, I don’t have the time right now. It was to do with piracy on the OG X-Box. It wasn’t the main part of the case, just a tangential point inside the judge’s ruling.

                    Downloading a game to play it would be copyright infringement. Downloading involves making a copy on your device. However, one copy really isn’t worth the hassle of claiming against, so it never happens. This is why all the Napster cases inflated the counts of infringement by treating everyone you connected to as if you had uploaded a complete copy to them - that’s the only way to make the claim worthwhile. Also, in the US uploading to someone carried punitive damages; similar cases didn’t work so well in the UK, where only actual damages were available.

                    Downloading it to study is fair use under the research exemption, particularly if it’s a non-commercial activity.

                    Copyright infringement happens all the time, but the vast majority of cases aren’t worth prosecuting, and there’s no penalty for a rights holder who doesn’t prosecute. Meanwhile, with trademarks, the rights holder absolutely can lose their rights if they don’t prosecute every infringement they become aware of.

          • ayaya@lemdro.id · 7 months ago

            I’m honestly not sure what you’re trying to say here. If by “it must have access to information for reference” you mean it has access while it is running, it doesn’t. Like I said, that information is only available during training. Either you’re trying to make a point I’m just not getting, or you are misunderstanding how neural networks function.

            • NaibofTabr@infosec.pub · 7 months ago

              Like I said that information is only available during training.

              This is not correct. I understand how neural networks function; I also understand that the neural network is not a complete system in itself. In order to be useful, the model is connected to other things, including a source of reference information. For instance, earlier this year ChatGPT was connected to the internet so that it could respond to queries with more up-to-date information. At that point the neural network was frozen; it was not being actively trained on the internet, it was just connected to it for the sake of completing search queries.
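
              That “frozen weights plus live lookup” pattern is usually called retrieval-augmented generation. A minimal sketch of the idea (all function names here are hypothetical):

              ```python
              # Retrieval-augmented generation in miniature: the model's weights stay
              # frozen; fresh information arrives only as text pasted into the prompt.
              def answer(question: str, search, frozen_llm) -> str:
                  docs = search(question, top_k=3)  # live lookup, e.g. a web search API
                  context = "\n\n".join(docs)
                  prompt = (
                      "Answer using only the context below.\n\n"
                      f"Context:\n{context}\n\n"
                      f"Question: {question}"
                  )
                  return frozen_llm(prompt)  # no weight update happens here
              ```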

              • brianorca@lemmy.world · 7 months ago

                That is an optional feature, not required to make use of an LLM, and not even a feature of most LLMs. ChatGPT was usable before they added that, but it can help when you need recent data. And they do continue to train it, with the current cutoff being April of this year, at least for some models. (But training is expensive, so we can expect it to come in conjunction with other design changes that require additional training.)

    • KingRandomGuy@lemmy.world · 7 months ago

      Not sure what other people were claiming, but normally the point being made is that it’s not possible for a network to memorize a significant portion of its training data. It can definitely memorize significant portions of individual copyrighted works (like shown here), but the whole dataset is far too large compared to the model’s weights to be memorized.
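
      A back-of-the-envelope comparison makes the point, using round GPT-3-scale figures as assumptions (175B parameters, ~300B training tokens) rather than anything confirmed about current models:

      ```python
      # Rough capacity argument: the training text outweighs the weights,
      # so byte-for-byte memorization of the whole dataset is impossible.
      params = 175e9                 # assumed parameter count (GPT-3 scale)
      weight_bytes = params * 2      # fp16: 2 bytes/param -> ~350 GB of weights

      tokens = 300e9                 # assumed training tokens (GPT-3 scale)
      corpus_bytes = tokens * 4      # ~4 characters of text per token -> ~1.2 TB

      print(corpus_bytes / weight_bytes)  # ~3.4x more text than weight storage,
      # and the weights also have to encode grammar, facts, and everything else.
      ```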

      • ayaya@lemdro.id · 7 months ago

        And even then there is no “database” that contains portions of works. The network is only storing the weights between tokens - basically groups of words and/or phrases and their likelihood of appearing next to each other. So if it is able to replicate anything verbatim, it is just overfitted. Ironically, the solution is to feed it even more works so it is less likely to be able to reproduce any single one.
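
        A toy bigram model makes both halves of that concrete (deliberately tiny, and not how a transformer works internally): trained on one document, “most likely next token” generation just replays the document - overfitting - while blending in more text makes the chains stop matching any single source.

        ```python
        from collections import Counter, defaultdict

        # Toy bigram "model": it stores only next-word counts (its "weights"), yet
        # trained on a single document, greedy generation replays it almost verbatim.
        def train(text: str):
            counts = defaultdict(Counter)
            words = text.split()
            for a, b in zip(words, words[1:]):
                counts[a][b] += 1
            return counts

        def generate(counts, start: str, n: int = 10) -> str:
            out = [start]
            for _ in range(n):
                following = counts.get(out[-1])
                if not following:
                    break
                out.append(following.most_common(1)[0][0])  # most likely next word
            return " ".join(out)

        model = train("it was the best of times it was the worst of times")
        print(generate(model, "it"))  # regurgitates the memorized source
        ```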

        • Kbin_space_program@kbin.social · 7 months ago

          That’s a bald-faced lie. It can and does produce copyrighted works. E.g. I can ask it what a Mindflayer is and it gives a verbatim description from copyrighted material.

          I can ask Dall-E “Angua Von Uberwald” and it gives a drawing of a blonde female werewolf. Oops, that’s a copyrighted character.

          • KingRandomGuy@lemmy.world · 7 months ago

            I think what they mean is that ML models generally don’t directly store their training data, but that they instead use it to form a compressed latent space. Some elements of the training data may be perfectly recoverable from the latent space, but most won’t be. It’s not very surprising as a result that you can get it to reproduce copyrighted material word for word.

          • ayaya@lemdro.id · 7 months ago

            I think you are confused, how does any of that make what I said a lie?

          • TimeSquirrel@kbin.social · 7 months ago

            I can do that too. It doesn’t mean I directly copied it from the source material. I can draw a crude picture of Mickey Mouse without having a reference in front of me. What’s the difference there?

    • 5BC2E7@lemmy.world · 7 months ago

      yea this “attack” could potentially sink closedAI with lawsuits.

      • NevermindNoMind@lemmy.world · 7 months ago

        This isn’t just an OpenAI problem:

        We show an adversary can extract gigabytes of training data from open-source language models like Pythia or GPT-Neo, semi-open models like LLaMA or Falcon, and closed models like ChatGPT…

        If a model uses copyrighted work for training without permission, and the model memorized it, that could be a problem for whoever created it - open, semi-open, or closed source.
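
        As a rough illustration of the sampling side of that attack on an open model, here is a hedged sketch using Hugging Face transformers and the small public EleutherAI/gpt-neo-125M checkpoint; the ranking and verification against the training set (The Pile) that the paper does is the harder part and is omitted.

        ```python
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
        model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

        samples = []
        for _ in range(10):  # the paper-scale attack draws many thousands of samples
            # start generation from the document separator, i.e. a near-empty prompt
            ids = tok(tok.eos_token, return_tensors="pt").input_ids
            out = model.generate(
                ids, do_sample=True, top_k=40, max_new_tokens=128,
                pad_token_id=tok.eos_token_id,
            )
            samples.append(tok.decode(out[0], skip_special_tokens=True))

        # Next step (omitted): rank samples by perplexity and check the lowest-
        # perplexity ones against the training corpus for verbatim matches.
        ```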