Greg Rutkowski, a digital artist known for his epic fantasy style, opposes AI art, but his name and style have been used frequently by AI art generators without his consent. In response, his work was removed from the training dataset for Stable Diffusion 2.0. However, the community has now created a tool to emulate Rutkowski’s style against his wishes using a LoRA model. Some argue this is unethical; others justify it on the grounds that Rutkowski’s art had already been widely used in Stable Diffusion 1.5. The debate highlights the blurry line between innovation and infringement in the emerging field of AI art.
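
For readers unfamiliar with the mechanics, a LoRA is a small file of low-rank weight adjustments that gets layered onto a full base model at load time. Below is a minimal sketch of that workflow using the Hugging Face diffusers library; the LoRA repository and file names are hypothetical placeholders rather than the actual community model, and the base checkpoint is just an illustrative choice.

    # Sketch: applying a community style LoRA on top of a Stable Diffusion 1.5
    # base model. The LoRA repo/file names below are hypothetical placeholders.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",   # illustrative SD 1.5 checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    # The LoRA file contains only small low-rank weight deltas; loading it
    # patches the base model's attention layers rather than adding any images.
    pipe.load_lora_weights("some-user/style-lora", weight_name="style.safetensors")

    image = pipe("epic fantasy landscape, dramatic lighting",
                 num_inference_steps=30).images[0]
    image.save("out.png")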

  • Rikudou_Sage@lemmings.world · +19 / -1 · 1 year ago

    That’s incorrect, in my opinion. AI learns patterns from its training data; so do humans, by the way. It’s not copy-pasting parts of images or code.
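
    In concrete terms, training adjusts a fixed set of numeric weights, and only those weights are kept afterwards. A toy sketch in PyTorch (random stand-in "images", illustrative sizes) shows that the saved artifact contains parameters, not the training pictures themselves:

    import torch
    import torch.nn as nn

    # Tiny stand-in classifier; real image models are far larger, but the idea is the same.
    model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):
        images = torch.rand(8, 3, 64, 64)        # stand-in training batch
        labels = torch.randint(0, 10, (8,))
        loss = loss_fn(model(images), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Only the learned parameters are serialized; the training batches are not
    # stored anywhere in the resulting file.
    torch.save(model.state_dict(), "model.pt")
    print(sum(p.numel() for p in model.parameters()), "parameters saved")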

    • MJBrune@beehaw.org · +13 · 1 year ago

      At the heart of copyright law is its intent. If an artist makes something, someone can’t just come along, copy it, and resell it. The intent is so that artists can make a living from their innovation.

      Training an AI on copyrighted images and then producing works derived from those images, in order to compete with them in the same style, breaks the intent of copyright law. Equally, it does not matter whether the resulting picture is original. If you take an artist’s picture and recreate it as pixel art, there have already been copyright infringement settlements in favor of the original artist, despite the original picture not being used directly, just studied. The pixel-art cover of Kind of Bloop, the chiptune tribute to Miles Davis’ Kind of Blue, is an example.

      • grue · +6 / -1 · 1 year ago

        You’re correct in your description of what a derivative work is, but this part is mistaken:

        The intent is so that artists can make a living from their innovation.

        The intent is “to promote the progress of science and the useful arts” so that, in the long run, the Public Domain is enriched with more works than would otherwise exist if no incentive were given. Allowing artists to make a living is nothing more than a means to that end.

        • MJBrune@beehaw.org · +6 · 1 year ago

          It promotes progress by giving people the ability to make the works. If they can’t make a living off of making the works, then they aren’t going to do it as a job. So yes, the intent is that artists can make a living off their work so that more artists are able to make art. It’s really that simple: the intent is that more people can do it. It’s not a means to an end, it’s the entire point. Otherwise, you’d only have hobbyists contributing.

          • whelmer@beehaw.org · +5 · 1 year ago

            I like what you’re saying, so I’m not trying to be argumentative, but to be clear, copyright protections don’t only apply to those who make a living from their productions. You are protected by them regardless of whether you intend to make any money off your work, and that protection is automatic. Just to expand upon what @grue was saying.

    • grue · +8 · 1 year ago

      By the same token, a human can easily be deemed to have infringed copyright even without cutting and pasting, if the result is excessively inspired by some other existing work.

    • Samus Crankpork@beehaw.org · +6 · 1 year ago

      AI doesn’t “learn” anything; it’s not even intelligent. If you show a human an artwork of a person, they’ll be able to recognize that they’re looking at a human, how the limbs and expression work, what the person is wearing, the materials, how gravity should affect it all, etc. AI doesn’t and can’t know any of that; it just predicts how things should look based on the images that have been put into its database. It’s a fancy Xerox.

      • Rikudou_Sage@lemmings.world · +3 / -1 · 1 year ago

        Why do people who have no idea how something works feel the urge to comment on how it works? It’s not just AI; it’s pretty much everything.

        AI does learn; that’s the whole shtick, and that’s why it’s so good at stuff computers used to suck at. “AI” is pretty much just a buzzword; the more accurate term is ML, which stands for Machine Learning - it’s even in the name.

        AI can also recognize that it’s looking at a human! It can recognize what they’re wearing and the material. AI is also better than humans at many, many things, and it sucks compared to humans at many others.

        No images are in its database, you fancy Xerox.
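
        For what it’s worth, the “recognizes” part is easy to try: a zero-shot classifier like OpenAI’s CLIP scores how well candidate text labels match an image, with no lookup in any per-image database. A minimal sketch via the Hugging Face transformers library follows; the image path and labels are just illustrative.

        from PIL import Image
        from transformers import CLIPModel, CLIPProcessor

        # Load a small public CLIP checkpoint.
        model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
        processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

        labels = ["a photo of a person", "a photo of a dog", "a landscape painting"]
        image = Image.open("example.jpg")   # any local image

        # The model scores image-text similarity; softmax turns scores into probabilities.
        inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
        probs = model(**inputs).logits_per_image.softmax(dim=1)[0]

        for label, p in zip(labels, probs.tolist()):
            print(f"{label}: {p:.2f}")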

        • Samus Crankpork@beehaw.org · +4 · 1 year ago

          And I wish that people who don’t understand the need for the human element in creative endeavours would focus their energy on automating things that should be automated, like busywork and dangerous jobs.

          If the prediction model actually “learned” anything, they wouldn’t have needed to add the artist’s work back after removing it. They had to, because it doesn’t learn anything; it copies the data it’s been fed.

          • Rikudou_Sage@lemmings.world · +2 · 1 year ago

            Just because you repeat the same thing over and over doesn’t make it true. You should be the one to learn before you talk. This conversation is over for me; I’m not paid to convince people who behave like children about how the things they’re scared of work.