Greg Rutkowski, a digital artist known for his epic fantasy style, opposes AI art, but his name and style have been used frequently by AI art generators without his consent. In response, Stability AI removed his work from the training dataset for Stable Diffusion 2.0. The community, however, has now created a tool that emulates Rutkowski’s style against his wishes using a LoRA model. While some argue this is unethical, others justify it on the grounds that Rutkowski’s art was already widely used in Stable Diffusion 1.5. The debate highlights the blurry line between innovation and infringement in the emerging field of AI art.
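
For technical context: a LoRA (Low-Rank Adaptation) is a small set of add-on weights that steers an existing model toward a particular style without retraining it. The sketch below shows how such a checkpoint is typically applied with Hugging Face’s diffusers library; the base model ID is real, but the LoRA path and prompt are placeholders, not the actual tool discussed in the article.

```python
# Hypothetical sketch: applying a style LoRA on top of Stable Diffusion 1.5
# with the `diffusers` library. The LoRA path below is a placeholder.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # the base model mentioned above
    torch_dtype=torch.float16,
).to("cuda")

# LoRA weights are low-rank updates layered onto the model's existing
# weights, which is why a style can be bolted on without retraining.
pipe.load_lora_weights("path/to/style-lora")  # placeholder path

image = pipe("a castle on a cliff, dramatic lighting").images[0]
image.save("out.png")
```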

  • falsem@kbin.social

    If I look at someone’s paintings and then paint something in a similar style, did I steal their work? Or did I take inspiration from it?

    • Pulse@dormi.zone

      No, you used it to inform your style.

      You didn’t drop his art onto a screen printer, smash someone else’s art on top, and then try to sell t-shirts.

      Trying to compare any of this to how one individual human learns is a wildly inaccurate way to justify stealing someone else’s work product.

      • falsem@kbin.social

        If it works correctly, it’s not a screen printer; the output is something unique.

        • Pulse@dormi.zone

          The fact that folks can identify the source of various parts of the output, and that intact watermarks have shown up, shows that it doesn’t work like you think it does.

          • FaceDeer@kbin.social

            They can’t, and “intact” watermarks don’t show up. You’re the one who is misunderstanding how this works.

            When a pattern is present very frequently in the training data, the AI can learn to imitate it, resulting in output that closely resembles known watermarks. This is called “overfitting” and is avoided as much as possible. But even in those cases, if you examine the watermark-like pattern closely, you’ll see that it’s usually quite badly distorted and only vaguely watermark-like.
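
            To make “overfitting” concrete, here is a toy numpy sketch (my own illustration, not how a diffusion model is actually trained): when a pattern appears in most of the training data, even a “model” as crude as a per-pixel average picks it up, and positional jitter leaves it smeared rather than intact.

            ```python
            import numpy as np

            rng = np.random.default_rng(0)

            # Toy dataset: 1000 random 32x32 "images"; 90% of them carry
            # the same 8x8 "watermark" patch at a slightly jittered spot.
            images = rng.uniform(size=(1000, 32, 32))
            for img in images[:900]:
                dy, dx = rng.integers(0, 4, size=2)  # positional jitter
                img[20 + dy:28 + dy, 20 + dx:28 + dx] += 1.0

            # Averaging the training data already "learns" the frequent
            # pattern -- but blurred by the jitter, not copied intact.
            mean_image = images.mean(axis=0)
            print("background level:", round(mean_image[:8, :8].mean(), 2))        # ~0.5
            print("watermark region:", round(mean_image[23:27, 23:27].mean(), 2))  # ~1.4
            ```

            A real model replaces the average with a learned denoiser, but the intuition carries over: only patterns common across many images survive training, and they come out approximate.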

            • Pulse@dormi.zone

              Yes, because “imitate” and “copy” are different things when stealing from someone.

              I do understand how it works; the “overfitting” just lays bare what it does. It copies, but tries to sample things in a way that won’t look like obvious copies. It has no creativity; it is just finding new ways of making copies.

              If any of this were ethical, the companies doing it would have just asked for permission. That they didn’t says everything you need to know.

              I don’t usually have these kinds of discussions anymore. I got tired of conversations like this back in 2016, when it became clear that people will go to the ends of the earth to justify unethical behavior as long as the people being hurt by it are people they don’t care about.

              • FaceDeer@kbin.social

                And we’re back to you calling it “stealing”, which it certainly is not. Even if it was copyright violation, copyright violation is not stealing.

                You should try to get the basic terminology right, at the very least.

                • Pulse@dormi.zone

                  Just because you’ve redefined theft in a way that makes you feel okay about it doesn’t change what they did.

                  They took someone else’s work product, fed it into their machine, and then used that to make money.

                  They stole someone’s labor.

                  • FaceDeer@kbin.social

                    I haven’t “redefined” it, I’m using the legal definition. People do sometimes sloppily equate copyright violation with theft in common parlance, but they’re in for a rude awakening if they intend to try translating that into legal action.

                    Using that term in an argument like this is just begging the question of whether it’s wrong: since almost everyone agrees that stealing is wrong, you’re trying to cast the act of training an AI as something everyone will, by default, agree is wrong. But it’s not stealing, no matter how much you want it to be, and I’m calling that rhetorical trick out here.

                    If you want to argue that it’s wrong you need to argue against the actual process that’s happening, not some magical scenario where the AI trainers are somehow literally robbing people.

          • jarfil@beehaw.org

            Does that mean the AI is not smart enough to remove watermarks, or that it’s so smart it can reproduce them?

            • nickwitha_k (he/him)@lemmy.sdf.org

              LLMs and directly related technologies are not AI and possess no intelligence or capability to comprehend, despite the hype. So they are absolutely the former, though it’s more of a bandwagon effect (x number of reference images had a watermark, so the generated image “should” have one too).

              • jarfil@beehaw.org

                LLMs […] no intelligence or capability to comprehend

                That’s debatable. LLMs have shown emergent behaviors beyond what they were trained for, and they seem capable of comprehending relationships between all sorts of tokens, including multi-modal ones.

                Anyway, Stable Diffusion is not an LLM; it’s more of a “neural network hallucination machine” with some cool hallucinations that sometimes happen to be really close to some of the input data, or parts of it. It still needs to be “smart” enough to decompose the original data into enough of the right patterns that it can reconstruct part of the original from those patterns alone.
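
                As a loose illustration of “decompose into patterns, then reconstruct from the patterns alone”, here is a toy PCA sketch (a far simpler decomposition than anything Stable Diffusion learns, so treat it purely as an analogy):

                ```python
                import numpy as np

                rng = np.random.default_rng(1)
                # Toy "training set": 200 flattened 16x16 images.
                data = rng.uniform(size=(200, 256))

                # Decompose the data into shared patterns
                # (principal components via SVD)...
                mean = data.mean(axis=0)
                _, _, components = np.linalg.svd(data - mean, full_matrices=False)
                patterns = components[:64]  # keep the 64 strongest patterns

                # ...then rebuild one image from its pattern weights alone.
                # The reconstruction is only approximate, like the
                # "really close to parts of the input data" case above.
                weights = (data[0] - mean) @ patterns.T
                reconstruction = mean + weights @ patterns
                print("mean abs error:", np.abs(data[0] - reconstruction).mean())
                ```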

                • nickwitha_k (he/him)@lemmy.sdf.org

                  Thanks for the clarification!

                  LLMs have indeed shown interesting behaviors, but from my experience with the technology and how it works, I would say that any claim of intelligence possessed by a system that is only an LLM is suspect, and would require extraordinary evidence to prove it is not mistaken anthropomorphizing.

                  • jarfil@beehaw.org

                    I don’t think an LLM alone can be intelligent… but I do think it can be the central building block for a sentient self-aware intelligent system.

                    Humans can be thought of as a set of field-specific neural networks tied together by a looping, self-evaluating, multi-modal LLM that we call “consciousness”. The ability of an LLM to consume its own output is what allows it to serve as that consciousness loop, and the fact that current LLMs are trained on human language, with all its nuance, is an extra bonus.
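
                    A minimal sketch of that loop, purely hypothetical (`generate` is a stand-in for any LLM call, not a real API):

                    ```python
                    # Hypothetical "consciousness loop": the model's own
                    # output is fed back in as its next input.
                    def generate(prompt: str) -> str:
                        return f"refined({prompt})"  # placeholder for an LLM

                    def consciousness_loop(goal: str, steps: int = 3) -> str:
                        state = goal
                        for _ in range(steps):
                            state = generate(f"evaluate and improve: {state}")
                        return state

                    print(consciousness_loop("draft a reply"))
                    ```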

                    Other non-text multi-modal neural networks capable of consuming their own output could probably also be developed and put in a loop, but right now we have LLMs; we kind of understand most of what they’re saying, and they kind of understand most of what we’re saying, so that makes communication easier.

                    I mean, it is anthropomorphizing, but in this case I think it makes sense because it’s also anthropogenic, since these human language LLMs get trained on human language.

            • Swedneck@discuss.tchncs.de

              It’s like staring yourself blind at watermarked artworks until you start seeing blurry watermarks in your dreams.