A group of hackers that says it believes “AI-generated artwork is detrimental to the creative industry and should be discouraged” is compromising users of a popular interface for the AI image generation software Stable Diffusion via a malicious extension shared on Github.

ComfyUI is an extremely popular graphical user interface for Stable Diffusion that’s shared freely on Github, making it easier for users to generate images and modify their image generation models. ComfyUI_LLMVISION, the extension that was compromised to hack users, allowed users to integrate the large language models GPT-4 and Claude 3 into the same interface.

The ComfyUI_LLMVISION Github page is currently down, but a Wayback Machine archive of it from June 9 states that it was “COMPROMISED BY NULLBULGE GROUP.”

  • A_Very_Big_Fan@lemmy.world

Honestly I still don’t understand the “stealing” argument. Does the stealing occur during training? From everything I’ve learned about the technology, the training, in terms of the data given and the end result, isn’t any different from me scrolling through Google Images to get a concept of how to draw something. It’s not like they have a copy of the whole Internet on their servers to make it work.

Does it occur during the image generation? Because try as I might, I’ve never been able to get it to output copyrighted material. I know overfitting used to be an issue, but we figured out how to solve that a long time ago. “But the signatures!!” Yeah, it’s never output a recognizable/legible signature; it just associates signatures with art.

    Shouldn’t art theft be judged like any other copyright matter? It doesn’t matter how it was created, it matters if it violates fair use. I really don’t think training crosses that line, and I’ve yet to see these models output a copy of another image outside of image-to-image models.

    • retrospectology@lemmy.world

      It’s theft of labor without any compensation, aimed at cheapening the very value of that labor.

      A human artist can, and often does, train simply by looking at the real world. The art they then produce is a result of that knowledge being interpreted and stylized by their own brain and perception. The decision making on how to represent a given subject, what details to add and leave out to achieve an effect, is done by the artist themselves. It’s a product of their internal mental laboring.

By contrast, if you trained an AI on photos alone it would never, ever produce anything that looks like a drawing or a piece of art; it would never create a stylized piece or make a creative decision of its own.

In order to produce art, the AI must be fueled with human-created art that humans labored to produce. The human artists are not being compensated for the use of that labor, and even worse, the AI is leveraging it to make that human labor worth less. And what’s more, the AI’s ability will stagnate without further theft of newer, more novel art and concepts.

      Without that keystone of human labor the AI simply can’t function.

      Ripping off so many people at once and so chaotically that you can’t distinguish exactly how any given individual is being exploited doesn’t mean those people aren’t still being ripped off. The machine that the tech bros created could not exist without the stolen labor of the artists.