• Cyborganism@lemmy.ca · +21 · 11 months ago

    You know… Instead of having AI create art while humans bust their asses at work, why not make AI do the work and let humans create art?

    • SpaceToast@mander.xyz · +4 · 11 months ago

      Because then people wouldn’t pay out the ass for small conveniences. Keeping people working as much as possible is the point.

    • auf · +2 / −5 · 11 months ago

      Because it can

  • albigu@lemmygrad.ml · +14 / −1 · 11 months ago

    I apologize for the inconvenience, but as an AI language model, I don’t have direct access to books or copyrighted materials like “The Bedwetter” by Sarah Silverman.

    Pack it up, guys!

    On a serious note, corporations abusing authors’ copyrighted work is on an entirely different level from civilian piracy, and I hope they get seriously shafted over it. Same goes for Bing and Bard. All of ChatGPT is built on dubious or outright illegal datasets, and there is no reason huge multinationals shouldn’t at least pay and inform the authors of those works. But in reality the blame will probably be shifted to the libraries.

  • Hot Saucerman · +10 · 11 months ago

    More detailed coverage from The Verge: https://www.theverge.com/2023/7/9/23788741/sarah-silverman-openai-meta-chatgpt-llama-copyright-infringement-chatbots-artificial-intelligence-ai

    The complaint lays out in steps why the plaintiffs believe the datasets have illicit origins — in a Meta paper detailing LLaMA, the company points to sources for its training datasets, one of which is called ThePile, which was assembled by a company called EleutherAI. ThePile, the complaint points out, was described in an EleutherAI paper as being put together from “a copy of the contents of the Bibliotik private tracker.” Bibliotik and the other “shadow libraries” listed, says the lawsuit, are “flagrantly illegal.”

    I used to have a Bibliotik account, and if this is true about ThePile, they very likely have at least the beginnings of a successful case.

  • Uriel-238@lemmy.fmhy.ml · +8 · 11 months ago

    I think this is going to raise some questions about fair use, since AI projects are absolutely derivative works that are sufficiently removed from the content they used. (There may be some argument that it’s also educational use.)

    This case may rekindle those questions, given that our current copyright-maximalist climate has been less interested in protecting fair use and more interested in enforcing copyright regardless of it.

  • Andreas@feddit.dk · +4 · 11 months ago

    Are they going to keep the lawsuit focused on OpenAI and Meta or turn it into yet another lawsuit against piracy?

      • Uriel-238@lemmy.fmhy.ml · +2 / −1 · 11 months ago

      Any lawsuit that rules in favor of copyright holders promotes piracy (as opposed to legalizing use of copyrighted material).

      The more draconian and extreme our copyright laws, the more there is a need for a piracy sector.

  • stephen · +1 · 11 months ago

    Who? Like, no sarcasm, who is she?

  • CaptainBasculin · +3 / −7 · 11 months ago

    It’s not illegal for a human to learn from the contents of a book, so why the fuck is it illegal for an AI?

      • Ace T'Ken@lemmy.ca · +8 / −1 · 11 months ago

      Because the thing referred to as AI (which is definitely not AI) is simply strip-mining the book to shit out “content.”

      It is not reading, understanding, or learning from the book. It is using it to sell services for its masters.

      An author should control their work. They should be able to decide for themselves whether or not they want to help big tech sell garbage to idiots.