Microsoft, OpenAI sued for copyright infringement by nonfiction book authors in class action claim

The new copyright infringement lawsuit against Microsoft and OpenAI comes a week after The New York Times filed a similar complaint in New York.

  • bassomitron@lemmy.world · 11 months ago

    I’m not a huge fan of Microsoft or even OpenAI by any means, but all these lawsuits just seem so… lazy and greedy?

    It isn’t like ChatGPT spews out the entirety of their works in a single chat. In that context, I fail to see how snippets of said work returned in a Google summary are any different from ChatGPT or any other LLM doing the same.

    Should OpenAI and other LLM creators use ethically sourced data in the future? Absolutely. They should’ve been doing so all along. But to me, rich chumps like George R. R. Martin complaining that their data was stolen and profited off of without their knowledge just feels a little ironic.

    Welcome to the rest of the 6+ billion people on the Internet who’ve been spied on, data mined, and profited off of by large corps for the last two decades. Where’s my goddamn check? Maybe regulators should’ve put tougher laws and regulations in place long ago to protect all of us against this sort of shit, not just the businesses and wealthy folk who can afford to launch civil suits on shaky grounds. It’s not like deep learning models are anything new.

    • CosmoNova@lemmy.world · 11 months ago

      I hear those kinds of arguments a lot, though usually from the exact same people who claimed nobody would be convicted of fraud for NFT and crypto scams when those were at their peak. The days of the wild west internet are long over.

      Theft in the digital space is a very real thing in the eyes of the law, especially when it comes to copyright infringement. It’s wild to me how many people seem to think Microsoft will just get a freebie here because they helped pioneer a new technology for personal gain. Copyright holders have a very real case here, and I’d argue even a strong one.

      Even using user data (which they legally own) for machine learning could get them into trouble in some parts of the developed world, because users 10 years ago couldn’t anticipate it would be used that way and so never gave full consent for it.

        • theneverfox@pawb.social · 11 months ago

          Personally, I think public info is fair game — consent or not, it’s public. They’re not sharing the source material, and the goal was never plagiarism. There was a period where the models became coherent enough to get very close to plagiarism, but they’ve been moving past that phase very quickly.

          Microsoft, especially with how they scraped private GitHub repos (and the things I’m sure Google and Facebook just haven’t gotten caught doing with private data), is way over the line for me. But I see that more as being bad stewards of private data: they shouldn’t be looking at it, their AI shouldn’t be looking at it, the public shouldn’t be able to see it, and they probably failed on all counts.

          Granted, I think copyright is a bullshit system. Normal people don’t get any protection, because you need to pay to play. Being unable to defend it means you lose it, and in most situations you’re going to spend way more on legal costs than you could possibly get back.

          I also think the most important thing is that this tech is spread everywhere, because we can’t have one group in charge of the miracle technology… It’s too powerful.

          Google has all the data they could need, they’ve bullied the web into submission… They don’t have to worry about copyright, they control the largest ad network and dominate search (at least for now).

          It sucks that you can take any artist’s visual work and fine-tune a network to churn out endless rough facsimiles in a few days. I genuinely get how violating that must feel.

          But they’re going to be screwed when the corporate work dries up in favor of a much cheaper option, and they’re going to have to deal with the flood of AI work… Copyright won’t help them; it’s too late for it to even slow things down.

          If companies did something wrong, have it out in court. My concern is that they’re going to pass laws that claim to be for the artists but effectively gatekeep AI to the tech giants.

    • grue@lemmy.world · 11 months ago

      If I want to be able to argue that having any copyleft stuff in the training dataset makes all the output copyleft – and I do – then I necessarily have to also side with the rich chumps as a matter of consistency. It’s not ideal, but it can’t be helped. ¯\_(ツ)_/¯

      • wewbull@feddit.uk · 11 months ago

        In your mind are the publishers the rich chumps, or Microsoft?

        For copyleft to work, copyright needs to be strong.

        • grue@lemmy.world · 11 months ago

          I was just repeating the language the parent commenter used (probably should’ve quoted it in retrospect). In this case, “rich chumps” are George R.R. Martin and other authors suing Microsoft.

    • FreeFacts@sopuli.xyz · 11 months ago

      “I fail to see how seeing snippets of said work returned in a Google summary is any different than ChatGPT or any other LLM doing the same.”

      Just because it was available on the public internet doesn’t mean it was there legally. Google has a way to remove it from its index when asked, while OpenAI seems to have no way to do so (or no will to do so).