Generative artificial intelligence (GenAI) company Anthropic has claimed to a US court that using copyrighted content in large language model (LLM) training data counts as “fair use”.

Under US law, “fair use” permits the limited use of copyrighted material without permission, for purposes such as criticism, news reporting, teaching, and research.

In October 2023, a host of music publishers including Concord, Universal Music Group and ABKCO initiated legal action against the Amazon- and Google-backed generative AI firm Anthropic, demanding potentially millions in damages for the allegedly “systematic and widespread infringement of their copyrighted song lyrics”.

  • Lvxferre@mander.xyz · 10 months ago

    Most things that I could talk about were already addressed by other users (especially @OttoVonNoob@lemmy.ca), so I’ll address a specific point: better models would skip this issue altogether.

    The current models are extremely inefficient in their use of training data. LLMs are a good example: Claude v2.1 was allegedly trained on hundreds of billions of words. In the meantime, a 4yo child is estimated to have heard somewhere between 13 million and 45 million words in their still-short life. That’s four orders of magnitude of difference, so even if someone claims that those bots are as smart as a 4yo*, they’re still chewing through the training data without using it efficiently.
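    As a rough back-of-the-envelope check of that “four orders of magnitude” claim, here’s a sketch in Python. The 400-billion figure is just a stand-in for “hundreds of billions”, not a confirmed number for Claude v2.1; the child word counts are the ones cited above.

    import math

    # Stand-in for "hundreds of billions of words" (assumed, not a
    # confirmed figure for Claude v2.1's training set):
    llm_words = 400e9
    # Estimated words a 4-year-old has heard, per the range cited above:
    child_words_low, child_words_high = 13e6, 45e6

    ratio = llm_words / child_words_high
    print(f"{ratio:,.0f}x -> ~{math.log10(ratio):.1f} orders of magnitude")
    # 8,889x -> ~3.9 orders of magnitude (even using the high estimate)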

    Once this is solved, the corpus size will get way, way smaller. Then it would be rather feasible to train those models without offending the precious desire for greed of the American media mafia, in a way that still fulfils the entitlement of the GAFAM mafia.

    *I seriously doubt that, but I can’t be arsed to argue this here - it’s a drop in a bucket.

    • Sonori@beehaw.org · 10 months ago

      The thing is, I’m not sure at all that it’s even physically possible for an LLM to be trained like a four-year-old; they learn in fundamentally different ways. Even very young children quickly learn by associating words with concepts and objects, not by forming a statistical model of how often one meaningless string of characters comes after every other meaningless string of characters.
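      To make that contrast concrete, here’s a minimal sketch (hypothetical, illustrative only) of the kind of “which string follows which” statistics being described. Real LLMs use transformers over subword tokens, but the training objective is still next-token prediction from observed frequencies.

      from collections import Counter, defaultdict

      def train_bigram(text):
          """Count how often each token follows each other token."""
          counts = defaultdict(Counter)
          tokens = text.split()
          for prev, nxt in zip(tokens, tokens[1:]):
              counts[prev][nxt] += 1
          return counts

      def predict_next(counts, token):
          """Return the most frequently observed follower of `token`."""
          followers = counts.get(token)
          return followers.most_common(1)[0][0] if followers else None

      model = train_bigram("the cat sat on the mat and the cat slept")
      print(predict_next(model, "the"))  # -> "cat" (seen twice after "the")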

      Similarly, when it comes to image classifiers, a child can often associate a word with a concept or object after a single example, without needing to be shown hundreds of thousands of examples before they can build a wide variety of pixel-value mappings based on statistical association.

      Moreover, a very large amount of the “progress” we’ve seen in the last few years has come only from simplifying the transformers and using ever larger datasets. For instance, GPT-4 is a big improvement on GPT-3, but about the only major difference between the two models is that they threw nearly the entire text of the internet at GPT-4, compared to GPT-3’s smaller dataset.

      • Lvxferre@mander.xyz · 10 months ago

        My point is that the current approach - statistical association - is so crude that it’ll probably get ditched in the near future anyway, with or without licensing matters. And those better models (which won’t be LLMs or diffusion-based) will probably skip this issue altogether.

        The comparison with 4yos is there mostly to highlight how crude it is. I also don’t think it’s viable to “train” models in the same way as we’d train a human being.