• ☆ Yσɠƚԋσʂ ☆OP
    3 months ago

    I don’t think this is going to be the end of AI either, and the corpus of data from before AI-generated content became prevalent is also huge. So, I don’t think there’s really a lack of training data. I personally find this more interesting from the perspective of how these algorithms work in general. The fact that they end up collapsing when consuming their own content seems to indicate that AI-generated content differs from human-generated content in some fundamental way.
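    The collapse dynamic can be illustrated with a toy sketch (my own illustration, not from the paper being discussed): repeatedly fit a Gaussian to samples drawn from the previous generation's fitted model, and the estimated spread drifts toward zero over generations, analogous to a model progressively losing the tails of its original training distribution.

    ```python
    import math
    import random

    def fit_gaussian(samples):
        """Maximum-likelihood mean and standard deviation."""
        n = len(samples)
        mu = sum(samples) / n
        var = sum((x - mu) ** 2 for x in samples) / n
        return mu, math.sqrt(var)

    random.seed(0)

    # Generation 0: the "human" data distribution, N(0, 1).
    mu, sigma = 0.0, 1.0
    stds = [sigma]

    # Each generation trains only on samples from the previous model.
    for generation in range(300):
        data = [random.gauss(mu, sigma) for _ in range(25)]
        mu, sigma = fit_gaussian(data)
        stds.append(sigma)

    print(f"initial std: {stds[0]:.3f}")
    print(f"std after 300 generations: {stds[-1]:.6f}")
    ```

    The estimated standard deviation shrinks because each refit slightly underestimates the spread and discards rare samples, and those losses compound across generations; nothing ever reintroduces the tails.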

    • BountifulEggnog [she/her]@hexbear.net
      edited
      3 months ago

      Yea, that’s completely fair. I think AI models in general do have lots of interesting characteristics that are very different from humans’. I just see a lot of people drawing conclusions from papers like this that aren’t justified.

      • ☆ Yσɠƚԋσʂ ☆OP
        3 months ago

        Very much agree, and I find the hatred of generative AI largely misguided to begin with. It’s interesting technology with useful applications. Most of the problems associated with it ultimately trace back to capitalism, rather than to any inherent problem with LLMs themselves.