• zephorah@lemm.ee
    4 months ago

    How much glue should you put in your pizza?

    Seriously. Eating Reddit means some truly strange shit is going to emerge. We all know the pattern of fun replies on Reddit, but the AI does not. Hence the glue thing.

    • ArbitraryValue@sh.itjust.works
      4 months ago

      I don’t think that’s an impossible problem. Existing models can reliably distinguish between, for example, different languages. Most of their training data is presumably in English, but while that may make them better at generating English text, it doesn’t make them randomly switch from other languages into English. A sufficiently advanced model would likewise distinguish between descriptions of reality and shit-posts, because the content of shit-posts would not be useful for predicting descriptions of reality. Some fine-tuning would then teach it to produce just the descriptions of reality.
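
      A rough sketch of what I mean, assuming Hugging Face’s transformers library and an off-the-shelf NLI model (the labels and the example sentence are my own illustration, not anything the labs have published):

          from transformers import pipeline

          # Zero-shot classification: the model ranks candidate labels for a text
          # without ever being trained on explicit "shitpost vs. sincere" examples.
          classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

          result = classifier(
              "You get that non-stick quality by mixing about 1/8 cup of glue into the sauce.",
              candidate_labels=["sincere cooking advice", "joke or shitpost"],
          )
          print(result["labels"][0], round(result["scores"][0], 2))

      The point isn’t that this particular model nails it every time, just that “is this a joke?” is the same kind of distinction models already make implicitly for languages.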

      Or look at it this way: the folks developing these LLMs aren’t ignorant of the fact that Reddit content is often false and meant to be funny. They’re not going to make the sort of silly mistake that someone who isn’t an expert can easily predict, and they’re not going to train their LLMs on that content if it makes the LLMs worse, although we’re still going to see some glue on pizza while the technology continues to develop.
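
      In practice that means filtering the scraped data before it ever reaches training. A toy sketch of the idea (the heuristic here is a placeholder I made up; real pipelines use trained quality classifiers, not keyword lists):

          # Keep only posts that look like sincere answers; drop obvious jokes
          # before the corpus is used for training or fine-tuning.
          def looks_sincere(post: str) -> bool:
              # Placeholder heuristic for illustration only.
              joke_markers = ("glue", "lol", "/s", "source: trust me")
              return not any(marker in post.lower() for marker in joke_markers)

          scraped_posts = [
              "Add a little cornstarch to the sauce so the cheese sets better.",
              "Just mix in 1/8 cup of glue, works every time lol",
          ]

          training_set = [post for post in scraped_posts if looks_sincere(post)]
          print(training_set)  # only the first post survives the filter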

      • SpacetimeMachine@lemmy.world
        4 months ago

        It can cross-check a language against tons of other words and examples of that language already in its data set. There is no such data for whether or not something conforms to reality. That simply doesn’t exist, and really won’t ever exist. They are not similar problems; one is immensely more challenging to solve than the other.