• Blapoo
    1 year ago

    I like to explain LLMs to people as “glorified autocompletes”. They’re just stringing words together in the most statistically likely order, based on their training data. They’re not “sentient” or “smart”, but they can still surprise our meat brains.

    In other words, it doesn’t “know” anything, but can still output a pattern that makes us go “Ooooooo it KNOWS”.
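    The “glorified autocomplete” intuition can be sketched with a toy bigram model (a drastic simplification of a real LLM, which predicts tokens with a neural network; the corpus and function names here are made up for illustration). It just counts which word most often follows which and greedily extends the prompt:

    ```python
    # Toy "autocomplete": count word-follows-word frequencies in a tiny
    # corpus, then greedily pick the most common continuation each step.
    # No understanding anywhere -- just statistics over the training text.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def autocomplete(word, length=4):
        """Greedily extend `word` with its most frequent continuation."""
        out = [word]
        for _ in range(length):
            if word not in follows:
                break
            word = follows[word].most_common(1)[0][0]
            out.append(word)
        return " ".join(out)

    print(autocomplete("the"))  # stitches together a plausible-looking phrase
    ```

    The output looks vaguely sensible, which is exactly the “Ooooooo it KNOWS” effect scaled way down.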

    Folks are getting better at training specific goals into their models. So the math that failed yesterday may work tomorrow and fail again the day after. These problems will be solved in time, and we’ll have a broader range of surprising output moments.

    I dunno, it just feels like a waste of an article for anyone in the know, and confusing for those not paying attention. “ChatGPT doesn’t have a soul!” Ya, duh…