I hear people saying things like “ChatGPT is basically just fancy predictive text”. I’m certainly not in the “it’s sentient!” camp, but it seems pretty obvious that a lot more is going on than just predicting the most likely next word.

Even if it’s predicting word by word within a bunch of constraints and structures inferred from the question/prompt, that’s pretty interesting. Tbh, I’m more impressed by ChatGPT’s ability to appear to “understand” my prompts than I am by the quality of the output. Even though its writing is generally a mix of bland, obvious and inaccurate, it mostly does provide a plausible response to whatever I’ve asked/said.

Anyone feel like providing an ELI5 explanation of how it works? Or any good links to articles / videos?

  • SorteKanin@feddit.dk · 10 months ago

    it seems pretty obvious that a lot more is going on than just predicting the most likely next word.

    But that is all that’s going on. It has just been trained on so much text that the predictions “learn” the grammatical structure of language. Once you can form coherent sentences, you’re not that far from ChatGPT.

    The remarkable thing is that prediction of the next word seems to be “sufficient” for ChatGPT’s level of “intelligence”. But it is not thinking or conscious, it is just data and statistics on steroids.
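To make “predicting the most likely next word” concrete, here is a deliberately tiny sketch of the idea: count which word follows which in a training text, then always emit the most frequent successor. A real LLM learns a neural probability distribution over tokens rather than raw counts, but the interface (context in, next-word probabilities out) is the same in spirit.

```python
from collections import Counter, defaultdict

# Toy "most likely next word" predictor built from raw counts.
corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1  # count each observed (word -> next word) pair

def predict_next(word):
    """Return the word most often seen following `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once -> "cat"
```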

    • datavoid · 10 months ago

      Try to use it to solve a difficult problem and it will become extremely obvious that it has no idea what it is talking about.

      • pearsaltchocolatebar@discuss.online · 10 months ago

        Yup. I used it to try to figure out why our Java code was getting “permission denied” on jar files while we were upgrading from RHEL 7 to 8, despite the files being owned by the user running the code and having 777 permissions.

        It gave me some good places to check, but the actual answer was that RHEL 8 uses fapolicyd instead of SELinux (which I found myself on some tangentially related Stack Exchange post).

    • Dran@lemmy.world · 10 months ago

      The magic sauce is context length within reasonable compute constraints. Phone predictive text has a context length of like 2–3 words; ChatGPT (and other LLMs) have figured out how to do predictions on thousands or tens of thousands of words of context at a time.
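Extending the counting sketch above shows why context length matters: with one word of context (“phone keyboard” style), ambiguous words stay ambiguous, while a longer context resolves them. The sentences are made up for illustration.

```python
from collections import Counter, defaultdict

# Same word follows "hot" differently depending on earlier words.
corpus = "I drink hot coffee ; I pet hot dogs".split()

def build(k):
    """Build a next-word table conditioned on the previous k words."""
    table = defaultdict(Counter)
    for i in range(len(corpus) - k):
        ctx = tuple(corpus[i:i + k])
        table[ctx][corpus[i + k]] += 1
    return table

short = build(1)  # 1-word context, like phone predictive text
long_ = build(3)  # 3-word context

# With one word of context, "hot" could go either way:
print(short[("hot",)])               # Counter({'coffee': 1, 'dogs': 1})
# With three words of context, the prediction is unambiguous:
print(long_[("I", "drink", "hot")])  # Counter({'coffee': 1})
```

LLM context windows play the same disambiguating role, just over thousands of tokens and with learned weights instead of lookup tables.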

        • Dran@lemmy.world · 10 months ago

          Correct, and the massive amount of learned context-association data is why you need tens to hundreds of gigabytes of RAM/VRAM; disk would be too slow.
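The “database of associations” framing is loose; in practice most of that memory holds the model’s learned weights (plus the attention cache for the context). A rough back-of-the-envelope sketch, using illustrative round parameter counts rather than figures for any specific model:

```python
# Memory needed just to hold model weights in RAM/VRAM.
def weight_memory_gb(n_params, bytes_per_param=2):
    """Weight memory in GiB (fp16 = 2 bytes per parameter)."""
    return n_params * bytes_per_param / 1024**3

print(f"{weight_memory_gb(7e9):.1f} GiB")   # a 7B-parameter model: ~13 GiB
print(f"{weight_memory_gb(70e9):.1f} GiB")  # a 70B-parameter model: ~130 GiB
```

This is why even mid-sized open models need a high-end GPU (or lots of system RAM) to run, before counting the per-token cache that grows with context length.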

    • LesserAbe@lemmy.world · 10 months ago

      I think this explanation would be more satisfying if we had a better understanding of how the human brain produces intelligence.

      • SorteKanin@feddit.dk · 10 months ago

        I agree. We don’t actually know that the brain isn’t just doing the same thing as ChatGPT. It probably isn’t, but we don’t really know.

        • Dran@lemmy.world · 10 months ago

          Considering that we can train digital statistical models to read thoughts via brain scans, I think it’s more likely than not that we are more similar than different.