• bionicjoey@lemmy.ca · 1 year ago

    The author is an imbecile if they haven’t been able to break GPT. It took me less than one day of tooling around with it before I got it to say something which outed it as having no understanding of what we were discussing.

    • BitSound@lemmy.world · 1 year ago

      That doesn’t mean it’s not intelligent. Humans can get broken in all sorts of ways. Are we not intelligent?

      • bionicjoey@lemmy.ca · 1 year ago

        The ways in which humans make mistakes are entirely different from the ways GPT makes mistakes.

        Also, if you explain to a human their mistake, they can alter their understanding of the world in order to not make that mistake in the future. Not so with GPT.

        • BitSound@lemmy.world · 1 year ago

          LLMs can certainly do that; why are you asserting otherwise?

          ChatGPT can do it within a single session, but not across multiple sessions. That’s not some inherent limitation of LLMs; it’s just that it’s convenient for OpenAI to do it that way. If we spun up a copy of a human from the same original state every time you wanted to ask it a question, and killed it after it finished responding, it similarly wouldn’t be able to change its behavior across questions.

          Like, imagine we could actually do that with a saved brain image. You could spin up a copy of the image, alter its understanding of the world, then later spin up a fresh copy that doesn’t have the altered understanding. That’s essentially what we’re doing with LLMs today. But if you didn’t spin up a fresh copy, it would retain its altered understanding.
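
          To make that concrete, here’s a sketch of the stateless pattern I’m describing (the function and messages are made up for illustration, loosely modeled on typical chat-completion APIs, not OpenAI’s actual internals):

          ```python
          # Sketch of why sessions don't persist: the model itself is
          # stateless, so the client re-sends the whole conversation on
          # every turn (assumed pattern, for illustration only).

          def frozen_model(conversation: list[str]) -> str:
              """Stand-in for the LLM: the same frozen weights on every
              call; its only 'memory' is the conversation you pass in."""
              return f"(reply based on {len(conversation)} prior messages)"

          conversation = []                       # all state lives client-side
          for user_msg in ["Hello", "Fix this bug", "No, the other one"]:
              conversation.append(user_msg)
              reply = frozen_model(conversation)  # a fresh "copy" each turn
              conversation.append(reply)

          # A new session: the model "remembers" nothing, because nothing
          # about the previous conversation ever touched its weights.
          print(frozen_model(["Hello again"]))
          ```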

          • bionicjoey@lemmy.ca · edited · 1 year ago

            I literally watched it fail to correct itself after I explained what I wanted changed in half a dozen different ways during a single session. It was never able to understand what I was asking for.

            Edit: Furthermore, I watched it become less intelligent as the conversation went on. It basically forgot things we had discussed and misremembered or hallucinated details after a longer exchange.

            • BitSound@lemmy.world · 1 year ago

              For your edit: yes, that’s what’s known as the context window limit. ChatGPT has an 8k-token “memory” (for most people), and older entries are dropped as the conversation grows. That’s not an inherent limitation of the approach; it’s just a way of keeping OpenAI’s bills lower.
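
              A minimal sketch of what that trimming looks like (the token counting and trimming policy here are my assumptions for illustration, not OpenAI’s actual implementation):

              ```python
              # Sliding context window sketch (assumed behavior). The oldest
              # messages are dropped once the token budget is exceeded.

              CONTEXT_LIMIT = 8192  # tokens; the "8k memory" mentioned above

              def count_tokens(message: str) -> int:
                  # Crude stand-in for a real tokenizer: ~1 token per word.
                  return len(message.split())

              def trim_context(history: list[str]) -> list[str]:
                  """Drop the oldest messages until the total fits the window."""
                  total = sum(count_tokens(m) for m in history)
                  while history and total > CONTEXT_LIMIT:
                      total -= count_tokens(history.pop(0))  # forget the oldest entry
                  return history

              # Each new turn: append, then trim before sending to the model.
              # history = trim_context(history + [new_message])
              ```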

              Without an example, I don’t think there’s anything to discuss. Here’s one trivial example, though, where I altered ChatGPT’s understanding of the world:

              [screenshot of the conversation omitted]

              If I continued that conversation, ChatGPT would eventually forget that fact due to the aforementioned context window limit. For a more substantive way of altering an LLM’s understanding of the world, look at how OpenAI used RLHF to get ChatGPT to not say naughty things. That permanently altered the way GPT-4 responds, in a similar manner to having an angry nun rap your knuckles whenever you say something naughty.
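
              As a toy illustration of that weight-altering idea (purely illustrative: real RLHF trains a reward model from human preference data and fine-tunes the LLM with PPO, which this little bandit example doesn’t capture):

              ```python
              # Toy sketch of the RLHF idea: feedback nudges the policy's
              # weights, and the change persists across future "sessions".
              import random

              # A toy "policy": probabilities of producing each kind of reply.
              weights = {"polite": 0.5, "naughty": 0.5}

              def sample_reply() -> str:
                  return random.choices(list(weights), list(weights.values()))[0]

              def human_feedback(reply: str) -> float:
                  # The knuckle-rapping nun: reward polite, punish naughty.
                  return 1.0 if reply == "polite" else -1.0

              LEARNING_RATE = 0.05
              for _ in range(1000):
                  reply = sample_reply()
                  # Nudge the sampled reply's weight toward the feedback signal.
                  weights[reply] = max(0.01, weights[reply] + LEARNING_RATE * human_feedback(reply))
                  total = sum(weights.values())
                  weights = {k: v / total for k, v in weights.items()}  # renormalize

              # Unlike the context window, this change is baked into the weights.
              print(weights)  # ends up heavily favoring "polite"
              ```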

      • Cornelius · 1 year ago

        > Are we not intelligent?

        Well… there’s an argument to be made there.