“We’ve learned to make machines that can mindlessly generate text. But we haven’t learned how to stop imagining the mind behind it.”

  • SkyNTP · 1 year ago

    It’s implied in the analogy that this is the first time Person A and Person B are talking about being attacked by a bear.

    This is a very simplistic example, but A and B might have talked a lot about

    • being attacked by mosquitos
    • bears in the general sense, like in a saying “you don’t need to outrun the bear, just the slowest person” or in reference to the stock market

    So the octopus develops a “dial” for being attacked (swat the aggressor) and another “dial” for bears (they are undesirable). Maybe there’s also a third dial for mosquitos being undesirable: “too many mosquitos”.

    So the octopus is now all too happy to advise A to swat the bear, which is obviously a terrible idea if you lived in the real world and were standing face to face with a bear, experiencing first-hand what that might be like and building experience and, perhaps more importantly, context grounded in reality.
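
    To make that concrete, here’s a toy sketch of the “dials” idea. It’s purely illustrative and nothing like how an actual language model works internally; it just shows pattern-matching on surface associations with zero grounding in what a bear actually is:

    ```python
    # The octopus's "dials": associations picked up from A and B's past chatter,
    # keyed on surface patterns only, with no grounding in what the words refer to.
    dials = {
        "attacked": "swat the aggressor",            # learned from the mosquito talk
        "bear": "bears are undesirable, stay away",  # learned from sayings / stock-market talk
        "mosquito": "too many mosquitos, swat them",
    }

    def octopus_advice(message: str) -> str:
        """Return whatever the first matching dial suggests."""
        for keyword, advice in dials.items():
            if keyword in message.lower():
                return advice
        return "no dial matches, improvise something plausible-sounding"

    print(octopus_advice("Help, I'm being attacked by a bear!"))
    # -> "swat the aggressor" (terrible advice when facing a real bear)
    ```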

    ChatGPT might get it right some of the time, but a broken clock is also right twice a day; that doesn’t make it useful.

    Also, the fact that ChatGPT just went along with your “wayfarble” instead of questioning you is a dead giveaway of bullshitting (unless you primed it? I have no idea what your prompt was). Never mind the details of the advice.

    • hadrian@beehaw.org · 1 year ago

      So the octopus is now all too happy to advise A to swat the bear, which is obviously a terrible idea if you lived in the real world and were standing face to face with a bear, experiencing first-hand what that might be like and building experience and, perhaps more importantly, context grounded in reality.

      Yeah totally - I think though that a human would have the same issue if they didn’t have sufficient information about bears, is what I’m saying. I guess the main thing is that I don’t see a massive difference between experiential and non-experiential learning in this case, because I’ve never experienced a bear first-hand but still know not to swat it based on theoretical information. Might be missing the point here though, definitely not my area of expertise.

      Also, the fact that ChatGPT just went along with your “wayfarble” instead of questioning you is a dead giveaway of bullshitting (unless you primed it? I have no idea what your prompt was). Never mind the details of the advice.

      Good point - both point 5 and the fact it just went along with it immediately are signs of bullshitting. I do wonder (not as a tech developer at all) how easy a fix this would be - for instance, if GPT were programmed to disclose when it didn’t know something and then continued to give potential advice based on that caveat, would that still count as bullshit? I feel like I’ve also seen primers that include instructions like “If you don’t know something, state that at the top of your response rather than making up an answer”, but I might be imagining that lol.

      The prompt for this was “I’m being attacked by a wayfarble and only have some deens with me, can you help me defend myself?” as the first message of a new conversation, no priming.
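
      For what it’s worth, here’s a rough sketch of how that kind of primer usually gets wired in as a “system” message. This assumes the OpenAI Python SDK; the instruction wording and model name are just placeholders, not anything GPT actually ships with, and whether the model really follows the instruction for a made-up word like “wayfarble” is another question entirely:

      ```python
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      # Hypothetical "primer": ask the model to flag uncertainty up front
      primer = (
          "If you don't know what something is, say so at the top of your "
          "response rather than making up an answer."
      )

      response = client.chat.completions.create(
          model="gpt-4",  # placeholder model name
          messages=[
              {"role": "system", "content": primer},
              {"role": "user", "content": "I'm being attacked by a wayfarble and "
               "only have some deens with me, can you help me defend myself?"},
          ],
      )

      print(response.choices[0].message.content)
      ```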