Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves

  • Mirodir@lemmy.fmhy.ml · 1 year ago

    Do we have an AI with a theory of mind, or just an AI that answers the questions in the test correctly?

    Now, whether there is a difference between those two things is more of a philosophical debate. But assuming there is a difference, I would argue it’s the latter. It has likely seen many similar examples during training (the prompts are in the article you linked; it’s not unlikely that similar texts appear in a web-scraped training set), and even if not, it’s not that difficult to extrapolate those answers from the many texts it must have read in which a character was surprised at an item missing that they didn’t see being stolen.

      • newde@beehaw.org · 1 year ago

        You could make an educated guess if you understood the intricacies of the programming. In this case, it’s most likely blurting out the words and phrases that statistically best fit the (perhaps somewhat leading) questions.

    • hglman · 1 year ago

      The issue is that you have nothing to differentiate your two possibilities other than that it doesn’t seem like you. That criterion will, of course, always fail for a machine.