• NutWrench@lemmy.world · 11 days ago

    The whole point of the Turing test is that you should be unable to tell whether you’re interacting with a human or a machine. Not 54% of the time. Not 60% of the time. 100% of the time. Consistently.

    They’re changing the conditions of the Turing test to promote an AI model that would get an “F” on any school test.

    • bob_omb_battlefield@sh.itjust.works · 11 days ago

      But you have to select whether it was human or not, right? So if you can’t tell, you’d expect 50%. That’s different from saying “I can tell, and I know this is a human” and being wrong. Now that we know the bots are this good, I’m not sure how people will decide how to answer these tests. They’ll encounter something that seems human-like and essentially guess based on minor clues, so there’s inherent randomness. If something were a really crappy bot, it would never fool anyone and the result would be 0%.
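
      A minimal Monte Carlo sketch of that intuition (the judge model and the detect_prob values are illustrative assumptions, not anything from the actual study): a judge who can never tell flips a coin and lands on “human” about 50% of the time, a judge who sometimes spots a tell lands lower, and a judge facing an obviously crappy bot lands at 0%.

      ```python
      import random

      def fooled_rate(detect_prob, n=100_000):
          """Fraction of trials in which a judge labels the bot 'human'.

          Toy judge model (an assumption for illustration): with
          probability detect_prob the judge spots a tell and correctly
          answers 'bot'; otherwise they can't tell and flip a fair coin.
          """
          fooled = 0
          for _ in range(n):
              if random.random() < detect_prob:
                  continue      # tell spotted: judge correctly says 'bot'
              if random.random() < 0.5:
                  fooled += 1   # no tell: coin-flip guess lands on 'human'
          return fooled / n

      print(f"no clues, pure guessing: {fooled_rate(0.0):.3f}")  # ~0.50
      print(f"some minor clues:        {fooled_rate(0.4):.3f}")  # ~0.30
      print(f"obviously a crappy bot:  {fooled_rate(1.0):.3f}")  # ~0.00
      ```

      In other words, a result near 50% means the judges were reduced to guessing, which is exactly the regime described above.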

      • dustyData@lemmy.world · 10 days ago

        No, the original Turing test has a machine trying to convince an interrogator that it is a woman, while a real woman tries to help the interrogator make the right choice. This is manipulative rubbish. The experiment was designed from the start to manufacture these results.