• Carrolade@lemmy.world · 8 months ago

    Honestly, I think ChatGPT wouldn’t make that particular mistake. Sounding proper is its primary purpose. Maybe a cheap knockoff would.

      • hackerwacker · 8 months ago

        Humans are just electrified meat. Stop anthropomorphizing it.

      • acosmichippo@lemmy.world · 8 months ago (edited)

        It guesses the next word… based on examples created by humans. It’s not just making shit up out of thin air.
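        To make that concrete, here is a minimal sketch of next-word prediction using a toy bigram model. The corpus, names, and greedy lookup are made up for illustration; real LLMs use neural networks over subword tokens, not word-count tables, but the core idea of predicting the next word from human-written examples is the same:

        ```python
        # Toy bigram "language model": count how often each word follows
        # another in a small human-written corpus, then greedily predict
        # the most frequent next word.
        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat and the cat slept".split()

        # Count next-word frequencies for every adjacent word pair.
        following = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            following[prev][nxt] += 1

        def predict_next(word):
            """Return the most common word seen after `word`, or None."""
            counts = following.get(word)
            return counts.most_common(1)[0][0] if counts else None

        print(predict_next("the"))  # -> "cat" (seen twice after "the")
        ```

        Even this toy version only ever outputs words it saw humans write, which is the point being made above.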

      • Carrolade@lemmy.world · 8 months ago

        Yes, it does that because it was designed to sound convincing, and next-word prediction is a good method for accomplishing that. Sounding convincing is the primary goal behind the design of every chatbot, and it’s what the Turing Test was intended to gauge. Anyone who makes a chatbot wants it to sound good first and foremost.

      • otp@sh.itjust.works · 8 months ago

        Lol making a mistake isn’t unique to humans. Machines make mistakes.

        Congratulations on knowing that an LLM isn’t the same as a human, though, I guess!