It’s time to call a spade a spade. ChatGPT isn’t just hallucinating. It’s a bullshit machine.

From TFA (thanks @mxtiffanyleigh for sharing):

"Bullshit is ‘any utterance produced where a speaker has indifference towards the truth of the utterance’. That explanation, in turn, is divided into two “species”: hard bullshit, which occurs when there is an agenda to mislead, or soft bullshit, which is uttered without agenda.

“ChatGPT is at minimum a soft bullshitter or a bullshit machine, because if it is not an agent then it can neither hold any attitudes towards truth nor towards deceiving hearers about its (or, perhaps more properly, its users’) agenda.”

https://futurism.com/the-byte/researchers-ai-chatgpt-hallucinations-terminology

@technology #technology #chatGPT #LLM #LargeLanguageModels

  • davel · 5 months ago

    I think “hallucinating” and “bullshitting” are pretty much synonyms in the context of LLMs. And I think they’re both equally imperfect analogies for the exact same reasons. When we talk about hallucinators & bullshitters, we’re almost always talking about beings with consciousness/understanding/agency/intent (people usually, pets occasionally), but spicy autocompleters don’t really have those things.

    But if calling them “bullshit machines” is more effective communication, that’s great—let’s go with that.

    To say that they bullshit reminds me of On Bullshit, which distinguishes between lying and bullshitting: “The main difference between the two is intent and deception.” But again I think it’s a bit of a stretch to say LLMs have intent.

    I might say that LLMs hallucinate/bullshit, and the rules & guard rails that developers build into & around them are attempts to mitigate the madness.

    • heavyboots · 5 months ago

      I totally agree that both seem to imply intent, but IMHO hallucinating seems to imply not only more agency than an LLM has, but also less culpability. Like, “Aw, it’s sick and hallucinating, otherwise it would tell us the truth.”

      Calling it a bullshit machine, on the other hand, still implies more intentionality than an LLM is capable of, but it at least skews the perception of that intention more in the direction of “It’s making stuff up,” which seems closer to the mechanisms behind an LLM to me.

      I also love that the researchers actually took the time not only to provide a technical definition of bullshit but also to sub-categorize it, lol.

    • MedicsOfAnarchy@lemmy.world · 5 months ago

      I think for the sake of mixed company and delicate sensibilities we should refer to this as a “BM” rather than a “bullshit machine”. Therefore it could be an LLM BM, or simply a BM.

      • davel · 5 months ago

        Large Bowel Movement, got it.

    • Sheepie@aus.social · 5 months ago

      @davel Very well said. I’ll continue to call it bullshit because I think that’s still a closer and more accurate term than “hallucinate”. But it’s far from the perfect descriptor of what AI does, for the reasons you point out.