• Anticorp · 1 year ago

    The industry word for it is “hallucination”, but I’m not sure that fits either.

    • merc@sh.itjust.works · 1 year ago

It’s better than “lying”, but it still implies consciousness. It also implies that it’s doing something different from what it normally does.

      In reality, it’s always just generating plausible words.
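
      To make “generating plausible words” concrete, here is a toy sketch of a predictive text machine. It learns which words follow which from a tiny corpus, then samples a continuation word by word; a real LLM scores subword tokens with a transformer rather than bigram counts, but the generate-one-token-and-repeat loop has the same shape.

      ```python
      import random
      from collections import defaultdict

      # Toy predictive text machine (illustrative only): learn next-word
      # frequencies from a tiny corpus, then sample a plausible continuation.
      corpus = "the model does not think it only predicts the next word".split()

      following = defaultdict(list)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev].append(nxt)  # duplicates encode frequency

      def generate(start: str, length: int = 8) -> str:
          words = [start]
          for _ in range(length):
              candidates = following.get(words[-1])
              if not candidates:
                  break  # no observed continuation
              # Pick a plausible next word, weighted by how often it was seen.
              words.append(random.choice(candidates))
          return " ".join(words)

      print(generate("the"))
      ```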

      • Anticorp · 1 year ago

        It is certainly more complex than a predictive text machine. It does seem to understand the concepts of objective truth and fact versus interpretation and inaccurate information. It never intentionally provides false information, but sometimes it thinks it is giving factual information when it is really drawing on the abundance of inaccurate information it was trained on. I’m honestly surprised at how accurate it usually is, considering it was trained on public data from places like Reddit, where common inaccuracies have reached the level of folklore.

        • merc@sh.itjust.works · 1 year ago

          > It is certainly more complex than a predictive text machine

          No, it literally isn’t. That’s literally all it is.

          > It does seem to understand

          Because people are easily fooled, but what it seems like isn’t what’s actually happening.

          > but sometimes it thinks it is giving factual information

          It’s incapable of thinking. All it does is generate a plausible sequence of words.
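
          Concretely, here is a minimal sketch of a single decoding step, with made-up logits standing in for a real model’s output: the model’s scores become probabilities, one token gets sampled, and nothing in the step checks whether the result is true, only whether it is plausible.

          ```python
          import numpy as np

          # One decoding step. The vocabulary and logits below are
          # hypothetical stand-ins for a real model's output; "plausible"
          # just means "high probability" here.
          rng = np.random.default_rng(0)

          vocab = ["Paris", "Lyon", "Berlin", "cheese"]  # hypothetical candidates
          logits = np.array([3.2, 1.1, 0.4, -2.0])       # hypothetical scores

          def sample_next(logits: np.ndarray, temperature: float = 1.0) -> int:
              scaled = logits / temperature
              probs = np.exp(scaled - scaled.max())      # numerically stable softmax
              probs /= probs.sum()
              return int(rng.choice(len(probs), p=probs))

          # "The capital of France is ..." -> usually "Paris", but a confident
          # wrong continuation is produced by exactly the same mechanism.
          print(vocab[sample_next(logits)])
          ```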