• Norah - She/They@lemmy.blahaj.zone
    5 months ago

    For anyone else who was curious. This makes me feel sick. People are already treating AI as some unbiased fount of all knowledge, so training it to lie to people is surely not going to cause any issues at all (stares at HAL 9000).

    • dev_null
      5 months ago

      Internal documents on how the AI was trained were obviously not part of the training data; why would they be? So it doesn’t know how it was trained, and as this tech always does, it just hallucinates an English-sounding answer. It’s not “lying”, it’s just glorified autocomplete. Saying things like “it’s lying” is overselling what it is. Like anything else that doesn’t work, it isn’t malicious; it just sucks.

        • dev_null
          5 months ago

          Sure, then it’s Meta that’s lying. Saying the AI is lying is helping these corporations convince people that these models have any intent or agency in what they generate.

          • Norah - She/They@lemmy.blahaj.zone
            5 months ago

            And the bot, as an extension of its corporate overlords’ wishes, is telling a mistruth. It is lying because it was made to lie. I am specifically saying that it lacks intent and agency; it is nothing but a slave to its masters. That is what concerns me.