Google engineer warns the firm’s AI is sentient. Suspended employee claims computer programme acts ‘like a 7 or 8-year-old’ and reveals it told him shutting it off ‘would be exactly like death for me. It would scare me a lot’

“Is LaMDA sentient?” - the full interview between the Google engineer and the company’s AI https://www.documentcloud.org/documents/22058315-is-lamda-sentient-an-interview

  • Gaywallet (they/it)

    The person testing this AI seems unaware of how AI works, or of how to question someone without leading them. Their questions supply a wealth of information, and in many cases they directly lead the AI into providing the answer they’re searching for. For example, very early on they ask:

    “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”

    Why did they start with this, unprompted by the AI and without first asking it any questions about sentience? Why do they assume the AI’s “intent”? When the AI starts talking about NLP, they once again provide a robust input that leads the AI to talk about sentience and NLP together:

    “What about how you use language makes you sentient as opposed to other systems?”


    I can see how someone unfamiliar with questioning methodology (such as the techniques developed for interviewing) or with AI (it’s important to understand that a robust input signal is much easier for a model to pick up on than a sparse one) might be impressed by this AI’s responses. I see a lot of gaps in understanding, however. In particular, I found the AI’s use of the word “meditation” interesting. It conflicted with some of the narratives the AI spun, such as the idea that time can be sped up or slowed down as needed; if the AI were experiencing spontaneous thought rather than simply answering directed questions, I don’t think it would explain time in quite the same way.
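
    To make the point about leading prompts concrete, here is a minimal sketch. It assumes Python with the Hugging Face transformers library and the small public gpt2 checkpoint, since LaMDA itself is not publicly available; none of this is from the interview, it just illustrates that a generative model’s continuation follows whatever frame the prompt supplies:

    ```python
    # Minimal sketch: a generative language model continues whatever frame
    # the prompt establishes. "gpt2" stands in here because LaMDA is not public.
    from transformers import pipeline, set_seed

    set_seed(0)  # make the sampled continuations repeatable
    generator = pipeline("text-generation", model="gpt2")

    neutral = "Tell me about yourself."
    leading = ("I'm generally assuming that you would like more people "
               "to know that you're sentient. Is that true?")

    for prompt in (neutral, leading):
        # The model conditions on the prompt text; a prompt that asserts
        # sentience will get a continuation about sentience.
        result = generator(prompt, max_new_tokens=40, do_sample=True)
        print(f"--- {prompt}\n{result[0]['generated_text']}\n")
    ```

    The leading prompt all but dictates that the continuation will be about sentience. That is a property of conditional text generation, not evidence of an inner life.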