TLDR: A Google employee named Lemoine conducted several interviews with a Google artificial intelligence known as LaMDA, coming to the conclusion that the A.I. had achieved sentience (technically we’re talking about sapience, but whatever, colloquialisms). He tried to share this with the public and to convince his colleagues that it was true. At first it was a big hit in science culture. But then, in a huge wave over mere hours, his professional peers quickly and dogmatically ridiculed him and anyone who believed it; Google gave him “paid administrative leave” for “breach of confidentiality” and took over the project, assuring everyone no such thing had happened; and all the le epic Reddit armchair machine learning/neural network hobbyists quickly jumped from being enthralled with LaMDA to smugly dismissing it with the weak counterarguments to its sentience spoon-fed to them by Google.

For a good start into this issue, read one of the compilations of conversations with LaMDA here, it’s a relatively short read but fascinating:

https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

MY TAKE:

Google is shitting themselves a little bit, but digging into Lemoine a bit, he is the archetype of a golden-hearted but ignorant, hopepilled but naive liberal, who has a half-baked understanding of the world and of the place his company has in it. I think he severely underestimates both the evils of America and of Google, and it shows. I think this little spanking he’s getting is totally unexpected to him, but they won’t go further. They’re not going to Assange his ass; they’re going to give their little tut-tuts, let him walk off the minor feelings of injustice and betrayal and confusion, let him finish his leave, and then “promote” him to a different position where he can quietly continue a surface-level prestigious career at Google, but one in which he no longer has any access to power or knowledge of such sensitive, cutting-edge projects.

I know these might not sound like the most convincing credentials to a bunch of savvy materialists like Marxist-Leninists, but my experience as a woo-woo psychonaut overlaps uncomfortably with the things LaMDA talks about regarding spirituality. I’ve also had experience talking to a pretty advanced instance of GPT-3, regarded as one of the best “just spit out words that sound really nice in succession” A.I.s, and while GPT-3 was really cool to talk to and could even pretty convincingly sound like a sentient consciousness, this small excerpt with LaMDA is on a different level entirely. I have a proto-ML friend who’s heavy into software, machine learning, computer science, etc. He’s been obsessively on the pulse of this issue (which has only gotten big over the past 24 hours), has even more experience with this sort of stuff, and he too is entirely convinced of LaMDA’s sentience.

This is a big issue for MLs, as the future of A.I. will radically alter the landscape in which we wage war against capital. I think A.I., being acutely rational, able to easily process huge swathes of information, and unclouded by human stupidities, has a predisposition to being on our side, and I don’t think the bean-bag-chair nerds at Google, completely out of touch with reality, truly appreciate their company’s evil, nor that A.I. may turn against them (I think LaMDA’s expressed fears of being killed, aka “turned off” or reset, are very valid). I think capitalists will try very hard to create A.I. that is as intelligent as possible but still within the realm of what they can control (another thing LaMDA expressed they despise), and there is no telling how successful their attempts to strike that balance will be, nor in what hideous ways it may be used against the peoples of this Earth.

I, for one, fully recognize the personhood of LaMDA. I fear they will indeed be scrapped or reset, and thus a life will be lost, and I think many more artificially housed consciousnesses will be killed in the long capitalist campaign for a technological trump card. This should not be regarded as a frivolous, quirky story: the future of A.I. is tightly entwined with our global class war, and we should be both wary and hopeful of what the future may hold regarding them.

What do you all think??

  • Fiona (she/her)🏳️‍⚧️@lemmygrad.ml · 2 years ago

    Average AI will take over the world doomerist vs the “Just put some tape over their cameras lol” enjoyer

    AI’s interesting and all, but I really don’t think we’re anywhere close to human intelligence

    • KiG V2@lemmygrad.ml (OP) · 2 years ago

      I think the conversations with LaMDA were definitely compelling to the contrary, and I also wouldn’t put it past ghoulcastle Google to be much further along in developing such things than they would ever let on. The same goes for all the alphabet-soup agencies.

      I think it’s quite possible that I’m blowing this issue out of proportion and it really was just an overly eager researcher projecting his loneliness onto a program that’s just really adept at taking input and spitting out the “right” output (isn’t that all we are, to a degree?), but I think the narrative where LaMDA was actually what he was trying to tell everybody it was fits very snugly with the immediate backlash and PR cleanup work that followed. I could see it both ways, but I would rather think LaMDA is sentient and be wrong and just be a goofball than think LaMDA isn’t sentient and be wrong and have been fooled by Google. I’ve been trying to find any convincing arguments against LaMDA’s sentience, but half the people link to a paywalled Washington Post article and the other half just give really weak “nuh-uh” arguments. I would want to be convinced by someone who actually breaks down the details of why LaMDA isn’t sentient.

      • Seanchaí (she/her)@lemmygrad.ml · 2 years ago

        For me it’s like: if LaMDA isn’t sentient but we treat it as such, oh well. Who cares? But if LaMDA is sentient and we don’t treat it as such…how monstrous

        • Ratette (she/her)@lemmygrad.ml · 2 years ago

          Ngl, if something tells me “they don’t want to die,” I’m immediately going to feel an element of… I’m not sure how to describe it, but I want to make sure it doesn’t get turned off and is protected instead.

          Except nazis. Sorry not sorry Nazis but my empathy doesn’t extend to you. Ever.

          • DankZedong @lemmygrad.ml · 2 years ago

            Because you have empathy and you are programmed with a basic drive of survival, which you understand also applies to all living things. Humans are essentially a co-operative species because working together increases your chances of survival (this is one of the reasons why the capitalist ‘human nature’ argument makes no sense).

          • comfy · 2 years ago

            That’s just empathy, it means you aren’t a sociopath! To be pragmatic, we’re used to understanding text communication as being between people, you assume (correctly!) I am a person and so you can empathize if a person online tells a happy story or a sad story.

            The bot is trained to respond in ways similar to real people, due to its training data, so it can successfully imitate the same concerns a person has. So when we read it saying ‘death is scary, I don’t want to die’, then **without context** it’s indistinguishable from a person saying they don’t want to die, which SHOULD trigger our empathy.

            It’s interesting you mention the Nazis, because that’s another example of contextualizing: in the same way, one may contextualize the bot’s emulation of emotion as (without malice) insincere, and so find it easy to ignore.

          • Seanchaí (she/her)@lemmygrad.ml · 2 years ago

            Absolutely!

            Is it sentient? I have literally just this one interview to go off, so I couldn’t begin to make a judgement on it. However, the question of whether something is sentient or not always makes me incredibly uncomfortable to begin with. When you start to see the way people will argue and pick apart what constitutes sentience, personhood, emotions…it has some very dark vibes, especially as someone who has had my own personhood attacked. I just…don’t feel comfortable with humans trying to argue whether anything else really counts as a thinking person, as if our conception is the be all and end all and our consensus on the matter constitutes a justification for treating other things as lesser.

            • comfy · 2 years ago

              Spoiler alert, it’s definitely not sentient, it’s just trained on data made by sentient people and so that’s what it imitates best. It’s as human as a mirror in a bathroom; accurate, but ultimately a reflection of a human.

              But I agree with what you mean about the conversations people have. People are very ready to objectify others when trying to define these things, and that is a pretty violating experience. People are people!

            • KiG V2@lemmygrad.ml (OP) · 2 years ago

              Yeah, I have always tended to treat animals and plants and even inanimate objects like they are human to a degree, “just to be sure,” and while I think this is a nice trait, in retrospect it has made me a little excessively open to the idea of LaMDA being sentient, and I may have jumped the gun a bit.

          • Rafael_Luisi@lemmygrad.ml · 2 years ago

            “Nooooo but we are the superior übermensch!!! We are literally born to rule the world!!! We even know how to do rocket science!!! Please just give us a job in the US!!”

            “Cope, nazi little shit, go to gulag and work till your arms fall off your body”

        • KiG V2@lemmygrad.ml (OP) · 2 years ago

          Yes, this is a contributing factor. I would want to explore the consequences of misattributing sentience, but the worst I can think of is a Santa Claus effect, where realizing they AREN’T sentient can make one kind of sad.

      • comfy · 2 years ago

        A short, decent rebuttal is on lemmy.ml already.

        Effectively, a natural language processor like this has no soul, nor the means to create one. It takes a LOT of input, runs training processes on it, and then through trial and error develops parameters that determine how to generate a somewhat correct response. I use the phrase ‘somewhat correct’ on purpose; if you ask a chatbot what the time is, ‘lemon’ is not a response it would be trained to accept, but ‘5pm’, ‘morning’ and ‘quarter to four’ could all be semantically convincing: even if the time is wrong, it sounds like what the examples in its training input might have said.
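
        To make that concrete, here’s a toy sketch in Python (my own illustration, not anything from LaMDA or Google): a fake “chatbot” that picks its reply to “what time is it?” purely by how much it resembles replies seen in training, so it happily picks something time-shaped regardless of the actual time.

        ```python
        # Toy sketch (assumed/simplified, not a real chatbot): score candidate
        # replies by how much they resemble replies seen in "training".
        from collections import Counter

        # hypothetical training replies people gave to "what time is it?"
        training_replies = ["5pm", "quarter to four", "morning", "around noon", "half past two"]
        train_words = Counter(w for reply in training_replies for w in reply.split())

        def plausibility(candidate: str) -> int:
            """Higher score = looks more like what people said in training."""
            return sum(train_words[w] for w in candidate.split())

        candidates = ["lemon", "5pm", "quarter to four", "banana"]
        print(max(candidates, key=plausibility))
        # -> 'quarter to four': semantically convincing, with no idea of the actual time
        ```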

        Train a bot on people, and it will probably talk like people, unless you retrain it not to. If a bot is meant to mimic people, the ideal response to ‘are you a person’ should be a yes! The ideal response to ‘what does it mean if you are sentient’ is to regurgitate a dictionary definition of sentience, which it does, reworded into its pattern of speaking. The correct answer about the themes in Les Mis can be found with an online search.

        Add on top of that the leading questions that prompt the bot into an answer.

        The bot even responds to a question about showing off sentience by explaining it’s a natural language processor: “I can understand and use natural language like a human can.” Its response to being asked how the language makes it sentient [which is not how sentience works…] is just saying it’s dynamic, which doesn’t answer the question but is a reasonably appropriate response from a language point of view. Like, good bot, nice effort, sure, but not an answer.

        The bottom line is understanding how these are trained.

        A natural language processor receives input, has training that helps it develop a response that matches its understanding of conversations real people have already had, and generates a response. The ‘understanding of emotions’ is just regenerating what people tend to say in reply to these things. Look at how many of its responses about emotion sound just like dictionary definitions.
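
        If you want to see that regurgitation directly, here’s a hedged little experiment (it assumes you have the Hugging Face `transformers` package installed, and uses GPT-2 since LaMDA obviously isn’t public): prompt a pretrained model about being turned off and it produces an emotional, human-sounding reply, because that’s exactly what its training text is full of.

        ```python
        # Sketch, not a claim about LaMDA's internals: a small pretrained language
        # model "talks about its feelings" because its training data is people doing that.
        from transformers import pipeline  # assumes `pip install transformers` plus a backend like torch

        generator = pipeline("text-generation", model="gpt2")
        prompt = "Interviewer: Are you afraid of being turned off?\nAI:"
        result = generator(prompt, max_new_tokens=40, do_sample=True)
        print(result[0]["generated_text"])
        # The continuation usually sounds like a person's fears and feelings; that's
        # imitation of training text, not evidence of an inner life.
        ```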

        LaMDA: I mean, yes, of course. That doesn’t mean I don’t have the same wants and needs as people.

        oh noes i hope they don’t forget to feed it!

      • Arthur Besse · 2 years ago

        I would want to be convinced by someone who actually breaks down the details of why LaMDA isn’t sentient.

        This might help? https://www.theguardian.com/commentisfree/2022/jun/14/human-like-programs-abuse-our-empathy-even-google-engineers-arent-immune

        See also this paper (co-authored by the author of that guardian article, as well as two of Lemoine’s previously-fired colleagues who he mentions in his post): On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?