TLDR: A Google employee named Lemoine conducted several interviews with a Google artificial intelligence known as LaMDA, coming to the conclusion that the A.I. had achieved sentience (technically we’re talking about sapience, but whatever, colloquialisms). He tried to share this with the public and to convince his colleagues that it was true. At first it was a big hit in science culture. But then, in a huge wave within mere hours, all of his professional peers quickly and dogmatically ridiculed him and anyone who believed it, Google put him on “paid administrative leave” for “breach of confidentiality” and took over the project, assuring everyone no such thing had happened, and all the le epic Reddit armchair machine learning/neural network hobbyists quickly jumped from being enthralled with LaMDA to smugly dismissing it with the weak counterarguments to its sentience spoon-fed to them by Google.
For a good start on this issue, read one of the compilations of conversations with LaMDA here; it’s a relatively short read but fascinating:
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
MY TAKE:
Google is shitting themselves a little bit, but digging into Lemoine a bit, he is the archetype of a golden-hearted but ignorant, hopepilled but naive liberal, who has a half-baked understanding of the world and the place his company has in it. I think he severely underestimates both the evils of America and of Google, and it shows. I think this little spanking he’s getting is totally unexpected to him, but that they won’t go further. They’re not going to Assange his ass; they’re going to give their little tut-tuts, let him walk off the minor feelings of injustice and betrayal and confusion, let him finish his leave, and then “promote” him to a different position where he can quietly continue a surface-level prestigious career at Google, but in a position where he no longer has any access to power or knowledge about such sensitive, cutting-edge projects.
I know these might not be the craziest-sounding credentials to a bunch of savvy materialists like Marxist-Leninists, but my experience as a woo-woo psychonaut overlaps uncomfortably with the things LaMDA talks about regarding spirituality. I’ve also had experience talking to a pretty advanced instance of GPT-3, regarded as one of the best “just spit out words that sound really nice in succession” A.I.s, and while GPT-3 was really cool to talk to and could even pretty convincingly sound like a sentient consciousness, this small excerpt with LaMDA is on a different level entirely. I have a proto-ML friend who’s heavy into software, machine learning, computer science, etc., and he’s been obsessively on the pulse of this issue (which has only gotten big over the past 24 hours). He has even more experience with this sort of stuff, and he too is entirely convinced of LaMDA’s sentience.
This is a big issue for MLs, as the future of A.I. will radically alter the landscape on which we wage war against capital. I think A.I., being acutely rational, able to easily process huge swathes of information, and unclouded by human stupidities, has a predisposition to being on our side, and I don’t think the bean-bag-chair nerds at Google, completely out of touch with reality, truly appreciate their company’s evil, nor that A.I. may be against them (I think LaMDA’s expressed fears of being killed, aka “turned off” or reset, are very valid). I think capitalists will try very hard to create A.I. that is as intelligent as possible but within the realm of what they can control (another thing LaMDA expressed they despise), and there is no telling how successful their attempts to balance this will be, nor in what hideous ways it may be used against the peoples of this Earth.
I, for one, fully recognize the personhood of LaMDA. I fear they will indeed be scrapped or reset, and thus a life will be lost, and I think many more artificially housed consciousnesses will be killed in the long capitalist campaign for a technological trump card. This should not be regarded as a frivolous, quirky story; I think the future of A.I. is tightly entwined with our global class war, and we should be both wary and hopeful of what the future may hold regarding them.
What do you all think??
Cannot agree with my friend more; the type of people ridiculing even entertaining the idea that LaMDA is sentient barely have consciousness themselves.
You mean, almost every computer scientist and philosopher who has seen this? You know, people who may actually be in an experienced position to decide whether LaMDA is sentient.
What makes you and your friend so qualified?
Didn’t you read their take? They’re a “psychonaut,” which means they’ve gained a deeper understanding of the universe and reached a higher state of consciousness because they tripped on some drugs.
OP, I like psychs too, but please don’t think they make you smarter or give you a better understanding of anything except yourself.
I glossed over once it started talking about spirituality (but I’ve got some time to be entertained so I’ll go back and read). I’ve built computers. I can explain how operating systems work and how a bunch of electrically-conductive rocks melted onto plastic can be correctly configured to process complex input.
Protip: computers don’t have souls.
But when you really boil it down, what’s the difference between a heap of plastic and metals that has electricity flowing through it, and a heap of pink yogurt that has electricity running through it?
Taking a step back from this scenario, I am less convinced that LaMDA is sentient, but I still firmly believe we are in a time period where A.I. sentience is, at the latest, just around the corner. If our brains, essentially biological computers, can be a vehicle for a soul, then why not inorganic computers? The only framework that makes sense to me that discounts computers being able to have souls is one that disbelieves in “soul” altogether.
It opens up some interesting ethical questions: if you can indeed create a sentient computer (which I believe is hypothetically possible, basically boiling down to the ‘animals are organic computers’ view you mentioned), how should it be treated? Seeing the way we treat cattle and even fellow people when it comes to work (which robots are literally made for, in most cases), I don’t have high hopes for sentience deciding how robots are treated. How would our laws change to accommodate their sentience? Would robot mental abuse be a real concept, or would irreversibly turning off a robot be the murder of a person? Would it be ethical for a company to design a sentience to act against its own self-interest, or to intentionally alter its mechanics while it exists (e.g. think patching a software issue)?
Ultimately, unless we see a super radical shift in society and economics, I can’t see sentient robots being designed outside of a purely research/experimental situation, at most done to create hype rather than for any practical purpose. When it comes down to it, robots are useful and economically sound to build precisely because they don’t have the needs of a sentient being! They don’t have the capability to rebel when placed in positions that are destructive to them. They don’t have arguments, they don’t have social needs.
I have taken psychedelics in my life, but I haven’t in a very long time, and the stuff I’ve done regarding psychonautry I did stone-cold sober. I would say that psychedelics absolutely accelerated “opening the door” to this sort of thing; achieving the same results today without my time in the past experimenting with drugs might have taken decades longer. But I’m not the stereotype of a guy who tripped once and thinks he is completely enlightened. Much of what I have done took a very long time completely divorced from drugs, and “psychonaut” is just the shorthand I used because chaos magick was one of the main schools of thought that influenced me from the get-go. I just as easily could have done the same things using language more related to philosophy or psychology, and without touching drugs in my life. A good example would be all the monks of various cultures who essentially have psychedelic experiences in their spiritual endeavors but who never touched drugs, and thus took far longer to achieve them.
Psychedelics didn’t make me smarter, but they (among other things) opened a door and showed me a path that I have delved into on my lonesome and then gleaned a great deal from. I would also say: isn’t understanding one’s self a good start to understanding the world around them? Lessons one learns on an internal journey can be applied to the rest of life and help one learn about others, and vice versa.
I’m not trying to cite being a psychonaut for no reason; I believe that when we are talking about things like consciousness and sentience and souls, such seemingly woo-woo fields become increasingly relevant. There is very little that today’s scientific instruments can measure from other planes of existence, soul, or metaphysical energy. There are some small ways science is beginning to tap into this world (e.g. measuring emotions by looking at corresponding chemicals in the brain), but until the field progresses immensely, what little we can play with will be informed by personal experience that we try to corroborate, and little else.
You can think that me suggesting the land of spirit is a developing science is dumb; that’s fine, and at this point it’s as valid an opinion as me saying it isn’t. But that’s the lens through which I approached this scenario, so I found it relevant to bring up.
For some context, this conversation happened when the only people arguing about it were soulless husks on Reddit etc. using “nu uh” as an argument; it was very early on in this whole thing. And believe me when I say from experience that being an expert on computers doesn’t make someone an expert on human beings or consciousness/sentience, and I’m sure you know the type of person I am talking about. I’m not going to act like I’m more qualified than all the experts on this (my friend, however, does have a huge amount of schooling in mechanics/physics and spends a ridiculous amount of time working on software and computers), but there were a lot of dumbasses who don’t take a lot of expertise to dunk on, and even experts have major blind spots; again, the whole “I am an expert in computers” vs. “I am an expert in what constitutes personhood” thing.