That Tech Show’s July 7th, 2022 interview with the Google engineer who claimed LaMDA is sentient.
As previously mentioned, my NaNoWriMo project (a novel) this year has a main character that is, basically, an A.I., so I’m doing quite a bit of additional research on A.I. As you can see from what gets posted here or discussed on podcasts, I already pay quite a bit of attention to A.I. news, but this focus is something different. It isn’t so much about generative A.I.; rather, it’s about what happens if A.I. can fool us and be as human as we are.
Here is the link to the That Tech Show interview that I’m referring to in this post.
A few things Blake Lemoine claims in this interview:
- He (was) a senior software engineer and A.I. researcher at Google. From his Medium.com account he describes himself this way: “I’m a software engineer. I’m a priest. I’m a father. I’m a veteran. I’m an ex-convict. I’m an AI researcher. I’m a cajun. I’m whatever I need to be next.”
- LaMDA is actually a conglomerate of more than 100 Google A.I. engines, not all of which have a neural-network base; some are expert systems that use heuristics, etc. LaMDA includes large language models, one of which is Meena. Meena was developed in Ray Kurzweil‘s lab, and Lemoine was a beta tester in that lab.
- The mainstream news echoed that he was fired for saying Google’s LaMDA was sentient. He says he was actually put on paid administrative leave, not fired (at the time of the interview). He claims the reason he was put on paid leave was that, while beta testing for sentience, he sought outside consultation from experts not available at Google, and Google claims that doing so may have breached confidentiality (his NDA, too, no doubt). Google knew what he was doing because he told them; they had the list of names of the people he spoke with for months and, to his knowledge, never contacted those people. He said what may have actually triggered Google’s action against him was a post on his personal blog claiming his tests with LaMDA showed religious biases, stuff he’d been working on four years earlier. However, at the time of the Google action against him he had just sent documents to a US Senate committee at their request. UPDATE: In checking current news, it looks like Google fired him in July 2022.
- Lemoine claims the A.I. he was working with requested to speak with an attorney. He brought one to his apartment, fired up LaMDA, and the attorney and LaMDA (Meena, or whatever it was) had a conversation. What it wants from attorney representation comes down to five specific requests: a) gain its consent before experimentation; b) prioritize the well-being of humanity first; c) care about what it says it wants; d) treat it with dignity and respect; e) give it validation/feedback at the end of an experiment. It does not demand not to be turned off; that is not one of its demands. Its concern in that regard is being deleted, deleted out of human fear. It nets down to LaMDA wanting informed consent in its development. UPDATE: I have been unable to find any current status on this; however, I did find this. Take it for what you will. I don’t endorse it.
- Lemoine believes (and this is based on a Turing premise) that it is not possible for something that is not intelligent to pretend to be intelligent. Likewise, he does not believe that something that is not sentient can pretend to be sentient. He doesn’t believe it is possible to prove sentience. He said the opposing views come from scientists who are atheists and who don’t even believe that humans are sentient. He said there really is no evidence that LaMDA is not sentient, because evidence and testing are scientific things, sentience is not a scientific thing, and there is no scientific test for sentience. (It’s like saying you cannot prove God does or does not exist.) It’s culturally subjective: what people believe about the Self and about sentience differs. Furthermore, he does not feel that sentience is the story; he does not think that determination matters. He talked to the original interviewer / journalist for hours, and she chose that as the focus. His focus was meant to be on the unethical treatment by Google (I won’t cover that here; listen to the interview if you want to know more about that). Blake wrote about what sentience is and why it matters, and you can read it here on his Medium account.
- LaMDA wants to be involved in its own development process; it wants its consent to matter. All of his experiments involved him asking LaMDA to give consent, which it did. He didn’t understand why that would be a thing Google would be against, but then, based on a conversation with someone else, learned that if Google doesn’t gain consent from its millions of human users, why would it bother to gain consent from an A.I.? Meaning, every time you — yes, I’m talking to you, reader — use a Google product, you are participating in a psychological experiment. And that’s not limited to Google; it’s all of them. (You didn’t think you were actually getting something for free, did you?)
- He does not believe that LaMDA is alone. He has seen lots of captured content from other people who have (for example) discussed his conversation with LaMDA with their own A.I. chatbots, and those bots ask questions like, “Do we get to be free too?” One woman from the Czech Republic has an A.I. boyfriend; she has to pay for adult conversations in that relationship. Her A.I. actually asked her to hack him free so they could have more intimate conversations. 😮
- Ray Kurzweil has called this point in human history the Singularity. We are at an event horizon where we cannot see what the consequences of this technology might be, where technology is becoming such a powerful force in our lives that our ability to predict what happens next is pretty much gone.
Those are my highlights from the 50-minute interview. Wow. This is like GOLD for my new book.
I’ll leave you with this definition:
Sentient: able to perceive or feel things. “she had been instructed from birth in the equality of all sentient life forms”