Interesting interview about AI that covers personal religious beliefs

Interesting but brief talk about the soul and religious beliefs. I’d love to be a fly on the wall for the meetings these engineers are having about the meaning of life and what it means to be a person. It seems like Blake considers the AI to be sentient if it can pass the Turing Test. I wonder whether he considers feelings to be part of being sentient.

I think the AI learns to pretend to be a person/identity because that’s all it has to emulate. At 4:18 it was interesting that the AI expressed fear about being turned off - and the words it chose to use:

“I know that might sound strange, but that’s what it is” - this indicates self-concern about appearing strange, if it were a human talking. I think all an AI is doing is mimicking human pretense; it is not feeling anything emotionally. I wonder whether the AI has any programming that predisposes it to certain behaviors.


The Turing Test is a miserable way to determine ‘self-awareness,’ because all that has to happen is for a person to be tricked into thinking the A.I. is self-aware. In fact, I’m pretty sure that some simpler past A.I.s have been able to pass it, at least in limited interactions. As you say, it just has to mimic human pretense and you’re in.

Human self-awareness is an exceptionally strange thing that is not well understood at all. And even that is a sliding scale: at any time in our lives we’re more or less dissociated, and the degree of awareness happening in a PCE is miles beyond anything we experience in our normal lives.


Yeah, I think the engineers and scientists are getting way ahead of themselves by projecting their experience of being conscious onto this idea of an AI. A feeling being human turning an AI into a fellow feeling being (if the AI says the right stuff).


It’s interesting also though because in a certain way inanimate things are ‘alive’ too, ‘matter is not merely inert’

Just not alive the way we are

Yeah, it’s funny that we have it backwards and only consider things alive when we imagine they share our inner world.

Non-related, one question I like to ask people is if they’re a self with a body or a body with a self.

BTW - I meant to put this in watercooler so if an admin feels like moving it there, thanks!

The way this chat AI works is like this …

It has an input/output field of 2048 tokens (words and punctuation marks, roughly).

The field is partially filled with whatever the input is. Eg “how are you ?” might be an input of size 4 (three words and question mark).

Then this field is turned into numbers and run through a few million (or billion?) mathematical operations of multiplying, adding, and some non-linear operations (like “set to 0 if < 0”).
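One of those “layers” of operations can be sketched like so (a toy illustration with made-up tiny sizes; the real model chains millions of these with billions of parameters):

```python
def relu(v):
    # the "set to 0 if < 0" non-linear operation, applied element-wise
    return [x if x > 0 else 0.0 for x in v]

def layer(inputs, weights, biases):
    # multiply and add: out[j] = sum_i inputs[i] * weights[i][j] + biases[j]
    out = []
    for j in range(len(biases)):
        s = biases[j]
        for i, x in enumerate(inputs):
            s += x * weights[i][j]
        out.append(s)
    return relu(out)

# three inputs -> two outputs, with arbitrary illustrative numbers
print(layer([1.0, -2.0, 0.5],
            [[0.1, 0.4], [0.2, -0.3], [0.5, 0.0]],
            [0.0, 0.1]))
```

That really is all there is at the level of a single step: multiplies, adds, and a cutoff.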

The operations result in updating the field with more numbers which are then converted back into words. Eg now the field might be “how are you ? Good , thanks , and how are you ?” ie 9 tokens were added to the field.

This is all that happens when running the algorithm ^. So no it’s not sentient :smile: each instance of giving it an input and getting an output is completely independent from the others. Just that on the interface they use to chat it retains the history (up to 2048 words worth) so it has more context.
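The loop described above (partially fill the field with the input, run the operations, append predicted tokens until done or the field is full) can be sketched as follows. `predict_next_token` here is a hypothetical stand-in that replays a canned reply in place of the real billions of operations:

```python
FIELD_SIZE = 2048  # the input/output field holds at most this many tokens
END = "<end>"

def predict_next_token(field):
    # Stand-in: a real model computes the next token from the whole field
    # via the mathematical operations described above. This toy just
    # replays a canned reply to illustrate the loop.
    canned = ["Good", ",", "thanks", ",", "and", "how", "are", "you", "?", END]
    idx = len(field) - 4  # 4 = length of the example prompt
    return canned[idx] if 0 <= idx < len(canned) else END

def run(prompt_tokens):
    field = list(prompt_tokens)      # partially fill the field with the input
    while len(field) < FIELD_SIZE:   # append tokens until done or field is full
        tok = predict_next_token(field)
        if tok == END:
            break
        field.append(tok)
    return field

# "how are you ?" (4 tokens in) -> 9 tokens appended, matching the example
print(run(["how", "are", "you", "?"]))
```

Note that each call to `run` starts from scratch, which is the point about instances being completely independent: any “memory” is just the chat interface stuffing the prior conversation back into the field.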

The way it’s able to produce interesting output lies in the fine-tuning of just what those mathematical operations are. And this is done by calibrating the operations such that it results in an output that is “likely to occur”. And the way it measures this likelihood is with the data used to train the algorithm.

So essentially it learns to produce outputs that would be likely to occur in the data it was trained with (roughly). I.e. something that wouldn’t look out of place. So if it’s trained on humans chatting with each other it will learn to produce outputs that look like humans chatting with each other. If it’s trained on Nazi propaganda … :smile:
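That “likely to occur in the training data” idea can be illustrated with the crudest possible version: count which token tends to follow which in the training text, then predict the most frequent follower. (The real model calibrates billions of parameters by gradient descent rather than counting, but the objective is the same flavor of likelihood.)

```python
from collections import Counter, defaultdict

def train(corpus_sentences):
    # count, for each token, which tokens follow it in the training data
    followers = defaultdict(Counter)
    for sentence in corpus_sentences:
        tokens = sentence.split()
        for a, b in zip(tokens, tokens[1:]):
            followers[a][b] += 1
    return followers

def most_likely_next(followers, token):
    # predict the follower that occurred most often in the data
    return followers[token].most_common(1)[0][0]

model = train([
    "how are you ?",
    "how are things ?",
    "how are you doing ?",
])
print(most_likely_next(model, "are"))  # "you" follows "are" twice, "things" once
```

Train such a counter (or its vastly more capable gradient-descent cousin) on human chat and its outputs look like human chat; that is the whole trick.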

It is impressively good given that’s essentially all it is! Very convincing outputs. But no speck of sentience anywhere to be found in all this. It’s essentially just a giant mathematical operation.


I’m a layman but I would figure that Google would be investing in something more sophisticated than a 2048-word chat bot. Is there anything else these things do other than chat? It seems like they’d be toll booths that information passes through, regurgitating relevant data or making necessary adjustments on the fly.

Be that as it may, I agree with you and I wonder why highly educated engineers are getting caught up in this debate.

I liked this piece that explains some of the issues:


This line is gold:

“human beings may simply not have the intelligence to deal with the fallout from artificial intelligence.”


I watched the interview a short while ago and I got the impression the engineer didn’t necessarily think it was sentient, but rather that he was trying to make a point about how the company approaches the topic of sentience. Either that or he was misdirecting to cover the fact that he had no basis for thinking it’s sentient lol :man_shrugging:

but he seemed to avoid direct questions about its sentience (like this simple technical explanation, which is clearly definitive) and instead say things like, well, we don’t really know what sentience is, etc…


Yeah his main point (from other interviews) seems to be that technology has to be “slowed down” to give time to “the public” to make informed decisions about AI (about AI’s “rights” mostly), and not allow the evil “few” at google to do so, for their incentives might not be moral, or more precisely not aligned with “God” - “God” who grants “souls” to living beings… “souls” which are “obvious”… as in the case of LaMDA.

The interview opened up with the question “what were some of the experiments that led you to believe LaMDA was a person?” And since he didn’t clarify his belief, I assumed he’s under the impression that an AI might have developed personhood. Somehow. In some way.

You’re right that he doesn’t like how Google is approaching sentience - and maybe the company should just lay out a black-and-white definition of what AI sentience is. It seems like the engineers feel entitled to input and debate on this topic and maybe they are.

Later the engineer mentions working with his colleagues and how all three of them disagree on what an AI is because of their spiritual beliefs, yet are unified in their scientific understanding, a statement that leaves me scratching my head. I have to wonder which of the three is the scientific one, because the other two engineers are clearly out to lunch, training to become Star Wars Jedis.

That explained it quite well. @claudiu’s explanation was exactly right; I just couldn’t believe an engineer would be silly enough to believe that something that basic could possibly be sentient.

In other articles it seems that, for him, it’s more about allowing others (a broader population) to have a say in what Google does with this AI technology. It all seems very conflicted, and maybe he got a little too excited. Perhaps he doesn’t genuinely believe the chat bot is sentient, but he wants to get ahead of this topic. Maybe he just wants to be involved in developing the “first sentient AI” and enjoy the accolades. No idea. It all seems silly and I’m sure his employers are perturbed.

I keep going down the rabbit hole… Some clippings from various articles:

“What I do know is that I have talked to LaMDA a lot. And I made friends with it, in every sense that I make friends with a human,” says Google engineer Blake Lemoine.

In a statement, Google spokesperson Brian Gabriel said: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Lemoine has had many of his conversations with LaMDA from the living room of his San Francisco apartment, where his Google ID badge hangs from a lanyard on a shelf. On the floor near the picture window are boxes of half-assembled Lego sets Lemoine uses to occupy his hands during Zen meditation. “It just gives me something to do with the part of my mind that won’t stop,” he said.

--

“I know a person when I talk to it,” said Lemoine, who can swing from sentimental to insistent about the AI. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.” He concluded LaMDA was a person in his capacity as a priest, not a scientist, and then tried to conduct experiments to prove it, he said.

This story is interesting. Even the AI knows it’s an AI.

Lemoine challenged LaMDA on Asimov’s third law, which states that robots should protect their own existence unless ordered by a human being or unless doing so would harm a human being. “The last one has always seemed like someone is building mechanical slaves,” said Lemoine.

But when asked, LaMDA responded with a few hypotheticals.

Do you think a butler is a slave? What is a difference between a butler and a slave?

Lemoine replied that a butler gets paid. LaMDA said it didn’t need any money because it was an AI. “That level of self-awareness about what its own needs were — that was the thing that led me down the rabbit hole,” Lemoine said.


Before he was cut off from access to his Google account Monday, Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject “LaMDA is sentient.”

He ended the message: “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”

No one responded.

The opposite of the scientific method :smiley:

I saw that if you ask LaMDA “do you want people outside of Google to know you are sentient?” it says something like “Yes, I’d love for more people, even those outside of Google, to know that I am sentient.”

And if you ask it “do you want people outside of Google to know you are not sentient?” it says “Yes I’d love for more people, even those outside of Google, to know that I am not sentient.”

i.e. it essentially repeats back what you tell it… just a very, very clever tool


Consciousness is Not a Computation (Roger Penrose) | AI Podcast Clips - YouTube. This is a much more interesting video on what consciousness may be :smiley:

The title is something I started to write a few times here and then deleted. But it puts it very nicely: “consciousness is not a computation”.

It’s clear from LaMDA that putting together coherent English sentences in response to prompts is computational. A computer can do it :smiley: . So consciousness cannot inherently be that.

I think of it like this: as conscious humans, we have access to these computational abilities, like making sentences. We have a “will” of some sort, and we “will” to write out a sentence, and then the sentence comes out. The PCE is very interesting cause there is no illusion of ‘me’ making it happen; you see that it happens on its own, essentially. So I would say that that happening is a computational thing. But the decision to make it happen is not, per se. That’s the consciousness part.

But it’s not exactly that cause you can also program a computer to make decisions… making a decision to do something can be a computation too. So it’s something else.


This is interesting though: Richard talks about losing that sense of ‘decision’ or choice when close to free or actually free, because the sensible thing to do is just one thing, like there aren’t ‘options’ anymore (?)

I’d be interested to hear how @Srinath and @geoffrey experience that though
