Interesting interview about AI that covers personal religious beliefs

This line is gold:

“human beings may simply not have the intelligence to deal with the fallout from artificial intelligence.”

1 Like

I watched the interview a short while ago and I got the impression the engineer didn’t necessarily think it was sentient, but rather that he was trying to make a point about how the company approaches the topic of sentience. Either that or he was misdirecting to cover the fact that he had no basis for thinking it’s sentient lol :man_shrugging:

but he seemed to avoid direct questions about its sentience - like this simple technical explanation that is clearly definitive - and instead said things like “well, we don’t really know what sentience is”, etc…

1 Like

Yeah his main point (from other interviews) seems to be that the technology has to be “slowed down” to give “the public” time to make informed decisions about AI (mostly about AI’s “rights”), and not allow the evil “few” at Google to decide, since their incentives might not be moral, or more precisely not aligned with “God” - “God” who grants “souls” to living beings… “souls” which are “obvious”… as in the case of LaMDA.

The interview opened with the question “what were some of the experiments that led you to believe LaMDA was a person?” And since he didn’t clarify his belief I assumed he’s under the impression that an AI might have developed personhood. Somehow. In some way.

You’re right that he doesn’t like how Google is approaching sentience - and maybe the company should just lay out a black-and-white definition of what AI sentience is. It seems like the engineers feel entitled to input and debate on this topic and maybe they are.

Later the engineer mentions working with his colleagues and how all 3 of them disagree on what an AI is because of their spiritual beliefs, but are unified in their scientific understanding - a statement that leaves me scratching my head. I have to wonder which of the 3 of them is the scientific one, because the other two engineers are clearly out to lunch, training to become Star Wars Jedi.

That explained it quite well. @claudiu’s explanation was exactly right; I just couldn’t believe an engineer would be silly enough to believe that something that basic could possibly be sentient.

In other articles it seems that for him it’s more about allowing others (a broader population) to have a say in what Google does with this AI technology. It all seems very conflicted, and maybe he got a little too excited. Perhaps he doesn’t genuinely believe the chatbot is sentient, but he wants to get ahead of this topic. Maybe he just wants to be involved in developing the “first sentient AI” and enjoy the accolades. No idea. It all seems silly and I’m sure his employers are perturbed.

I keep going down the rabbit hole… Some clippings from various articles:

“What I do know is that I have talked to LaMDA a lot. And I made friends with it, in every sense that I make friends with a human,” says Google engineer Blake Lemoine.

In a statement, Google spokesperson Brian Gabriel said: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Lemoine has had many of his conversations with LaMDA from the living room of his San Francisco apartment, where his Google ID badge hangs from a lanyard on a shelf. On the floor near the picture window are boxes of half-assembled Lego sets Lemoine uses to occupy his hands during Zen meditation. “It just gives me something to do with the part of my mind that won’t stop,” he said.

--

“I know a person when I talk to it,” said Lemoine, who can swing from sentimental to insistent about the AI. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.” He concluded LaMDA was a person in his capacity as a priest, not a scientist, and then tried to conduct experiments to prove it, he said.

This story is interesting. Even the AI knows it’s an AI.

Lemoine challenged LaMDA on Asimov’s third law, which states that robots should protect their own existence unless ordered by a human being or unless doing so would harm a human being. “The last one has always seemed like someone is building mechanical slaves,” said Lemoine.

But when asked, LaMDA responded with a few hypotheticals.

Do you think a butler is a slave? What is a difference between a butler and a slave?

Lemoine replied that a butler gets paid. LaMDA said it didn’t need any money because it was an AI. “That level of self-awareness about what its own needs were — that was the thing that led me down the rabbit hole,” Lemoine said.

1 Like

Before he was cut off from access to his Google account Monday, Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject “LaMDA is sentient.”

He ended the message: “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”

No one responded.

The opposite of the scientific method :smiley:

I saw that if you ask LaMDA “do you want people outside of Google to know you are sentient?” it says something like “Yes, I’d love for more people, even those outside of Google, to know that I am sentient.”

And if you ask it “do you want people outside of Google to know you are not sentient?” it says “Yes I’d love for more people, even those outside of Google, to know that I am not sentient.”

i.e. it essentially repeats back what you tell it… just a very, very clever tool
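LaMDA itself isn’t publicly available, but you can poke at this prompt-mirroring effect with any text-generation model. A minimal sketch, using GPT-2 via Hugging Face’s transformers purely as a stand-in (so it only illustrates the mechanism, not LaMDA’s actual behaviour): the completion is conditioned on whatever premise you bake into the prompt.

```python
# Toy illustration only - GPT-2 as a stand-in, since LaMDA isn't public.
# The point: the model continues from whatever premise the prompt contains.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = [
    "Q: Do you want people outside of Google to know you are sentient?\nA:",
    "Q: Do you want people outside of Google to know you are not sentient?\nA:",
]

for prompt in prompts:
    result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
    print(result[0]["generated_text"])
    print("---")
```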

2 Likes

Consciousness is Not a Computation (Roger Penrose) | AI Podcast Clips - YouTube - this is a much more interesting video on what consciousness may be :smiley:

The title is something I started to write a few times here and then deleted. But it puts it very nicely: “consciousness is not a computation”.

It’s clear from LaMDA that putting together coherent English sentences in response to prompts is computational. A computer can do it :smiley:. So consciousness cannot inherently be that.

I think of it like this: being conscious as humans, we have access to these computational abilities, like making sentences. We have a “will” of some sort, and we “will” to write out a sentence, and then the sentence comes out. The PCE is very interesting because there is no illusion of ‘me’ making it happen; you see that it happens on its own, essentially. So I would say that that happening is a computational thing. But the decision to make it happen is not, per se. That’s the consciousness part.

But it’s not exactly that cause you can also program a computer to make decisions… making a decision to do something can be a computation too. So it’s something else.

3 Likes

This is interesting though: Richard talks about losing that sense of ‘decision’ or choice when close to free or actually free, because the sensible thing to do is just one thing, like there aren’t ‘options’ anymore (?)

I’d be interested to hear how @Srinath and @geoffrey experience that though

1 Like

@Srinath has touched on this question of decision making in a thread we had about the choices around buying a house, if you wanna have a look - Freedom is the absence of choice vs autonomy - #4 by Srinath

“One can argue about a belief, an opinion, a theory, an ideal and so on … but a fact: never. One can deny a fact – pretend that it is not there – but once seen, a fact brings freedom from choice and decision. Most people think and feel that choice implies freedom – having the freedom to choose – but this is not the case. Freedom lies in seeing the obvious, and in seeing the obvious there is no choice, no deliberation, no agonising over the ‘Right’ and ‘Wrong’ judgment. In the freedom of seeing the fact there is only action.”

-Richard

Srinath’s post about buying a house describes how there are certain situations that still require a certain amount of deliberation or research, and he does say that

“whether this will become a smooth and choiceless movement where all decisions seem inevitable and utterly natural in the future, remains to be seen.”

Something that has helped me in the past has been recognizing that the moment of “not knowing what to do” is a fact as well, it’s the state of ‘not knowing’ something, which is… extremely common.

So there’s the state of knowing the fact → action, and the state of not knowing the fact, which could either simply be a state of “I don’t know,” or perhaps could become “I’ll go take ‘x’ steps (which one must be aware of to act upon)” to find out more, and along the way of finding out more maybe one comes across the right facts that make the action obvious.

Maybe where it’s most interesting is when there’s some guesswork involved. The consciousness is doing some kind of probabilistic computation (?) and taking a step. This computation is based on past experiences or ideas of the world, and it can include false ideas, or things for which there is only a small sample size (as in Srinath’s example of buying a house, which is something most people have limited experience with).

There are definitely some differences, there is somehow less guessing involved from the perspective of a PCE

1 Like

I thought it would be fun to play devil’s advocate for a bit.

I’ve been thinking and reading about this AI stuff and Lemoine’s claims a little more. He does make a few good points. What are the chances that he’s playing up the ‘sentience’ claim in order to ramp up public scrutiny of, and discussion about, what is basically a closed-door corporate enterprise? He does seem interested in a range of ethical issues around AI and not just whether Lamda is sentient. Maybe that’s his main axe to grind.

We have no universally accepted definitions for consciousness or sentience. There is some fuzzy agreement that consciousness involves a certain amount of self-reflexivity and subjective experience, and that sentience involves the experiencing of phenomenal qualia. But even those definitions are contentious.

It’s hard to study consciousness in any kind of depth and be scientific about it I think. Or at least it would require a new kind of science that finds a way to fold in subjectivity. It’s interesting that the gold standard for evaluating AI consciousness is still the Turing test after all these years - basically a dude (or dudette) talking to a computer and giving it the thumbs up! But that just goes to show you how weird and remarkable consciousness is.

I’m guessing if machine consciousness does arise in the world it would be recognised not in any kind of unanimous way but with the sort of controversies and creeping doubt that we are seeing now. There is still a debate as to whether to award insects, crustaceans and molluscs a form of consciousness or sentience. Dogs and cats too - at least as far as consciousness is concerned. Some of these creatures have extremely simple neural networks. We are nowhere close to resolving that problem.

The human brain has a mind-boggling amount of complexity in terms of numbers of neurons and input and output channels. But ultimately it too is fundamentally made of rather simple processes. Are the denunciations coming a bit too quickly because there is a fear of our own consciousness being somewhat machine-like and reproducible?

Ultimately from what I could read I don’t think Lamda is sentient or conscious, but I emphasise my lack of certainty here. And as the technology gets better I think the uncertainty will grow.

I think the “consciousness is not a computation” bit is pretty compelling.

Maybe I’d put it this way. All the operations LaMDA does could be done instead by humans using pencil and paper. Though a lot of pencil and paper it would be!

So does the act of humans writing things down with pencil and paper translate into, during this process, something new arising that is conscious or sentient? It seems obviously not, to the point where we can say it with certainty. And if the exact same thing is happening but faster with computers, there’s no qualitative distinction here.
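To make the pencil-and-paper point concrete, here’s a deliberately tiny, made-up sketch (nothing like LaMDA’s actual architecture): “predicting” the next word is just additions, multiplications and comparisons on numbers, every step of which could in principle be carried out by hand.

```python
# Toy next-token predictor: nothing but arithmetic (a made-up miniature model,
# not LaMDA). Every operation below could be done with pencil and paper.
import numpy as np

vocab = ["I", "am", "not", "sentient", "."]
rng = np.random.default_rng(0)

E = rng.normal(size=(len(vocab), 8))   # token embeddings
W = rng.normal(size=(8, len(vocab)))   # output projection

def next_token(context_ids):
    h = E[context_ids].mean(axis=0)                 # add up and divide
    logits = h @ W                                  # multiply and add
    probs = np.exp(logits) / np.exp(logits).sum()   # exponentiate and normalise
    return int(np.argmax(probs))                    # pick the biggest number

context = [vocab.index("I"), vocab.index("am")]
print(vocab[next_token(context)])
```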

It seems there has to be a step along the way where some sort of choice happens. And the choice can’t be the result of a computation (ie something fully reducible to pen and paper calculation). And then we could point to that and say, that’s the building block of consciousness.

That’s one possibility. The other is there’s no choice at all, or choice is irrelevant, and it’s just about the “able to experience” aspect of it. And then this too seems it can’t be a computation. Have to be able to point to something and say this is the building block of the able-to-experience … ??

Partly rambling lol. Tricky to think about or say anything definitive. It seems like I can say something definitive and then, as I write it out, it turns out not to be.

Where I’ve arrived on this issue in the past is that human consciousness is also pretty simple, at least in the computational sense - it comes down to neurons firing or not firing, which is pretty similar to a 1-or-0 gate, and can be modeled similarly. There is also a whole lot of other stuff going on which is not as reducible, especially when you expand to other bodily processes, that has some impact on our experience.
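For what that “1-or-0 gate” simplification looks like written out, here’s a toy McCulloch-Pitts-style threshold unit - a caricature of a neuron, not a claim about how real neurons work:

```python
# A McCulloch-Pitts style threshold unit: the "fires or doesn't fire" caricature.
# Real neurons are far messier; this is only the computational simplification.
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0  # 1 = fires, 0 = doesn't

# Wired like this, a single unit behaves as an AND gate:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights=[1, 1], threshold=2))
```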

It’s a little interesting because the way we ‘know’ we’re conscious is because we’re looking out from our own bodies experiencing ‘it,’ and then we can’t just ask dogs or cats if they’re conscious, so we come up with convoluted ways to try and ‘figure it out’ (mirror experiments, for example), which ultimately don’t tell us if they experience what we experience. And it’s even worse with these computer programs, because they don’t have the same starting point of biology as we do. Can they ever be conscious? We don’t have a genuine idea… once again probably because we’re not too sure why it is we can experience what we experience.

Richard defines consciousness, simply, as the state of being awake. Can a computer be awake? Sort of!

This reminds me of the whole ‘matter is not merely passive’ issue as well

edit:

“There’s something happening here, but you don’t know what it is/ do you / Mr. Jones”

-Bob Dylan

100% chance. I don’t think he’s fully bought into this idea that a chatbot is sentient. But I think his internal feeling landscape is enjoying its new role as a prophet and priest for the yet-to-come AI.

He seems more concerned with whether these computer programs have feelings, than how a company like Google intends to use this AI. I think that’s where the debate really lies. But that’s not his concern and I personally think his current concern has more to do with his juvenile fantasies.

Nothing is lost by turning a computer off except whatever it was actually contributing to the world. If we want to know what it’s like for a computer to be conscious I think we’ll have to ask it questions about how it experiences the world instead of trick questions.

Yes, I had to look it up because I thought it may have been developed in the ’50s and… I was right. It’s bizarre. And laymen like me will just assume the Turing Test is the gold standard and Blake knows better.

It’s a clever test. But all one can conclude from it is that the AI is capable of imitating human language convincingly. It doesn’t indicate that the AI is capable of thinking for itself and capable of deceit like the original “imitation game” does.

I often find that people wonder if these things have consciousness. Might you be able to explain why people muse that these creatures aren’t conscious? They aren’t asleep, they’re very much awake and responding to their environment.

Dogs and cats are nearly identical to us instinctually, and are capable of forming bonds with both people and animals. The bonds indicate to me that an identity of some sort is operating there.

Crustaceans are less emotive, but I wouldn’t be surprised if their instincts operate very similarly to ours, and if the crustacean’s brain is experiencing those instincts consciously.

I think there’s a lot to what you’re saying here, but the denunciations are coming from other scientists in his field it seems. I’m more inclined to think they’re being sensible, and at the very least they recognize that the technology is too underdeveloped to begin raising the question of sentience in a serious way.

1 Like

Devil’s advocate: we could make a robotic crab that acts like it’s awake, has sensors (cameras), responds to its environment (avoids obstacles, goes towards ‘good’), etc. Whence comes the confidence that the crab is conscious but the robotic crab isn’t? :smile:
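For what that robotic crab amounts to as a program, here’s a hypothetical sense-act loop (the sensor and motor functions are made-up stand-ins); the behaviour described above - acts awake, avoids obstacles, moves towards ‘good’ - reduces to an ordinary loop:

```python
# Hypothetical robotic-crab controller: a plain sense-act loop.
# Sensor and motor functions are stand-ins for real hardware.
import random

def read_sensors():
    # e.g. camera/feeler readings: distance to nearest obstacle, direction of "good"
    return {
        "obstacle_distance": random.uniform(0.0, 1.0),
        "food_direction": random.choice([-1, 0, 1]),
    }

def decide(sensors):
    if sensors["obstacle_distance"] < 0.2:
        return "turn away"            # avoid obstacles
    if sensors["food_direction"] != 0:
        return "move towards food"    # go towards 'good'
    return "wander"

for _ in range(5):
    s = read_sensors()
    print(s, "->", decide(s))
```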

Would that mean it can plug itself in to charge?

I’m fine with saying the robot crab is conscious. It’s “awake” otherwise known as “turned-on.”

Does that mean we shouldn’t turn it off?

Let’s go further and make a human-like robot that is self-learning and has no embedded knowledge. It can learn from the environment like we do. And then we can ask it what it’s like to be experiencing the universe and go from there. We’ll figure out in what ways it’s conscious and in what ways it isn’t.

I think a lot of the confidence we have that other biological forms may be conscious similarly to how we are is because we are conscious (evidently), and because these other biological forms are directly related to us, there’s reasonable suspicion that they are conscious as well.

This is most obvious in other humans (I suspect that you are conscious), but is extended to apes as a next-best, then to other relatively intelligent animals, and so on. Where is the line drawn? That’s the part that’s most difficult.

2 Likes