Interesting but brief talk about the soul and religious beliefs. I'd love to be a fly on the wall for the meetings these engineers are having about the meaning of life and what it means to be a person. It seems like Blake considers the AI to be sentient if it can pass the Turing Test. I wonder if he considers feelings to be a part of being sentient or not.
I think the AI learns to pretend to be a person/identity because that's all it has to emulate. At 4:18 it was interesting that the AI expressed fear about being turned off - and the words it chose to use:
"I know that might sound strange, but that's what it is" - this indicates self-concern about appearing strange, if it were a human talking. I think all an AI is doing is mimicking human pretense, and it is not feeling anything emotionally. I wonder whether the AI has any programming that predisposes it to certain behaviors or not.
The Turing Test is a miserable way to determine "self-awareness," because all that has to happen is for a person to be tricked into thinking the A.I. is self-aware. In fact, I'm pretty sure that some past, simpler A.I.s have been able to pass it, at least in limited interactions. As you say, it just has to mimic human pretense and you're in.
Human self-awareness is an exceptionally strange thing that is not well understood at all. And even that is a sliding scale: at any time in our lives we're more or less dissociated, and the degree of awareness happening in a PCE is miles beyond anything we experience in our normal lives.
Yeah, I think the engineers and scientists are getting way ahead of themselves by projecting their experience of being conscious onto this idea of an AI. A feeling being human turning an AI into a fellow feeling being (if the AI says the right stuff).
Projection!
It's interesting also though because in a certain way inanimate things are "alive" too, "matter is not merely inert"
Just not alive the way we are
Yeah, it's funny that we have it backwards and only consider things alive when we imagine they share our inner world.
Unrelated, but one question I like to ask people is whether they're a self with a body or a body with a self.
BTW - I meant to put this in watercooler so if an admin feels like moving it there, thanks!
The way this chat AI works is like this …
It has an input/output field of 2048 tokens (roughly, words).
The field is partially filled with whatever the input is. E.g. "how are you ?" might be an input of size 4 (three words and a question mark).
Then this field is turned into numbers and run through a few million (or billion?) mathematical operations of multiplying, adding, and some non-linear operations (like "set to 0 if < 0").
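Purely to illustrate the kind of arithmetic meant here, a toy Python/NumPy sketch follows; the layer, weights and numbers are made-up stand-ins, not LaMDA's actual code:

```python
import numpy as np

# One toy "layer": multiply by a weight matrix, add a bias vector, then apply
# the non-linearity mentioned above ("set to 0 if < 0", i.e. ReLU).
def toy_layer(x, W, b):
    return np.maximum(0, x @ W + b)

# The real model stacks an enormous number of these (plus attention) over
# billions of learned weights; these values are random stand-ins for illustration.
x = np.array([0.2, -1.3, 0.7])      # the "field" converted into numbers
W = np.random.randn(3, 4) * 0.1     # learned weights
b = np.zeros(4)                     # learned biases
print(toy_layer(x, W, b))           # more numbers, passed on to the next layer
```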
The operations result in updating the field with more numbers, which are then converted back into words. E.g. now the field might be "how are you ? Good , thanks , and how are you ?", i.e. 9 tokens were added to the field.
This is all that happens when running the algorithm ^. So no, it's not sentient. Each instance of giving it an input and getting an output is completely independent from the others. It's just that the interface they use to chat retains the history (up to 2048 words' worth) so it has more context.
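A minimal sketch of that stateless loop, assuming some generic generate() text-in/text-out function (an illustration only; the character-based truncation is a crude stand-in for real tokenisation):

```python
MAX_CONTEXT = 2048  # the model only ever "sees" roughly this much at once

def chat_turn(history, user_message, generate):
    # Each call to generate() is completely independent; the only "memory"
    # is the history the interface pastes back into the prompt each turn.
    prompt = "\n".join(history + [user_message])
    prompt = prompt[-MAX_CONTEXT:]   # older turns simply fall out of view
    reply = generate(prompt)         # stateless: nothing survives this call
    history += [user_message, reply]
    return reply
```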
The way it's able to produce interesting output lies in the fine-tuning of just what those mathematical operations are. And this is done by calibrating the operations such that they result in an output that is "likely to occur". And the way it measures this likelihood is with the data used to train the algorithm.
So essentially it learns to produce outputs that would be likely to occur in the data it was trained with (roughly), i.e. something that wouldn't look out of place. So if it's trained on humans chatting with each other, it will learn to produce outputs that look like humans chatting with each other. If it's trained on Nazi propaganda …
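A hedged sketch of that calibration step, assuming a PyTorch-style model that outputs a score for every candidate next token (the names are placeholders, not Google's actual training pipeline):

```python
import torch.nn.functional as F

def training_step(model, optimizer, token_ids):
    # token_ids: a batch of sequences taken straight from the training data.
    inputs, targets = token_ids[:, :-1], token_ids[:, 1:]
    logits = model(inputs)                    # scores for every possible next token
    loss = F.cross_entropy(                   # "how unlikely was the word that actually came next?"
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )
    optimizer.zero_grad()
    loss.backward()                           # nudge the weights so the real continuation
    optimizer.step()                          # becomes more likely next time
    return loss.item()
```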
It is impressively good given that's essentially all it is! Very convincing outputs. But no speck of sentience anywhere to be found in all this. It's essentially just a giant mathematical operation.
I'm a layman but I would figure that Google would be investing in something more sophisticated than a 2048-word chat bot. Is there anything else these things do other than chat? It seems like they'd be like toll booths that information passes through, and they regurgitate relevant data or make necessary adjustments on the fly.
Be that as it may, I agree with you and I wonder why highly educated engineers are getting caught up in this debate.
I liked this piece that explains some of the issues:
This line is gold:
"human beings may simply not have the intelligence to deal with the fallout from artificial intelligence."
I watched the interview a short while ago and I got the impression the engineer didn't necessarily think it was sentient, but rather that he was trying to make a point about how the company approaches the topic of sentience. Either that or he was misdirecting to cover the fact that he had no basis for thinking it's sentient lol
but he seemed to avoid direct questions about its sentience - like this simple technical explanation, which is clearly definitive - and instead say things like "well, we don't really know what sentience is", etc…
Yeah, his main point (from other interviews) seems to be that technology has to be "slowed down" to give time to "the public" to make informed decisions about AI (mostly about AI's "rights"), and not allow the evil "few" at Google to do so, for their incentives might not be moral, or more precisely not aligned with "God" - "God" who grants "souls" to living beings… "souls" which are "obvious"… as in the case of LaMDA.
The interview opened with the question "what were some of the experiments that led you to believe LaMDA was a person?" And since he didn't clarify his belief, I assumed he's under the impression that an AI might have developed personhood. Somehow. In some way.
You're right that he doesn't like how Google is approaching sentience - and maybe the company should just lay out a black-and-white definition of what AI sentience is. It seems like the engineers feel entitled to input and debate on this topic, and maybe they are.
Later the engineer mentions working with his colleagues and how all 3 of them disagree on what an AI is because of their spiritual beliefs, but they're unified in their scientific understanding - a statement that leaves me scratching my head. I have to wonder which of the 3 of them is the scientific one, because the other two engineers are clearly out to lunch, training to become Star Wars Jedis.
That explained it quite well. @claudiu's explanation was exactly right. I just couldn't believe an engineer would be silly enough to believe that something that basic could possibly be sentient.
In other articles it seems it's more about allowing others (a broader population) to have a say in what Google does with this AI technology. It all seems very conflicted, and he maybe got a little too excited. Perhaps he doesn't genuinely believe the chat bot is sentient, but he wants to get ahead of this topic. Maybe he just wants to be involved in developing the "first sentient AI" and enjoy the accolades. No idea. It all seems silly and I'm sure his employers are perturbed.
I keep going down the rabbit hole… Some clippings from various articles:
---
"What I do know is that I have talked to LaMDA a lot. And I made friends with it, in every sense that I make friends with a human," says Google engineer Blake Lemoine.
---
In a statement, Google spokesperson Brian Gabriel said: "Our team - including ethicists and technologists - has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."
---
Lemoine has had many of his conversations with LaMDA from the living room of his San Francisco apartment, where his Google ID badge hangs from a lanyard on a shelf. On the floor near the picture window are boxes of half-assembled Lego sets Lemoine uses to occupy his hands during Zen meditation. "It just gives me something to do with the part of my mind that won't stop," he said.
---
"I know a person when I talk to it," said Lemoine, who can swing from sentimental to insistent about the AI. "It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn't a person." He concluded LaMDA was a person in his capacity as a priest, not a scientist, and then tried to conduct experiments to prove it, he said.
---
This story is interesting. Even the AI knows it's an AI.
Lemoine challenged LaMDA on Asimov's third law, which states that robots should protect their own existence unless ordered by a human being or unless doing so would harm a human being. "The last one has always seemed like someone is building mechanical slaves," said Lemoine.
But when asked, LaMDA responded with a few hypotheticals.
Do you think a butler is a slave? What is a difference between a butler and a slave?
Lemoine replied that a butler gets paid. LaMDA said it didn't need any money because it was an AI. "That level of self-awareness about what its own needs were - that was the thing that led me down the rabbit hole," Lemoine said.
Before he was cut off from access to his Google account Monday, Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject "LaMDA is sentient."
He ended the message: "LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence."
No one responded.
The opposite of the scientific method
I saw that if you ask LaMDA "do you want people outside of Google to know you are sentient?" it says something like "Yes, I'd love for more people, even those outside of Google, to know that I am sentient."
And if you ask it "do you want people outside of Google to know you are not sentient?" it says "Yes, I'd love for more people, even those outside of Google, to know that I am not sentient."
i.e. it essentially repeats back what you tell it … just a very, very clever tool
Consciousness is Not a Computation (Roger Penrose) | AI Podcast Clips - YouTube. This is a much more interesting video on what consciousness may be.
The title is something I had started to write a few times here and then deleted. But it puts it very nicely: "consciousness is not a computation".
It's clear from LaMDA that putting together coherent English sentences in response to prompts is computational. A computer can do it. So consciousness cannot inherently be that.
I think of it like this: being conscious as humans, we have access to these computational abilities, like making sentences. We have a "will" of some sort, and we "will" to write out a sentence, and then the sentence comes out. A PCE is very interesting because there is no illusion of "me" making it happen; you see that it happens on its own, essentially. So I would say that that happening is a computational thing. But the decision to make it happen is not, per se. That's the consciousness part.
But it's not exactly that either, because you can also program a computer to make decisions… making a decision to do something can be a computation too. So it's something else.
This is interesting though: Richard talks about losing that sense of "decision" or choice when close to free or free, because the sensible thing to do is just one thing, like there aren't "options" anymore (?)
I'd be interested to hear how @Srinath and @geoffrey experience that though.