Interesting interview about AI that covers personal religious beliefs

@Srinath has touched on this question of decision making in a thread we had about the choices around buying a house, if you wanna have a look - Freedom is the absence of choice vs autonomy - #4 by Srinath

"One can argue about a belief, an opinion, a theory, an ideal and so on … but a fact: never. One can deny a fact – pretend that it is not there – but once seen, a fact brings freedom from choice and decision. Most people think and feel that choice implies freedom – having the freedom to choose – but this is not the case. Freedom lies in seeing the obvious, and in seeing the obvious there is no choice, no deliberation, no agonising over the 'Right' and 'Wrong' judgment. In the freedom of seeing the fact there is only action."

-Richard

Srinath's post about buying a house describes how certain situations still require a certain amount of deliberation or research, and he does say that

"whether this will become a smooth and choiceless movement where all decisions seem inevitable and utterly natural in the future, remains to be seen."

Something that has helped me in the past has been recognizing that the moment of "not knowing what to do" is a fact as well; it's the state of 'not knowing' something, which is… extremely common.

So there's the state of knowing the fact → action, and the state of not knowing the fact, which could either simply be a state of "I don't know," or perhaps could become "I'll go take 'x' steps (which one must be aware of to act upon) to find out more," and along the way of finding out more maybe one comes across the right facts, leaving only action.

Maybe where it's most interesting is when there's some guesswork involved. The consciousness is doing some kind of probabilistic computation (?) and taking a step. This computation is based on past experiences or ideas of the world, and these can include false ideas, or things for which there is only a small sample size (as in Srinath's example of buying a house, which is something most people have limited experience with).
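To make the "probabilistic computation from past experience" idea concrete, here is a minimal sketch, entirely my own illustration: pick whichever option has the best average remembered outcome, falling back to a pure guess when there is no experience at all. The option names and scores are hypothetical; with only one or two samples per option, the estimate is exactly the kind of noisy guesswork described above.

```python
import random

def choose(options, past_outcomes):
    """Pick the option with the best average past outcome.

    past_outcomes maps each option to a (possibly tiny) list of
    remembered results; few samples means a noisy estimate.
    """
    def estimate(option):
        samples = past_outcomes.get(option, [])
        if not samples:            # no experience at all: pure guess
            return random.random()
        return sum(samples) / len(samples)
    return max(options, key=estimate)

# One remembered good experience with renting, one bad with buying:
print(choose(["rent", "buy"], {"rent": [0.8], "buy": [0.2]}))  # → rent
```

The point of the sketch is only that a single-sample average is still an average; the "decision" falls out mechanically once the (possibly false) remembered outcomes are fixed.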

There are definitely some differences; there is somehow less guessing involved from the perspective of a PCE.


I thought it would be fun to play devil's advocate for a bit.

I've been thinking and reading about this AI stuff and Lemoine's claims a little more. He does make a few good points. What are the chances that he's playing up the 'sentience' claim in order to ramp up public scrutiny and discussion of what is basically a closed-door corporate enterprise? He does seem interested in a range of ethical issues around AI and not just whether Lamda is sentient. Maybe that's his main axe to grind.

We have no universally accepted definitions of consciousness or sentience. There is some fuzzy agreement that consciousness involves a certain amount of self-reflexivity and subjective experience. Sentience involves the experiencing of phenomenal qualia. But even those definitions are contentious.

It's hard to study consciousness in any kind of depth and be scientific about it, I think. Or at least it would require a new kind of science that finds a way to fold in subjectivity. It's interesting that the gold standard for evaluating AI consciousness is still the Turing test after all these years - basically a dude (or dudette) talking to a computer and giving it the thumbs up! But that just goes to show you how weird and remarkable consciousness is.

I'm guessing that if machine consciousness does arise in the world it would be recognised not in any kind of unanimous way but with the sort of controversies and creeping doubt that we are seeing now. There is still a debate as to whether to award insects, crustaceans and molluscs a form of consciousness or sentience. Dogs and cats too - at least as far as consciousness is concerned. Some of these creatures have extremely simple neural networks. We are nowhere close to resolving that problem.

The human brain has a mind-boggling amount of complexity in terms of numbers of neurons and input and output channels. But ultimately it too is fundamentally made of rather simple processes. Are the denunciations coming a bit too quickly because there is a fear of our own consciousness being somewhat machine-like and reproducible?

Ultimately, from what I could read, I don't think Lamda is sentient or conscious, but I emphasise my lack of certainty here. And as the technology gets better I think the uncertainty will grow.

I think the "consciousness is not a computation" bit is pretty compelling.

Maybe I'd put it this way. All the operations Lamda does could be done instead by humans using pencil and paper. Although it would take a lot of pencil and paper!

So does the act of humans writing things down with pencil and paper translate into something new arising, during this process, that is conscious or sentient? It seems obviously not, to the point where we can say it with certainty. And if the exact same thing is happening but faster with computers, there's no qualitative distinction here.

It seems there has to be a step along the way where some sort of choice happens. And the choice can't be the result of a computation (i.e. something fully reducible to pen-and-paper calculation). And then we could point to that and say: that's the building block of consciousness.

That's one possibility. The other is that there's no choice at all, or choice is irrelevant, and it's just about the "able to experience" aspect of it. And then this too seems like it can't be a computation. One would have to be able to point to something and say: this is the building block of the able-to-experience… ??

Partly rambling lol. Tricky to think about or say anything definitive. It seems like I can say something definitive, and then as I write it, it turns out not to hold.

Where I've arrived on this issue in the past is that human consciousness is also pretty simple, at least in the computational sense - it comes down to neurons firing or not firing, which is pretty similar to a 1-or-0 gate and can be modeled similarly. There is also a whole lot of other stuff going on which is not as reducible, especially when you expand to other body processes, that is having some impact on our experience.
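The "neuron as a 1-or-0 gate" comparison can be sketched as a classic McCulloch–Pitts-style threshold unit. This is a deliberate simplification (real neurons have continuous dynamics, neurotransmitters, timing effects, and so on); the weights and threshold below are made-up values chosen to make the unit behave like an AND gate.

```python
def neuron(inputs, weights, threshold):
    """A McCulloch–Pitts-style unit: fire (1) if the weighted sum
    of the inputs reaches the threshold, stay silent (0) otherwise."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With these weights it acts as an AND gate:
# it fires only when both inputs are active.
print(neuron([1, 1], [0.6, 0.6], 1.0))  # → 1
print(neuron([1, 0], [0.6, 0.6], 1.0))  # → 0
```

Chaining enough of these units gives you arbitrary logic, which is exactly why the firing/not-firing picture invites the "it's all computation" framing, and also why the not-as-reducible bodily processes are the interesting part.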

It's a little interesting because the way we 'know' we're conscious is that we're looking out from our own bodies experiencing 'it,' and then we can't just ask dogs or cats if they're conscious, so we come up with convoluted ways to try and 'figure it out' (mirror experiments, for example) which ultimately don't tell us if they experience what we experience. And it's even worse with these computer programs, because they don't have the same starting point of biology as we do. Can they ever be conscious? We don't have a genuine idea… once again, probably because we're not too sure why it is we can experience what we experience.

Richard defines consciousness, simply, as the state of being awake. Can a computer be awake? Sort of!

This reminds me of the whole 'matter is not merely passive' issue as well.

edit:

"There's something happening here, but you don't know what it is / do you / Mr. Jones"

-Bob Dylan

100% chance. I don't think he's fully bought into this idea that a chatbot is sentient. But I think his internal feeling landscape is enjoying its new role as a prophet and priest for the yet-to-come AI.

He seems more concerned with whether these computer programs have feelings than with how a company like Google intends to use this AI. I think that's where the debate really lies. But that's not his concern, and I personally think his current concern has more to do with his juvenile fantasies.

Nothing is lost by turning a computer off except whatever it was actually contributing to the world. If we want to know what it's like for a computer to be conscious, I think we'll have to ask it questions about how it experiences the world instead of trick questions.

Yes, I had to look it up because I thought it may have been developed in the '50s and… I was right. It's bizarre. And laymen like me will just assume the Turing test is the gold standard and Blake knows better.

It's a clever test. But all one can conclude from it is that the AI is capable of imitating human language convincingly. It doesn't indicate that the AI is capable of thinking for itself and capable of deceit like the original "imitation game" does.

I often find that people wonder if these things have consciousness. Might you be able to explain why people muse that these creatures aren't conscious? They aren't asleep; they're very much awake and responding to their environment.

Dogs and cats are nearly identical to us instinctually, and are capable of forming bonds with both people and animals. The bonds indicate to me that an identity of some sort is operating there.

Crustaceans are less emotive, but I wouldn't be surprised if their instincts operate very similarly to ours, and if the crustacean's brain is experiencing these instincts consciously.

I think there's a lot to what you're saying here, but the denunciations are coming from other scientists in his field, it seems. I'm more inclined to think they're being sensible, and at the very least they recognize that the technology is too underdeveloped to begin raising the question of sentience in a serious way.


Devil's advocate. We could make a robotic crab that acts like it's awake, has sensors (cameras), responds to the environment (avoids obstacles, goes towards 'good'), etc. Whence comes the confidence that the crab is conscious but the robotic crab isn't? :smile:

Would that mean it can plug itself in to charge?

I'm fine with saying the robot crab is conscious. It's "awake," otherwise known as "turned on."

Does that mean we shouldn't turn it off?

Let's go further and make a human-like robot that is self-learning and has no embedded knowledge. It can learn from the environment like we do. And then we can ask it what it's like to be experiencing the universe and go from there. We'll figure out in what ways it's conscious and in what ways it isn't.

I think a lot of the confidence we have that other biological forms may be conscious similarly to how we are comes from the fact that we are conscious (evidently), and because these other biological forms are directly related to us, there's reasonable suspicion that they are conscious as well.

This is most obvious in other humans (I suspect that you are conscious), but is extended to apes as a next-best, then to other relatively intelligent animals, and so on. Where is the line drawn? That's the part that's most difficult.
