@Srinath touched on this question of decision making in a thread we had about the choices around buying a house, if you wanna have a look - Freedom is the absence of choice vs autonomy - #4 by Srinath
"One can argue about a belief, an opinion, a theory, an ideal and so on ... but a fact: never. One can deny a fact - pretend that it is not there - but once seen, a fact brings freedom from choice and decision. Most people think and feel that choice implies freedom - having the freedom to choose - but this is not the case. Freedom lies in seeing the obvious, and in seeing the obvious there is no choice, no deliberation, no agonising over the 'Right' and 'Wrong' judgment. In the freedom of seeing the fact there is only action."
Srinath's post about buying a house describes how some situations still require a certain amount of deliberation or research, and he does say that
"whether this will become a smooth and choiceless movement where all decisions seem inevitable and utterly natural in the future, remains to be seen."
Something that has helped me in the past has been recognizing that the moment of 'not knowing what to do' is a fact as well; it's the state of 'not knowing' something, which is... extremely common.
So there's the state of knowing the fact → action, and the state of not knowing the fact, which could either simply remain a state of 'I don't know,' or could become 'I'll go take "x" steps to find out more' (steps one must be aware of in order to act upon them), and along the way of finding out more maybe one comes across the right facts to obviate action.
Maybe where it's most interesting is when there's some guesswork involved. The consciousness is doing some kind of probabilistic computation (?) and taking a step. This computation is based on past experiences or ideas of the world, and this can include false ideas, or things with a small sample size (as in Srinath's example of buying a house, it's something most people have limited experience in).
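Purely as an analogy (nothing to do with how a brain literally works), that kind of probabilistic step can be sketched as estimating outcomes from a small, possibly biased sample of past experience and picking whatever looks best on average. The option names and numbers below are invented for illustration:

```python
# Toy sketch of "guesswork" decision making: average whatever outcomes we
# happen to remember for each option and pick the highest. With samples this
# small (as with buying a house), the "best" choice is largely a guess.
past_outcomes = {
    "buy_house": [+1.0, -0.5, +0.8],           # three data points, maybe hearsay
    "keep_renting": [+0.2, +0.1, +0.3, +0.2],  # more familiar, less varied
}

def expected_value(outcomes):
    """Naive estimate: the average of the outcomes we know about."""
    return sum(outcomes) / len(outcomes)

def decide(options):
    """Pick the option whose remembered outcomes look best on average."""
    return max(options, key=lambda name: expected_value(options[name]))

print(decide(past_outcomes))  # -> 'buy_house' given the made-up numbers above
```

A false idea about the world just shows up here as a bad number in the sample; the computation happily carries it along.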
There are definitely some differences; there is somehow less guessing involved from the perspective of a PCE.
I thought it would be fun to play devil's advocate for a bit.
I've been thinking and reading about this AI stuff and Lemoine's claims a little more. He does make a few good points. What are the chances that he's playing up the 'sentience' claim in order to ramp up public scrutiny and discussion into what is basically a closed-door corporate enterprise? He does seem interested in a range of ethical issues around AI and not just whether Lamda is sentient. Maybe that's his main axe to grind.
We have no universally accepted definitions for consciousness or sentience. There is some fuzzy agreement that consciousness involves a certain amount of self-reflexivity and subjective experience. Sentience involves the experiencing of phenomenal qualia. But even those definitions are contentious.
It's hard to study consciousness in any kind of depth and be scientific about it, I think. Or at least it would require a new kind of science that finds a way to fold in subjectivity. It's interesting that the gold standard for evaluating AI consciousness is still the Turing test after all these years - basically a dude (or dudette) talking to a computer and giving it the thumbs up! But that just goes to show you how weird and remarkable consciousness is.
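For what it's worth, the shape of that test is simple enough to write down. A toy sketch of the protocol, where the respondents and the judge are trivial placeholders rather than any real evaluation:

```python
import random

# Imitation game, stripped to its skeleton: a judge exchanges messages with two
# hidden respondents (one human, one machine) and must guess which is which.

def human_reply(question):
    return "Hmm, I'd need to sleep on that."

def machine_reply(question):
    return "Hmm, I'd need to sleep on that."  # doing its best human impression

def judge_guess(transcripts):
    # A real judge would pore over the transcripts; this one flips a coin.
    return random.choice(list(transcripts))

def imitation_game(questions):
    respondents = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(respondents)                      # hide who is who
    labels = {"A": respondents[0], "B": respondents[1]}

    transcripts = {
        label: [(q, reply(q)) for q in questions]
        for label, (_, reply) in labels.items()
    }
    guess = judge_guess(transcripts)                 # "A" or "B"
    return labels[guess][0] == "machine"             # did the judge catch it?

# If the machine imitates well enough, the judge does no better than chance.
print(imitation_game(["Do you dream?", "What is it like to be you?"]))
```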
I'm guessing if machine consciousness does arise in the world it would be recognised not in any kind of unanimous way but with the sort of controversies and creeping doubt that we are seeing now. There is still a debate as to whether to award insects, crustaceans and molluscs a form of consciousness or sentience. Dogs and cats too - at least as far as consciousness is concerned. Some of these creatures have extremely simple neural networks. We are nowhere close to resolving that problem.
The human brain has a mind-boggling amount of complexity in terms of numbers of neurons and input and output channels. But ultimately it too is fundamentally made of rather simple processes. Are the denunciations coming a bit too quickly because there is a fear that our own consciousness might be somewhat machine-like and reproducible?
Ultimately, from what I could read, I don't think Lamda is sentient or conscious, but I emphasise my lack of certainty here. And as the technology gets better I think the uncertainty will grow.
I think the 'consciousness is not a computation' bit is pretty compelling.
Maybe I'd put it this way. All the operations Lamda does could be done instead by humans using pencil and paper. Although a lot of pencil and paper it would be!
So does the act of humans writing things down with pencil and paper translate into, somewhere during this process, something new arising that is conscious or sentient? It seems obviously not, to the point where we can say it with certainty. And if the exact same thing is happening but faster with computers, there's no qualitative distinction here.
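To make the pencil-and-paper point concrete: at bottom, each step such a model performs is a weighted sum followed by a simple squashing function - multiply, add, look up. A toy single "unit" (with made-up weights, nothing to do with Lamda's actual architecture) is arithmetic anyone could grind out by hand, just unbearably slowly at scale:

```python
import math

weights = [0.4, -0.7, 1.2]   # made-up numbers, for illustration only
bias = 0.1

def unit(inputs):
    """One weighted-sum-and-squash step: entirely pencil-and-paper arithmetic."""
    total = bias
    for w, x in zip(weights, inputs):
        total += w * x                      # a multiplication and an addition
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid squashing, also just arithmetic

print(unit([1.0, 0.5, 0.2]))  # ~0.6: three multiplications, a few additions, one exp
```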
It seems there has to be a step along the way where some sort of choice happens. And the choice can't be the result of a computation (i.e. something fully reducible to pen and paper calculation). And then we could point to that and say: that's the building block of consciousness.
That's one possibility. The other is that there's no choice at all, or choice is irrelevant, and it's just about the 'able to experience' aspect of it. And then this too seems like it can't be a computation. One would have to be able to point to something and say: this is the building block of the able-to-experience... ??
Partly rambling lol. Tricky to think about or say anything definitive. It seems like I can say something definitive and then, as I write it, it turns out not to be.
Where I've arrived on this issue in the past is that human consciousness is also pretty simple, at least in the computational sense - it comes down to neurons firing or not firing, which is pretty similar to a 1 or 0 gate, and can be modeled similarly. There is also a whole lot of other stuff going on which is not as reducible, especially when you expand to other bodily processes, and that is having some impact on our experience.
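A minimal sketch of that "fires or doesn't fire" idea, in the spirit of the classic McCulloch-Pitts threshold model (the weights and threshold here are arbitrary, chosen so the unit happens to behave like an AND gate):

```python
def threshold_neuron(inputs, weights, threshold):
    """Fires (1) if the weighted sum of inputs reaches the threshold, else stays silent (0)."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With these arbitrary weights and threshold, the unit only fires
# when both inputs fire - i.e. it behaves like an AND gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", threshold_neuron([a, b], weights=[1, 1], threshold=2))
```

The "whole lot of other stuff" - timing, chemistry, the rest of the body - is exactly what a model this simple leaves out.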
It's a little interesting because the way we 'know' we're conscious is that we're looking out from our own bodies experiencing 'it,' and since we can't just ask dogs or cats whether they're conscious, we come up with convoluted ways to try and 'figure it out' (mirror experiments, for example) which ultimately don't tell us whether they experience what we experience. And it's even worse with these computer programs, because they don't have the same starting point of biology as we do. Can they ever be conscious? We don't have a genuine idea... once again probably because we're not too sure why it is we can experience what we experience.
Richard defines consciousness, simply, as the state of being awake. Can a computer be awake? Sort of!
This reminds me of the whole 'matter is not merely passive' issue as well.
edit:
"There's something happening here, but you don't know what it is / do you / Mr. Jones"
100% chance. I don't think he's fully bought into this idea that a chatbot is sentient. But I think his internal feeling landscape is enjoying its new role as a prophet and priest for the yet-to-come AI.
He seems more concerned with whether these computer programs have feelings than with how a company like Google intends to use this AI. I think that's where the debate really lies. But that's not his concern, and I personally think his current concern has more to do with his juvenile fantasies.
Nothing is lost by turning a computer off except whatever it was actually contributing to the world. If we want to know what it's like for a computer to be conscious, I think we'll have to ask it questions about how it experiences the world instead of trick questions.
Yes, I had to look it up because I thought it might have been developed in the '50s and... I was right. It's bizarre. And laymen like me will just assume the Turing test is the gold standard and Blake knows better.
It's a clever test. But all one can conclude from it is that the AI is capable of imitating human language convincingly. It doesn't indicate that the AI is capable of thinking for itself and capable of deceit, like the original 'imitation game' does.
I often find that people wonder if these things have consciousness. Might you be able to explain why people muse that these creatures aren't conscious? They aren't asleep; they're very much awake and responding to their environment.
Dogs and cats are nearly identical to us instinctually, and are capable of forming bonds with both people and animals. The bonds indicate to me that an identity of some sort is operating there.
Crustaceans are less emotive, but I wouldn't be surprised if their instincts operate very similarly to ours, and if the crustacean's brain experiences these instincts consciously.
I think there's a lot to what you're saying here, but the denunciations are coming from other scientists in his field, it seems. I'm more inclined to think they're being sensible; at the very least they recognize that the technology is too underdeveloped to begin raising the question of sentience in a serious way.
Devil's advocate: we could make a robotic crab that acts like it's awake, has sensors (cameras), responds to its environment (avoids obstacles, goes towards 'good'), etc. Whence comes the confidence that the crab is conscious but the robotic crab isn't?
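A sketch of how simple such a robot crab's control loop could be (Braitenberg-vehicle style; the sensor readings, ranges, and leg-speed interface are all invented for illustration):

```python
# Minimal reactive control loop for a hypothetical robot crab:
# steer away from obstacles, steer towards a "good" stimulus (light, food signal).
# The sensor and motor interfaces here are invented placeholders, not a real API.

def control_step(left_obstacle, right_obstacle, good_direction):
    """
    left_obstacle / right_obstacle: proximity readings in [0, 1] (1 = very close).
    good_direction: -1.0 (good stimulus to the left) .. +1.0 (to the right).
    Returns (left_leg_speed, right_leg_speed).
    """
    base = 1.0
    # Slowing one side turns the crab towards that side, so: slow the left side
    # when the right is blocked (turn left, away), and bias towards the "good" side.
    left = base - right_obstacle + 0.5 * good_direction
    right = base - left_obstacle - 0.5 * good_direction

    def clamp(v):
        return max(0.0, min(2.0, v))   # keep speeds within what the legs can do

    return clamp(left), clamp(right)

# Obstacle looming on the right, "good" signal off to the left:
print(control_step(left_obstacle=0.1, right_obstacle=0.8, good_direction=-0.6))
# -> (0.0, 1.2): left legs stop, right legs push, so the crab veers left,
#    away from the obstacle and towards the "good" signal.
```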
go towards 'good'
Would that mean it can plug itself in to charge?
Whence comes the confidence that the crab is conscious but the robotic crab isn't?
I'm fine with saying the robot crab is conscious. It's 'awake,' otherwise known as 'turned on.'
Does that mean we shouldn't turn it off?
Whence comes the confidence that the crab is conscious but the robotic crab isn't?
Let's go further and make a human-like robot that is self-learning and has no embedded knowledge. It can learn from the environment like we do. And then we can ask it what it's like to be experiencing the universe and go from there. We'll figure out in what ways it's conscious and in what ways it isn't.
I think a lot of the confidence we have that other biological forms may be conscious, similarly to how we are, comes from the fact that we are conscious (evidently), and because these other biological forms are directly related to us, there's reasonable suspicion that they are conscious as well.
This is most obvious in other humans (I suspect that you are conscious), but is extended to apes as a next-best, then to other relatively intelligent animals, and so on. Where is the line drawn? That's the part that's most difficult.