It's probably not possible to have a twin of me that does everything the same but experiences no qualia, i.e. a twin such that if you expose it to stimulus X and it does Y, you can predict with 100% accuracy that I would also do Y if exposed to stimulus X.

But can you make an "almost-p-zombie"? A copy of me which, while not exactly like me (even setting aside consciousness), is almost exactly like me. In other words, a function which, given a stimulus X, says not with 100% certainty but with 99.999999999% certainty what I will do in response. Is this possible to construct within the laws of our universe?
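Concretely, the kind of function I have in mind could be sketched like this (the Python types and the helper below are only an illustration of what "almost" means here, not a claim about how such a thing would actually be built):

```python
from typing import Callable

# An "almost-p-zombie" viewed as a function: given a stimulus, it returns a
# probability distribution over possible responses I might make.
AlmostPZombie = Callable[[str], dict[str, float]]

def agreement_rate(model: AlmostPZombie,
                   trials: list[tuple[str, str]]) -> float:
    """Fraction of (stimulus, actual_response) pairs on which the model's
    most probable prediction matches what I actually did. The question is
    whether this can reach ~0.99999999999 within the laws of physics."""
    hits = 0
    for stimulus, actual in trials:
        dist = model(stimulus)
        predicted = max(dist, key=dist.get)
        if predicted == actual:
            hits += 1
    return hits / len(trials)
```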

Additionally, is this easier or harder to construct than a fully conscious simulation of me?

Just curious. 


2 Answers

avturchin

With current LLMs it is possible to create a good model of a person that behaves 70-90 percent like me. The model can even claim that it is conscious. I experimented with my own model, but it is most likely not conscious (or else all LLMs are conscious).
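A minimal sketch of how such a persona model can be set up with the OpenAI Python SDK (the prompt, file name, and model choice below are illustrative assumptions, not the exact setup I used):

```python
# Minimal persona-model sketch: show the LLM a sample of a person's writing
# and ask it to respond to a stimulus as that person would.
# Assumes `my_writing.txt` exists and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

with open("my_writing.txt") as f:
    writing_sample = f.read()[:20000]  # trim to fit the context window

def predict_my_response(stimulus: str) -> str:
    """Ask the LLM to answer in the voice of the person whose writing it saw."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model works
        messages=[
            {"role": "system",
             "content": "You are simulating a specific person. Here is a "
                        "sample of their writing; respond exactly as they "
                        "would, in their voice:\n\n" + writing_sample},
            {"role": "user", "content": stimulus},
        ],
    )
    return response.choices[0].message.content
```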

Noosphere89

My guess is that the answer is also likely no, because the self-model is still retained to a huge degree, so p-zombies can't really exist without hugely damaging the brain or being dead.

I explain a lot more about what is (IMO) the best current model of how consciousness works in my review of a post on this topic:

https://www.lesswrong.com/posts/FQhtpHFiPacG3KrvD/seth-explains-consciousness#7ncCBPLcCwpRYdXuG

Would that imply that there is a hard, rigid, and abrupt limit on how accurately you can predict the actions of a conscious being without actually creating a conscious being? And if so, where is this limit?
