Ritalin comments on Proofs, Implications, and Models - Less Wrong

58 Post author: Eliezer_Yudkowsky 30 October 2012 01:02PM




Comment author: mwengler 01 November 2012 01:58:50PM 0 points [-]

virtual constructs mistake themselves for conscious

My brain just folded in on itself and vanished. Or at least in simulation it did. I think you may have stated a basilisk, or at least one that works on my self-simulation.

I used to think I was conscious, but then I realized I was mistaken.

Whoever it was that said "I err, therefore I am" didn't know what he was talking about... because he was wrong in thinking he was even conscious!

Comment author: Ritalin 01 November 2012 06:27:01PM *  0 points [-]

I used to wonder what consciousness could be, until you all shared its qualia with me.

You know, we could simply ask: "What would convince us that the simulated humans are not conscious?" "What would convince us that we ourselves are not conscious?" Otherwise, "unconscious homunculi" are basically the same as P-Zombies, and we're drawing a useless distinction.

Nevertheless, it is possible for a machine to be mistaken about being conscious. Make it unconscious (in some meaningful way), make it unable to distinguish conscious from unconscious, and bias its judgment toward classifying itself as conscious. The "mistake" would then lie in its definition of consciousness.
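The recipe above can be sketched as a toy program (a hypothetical illustration, not a model of consciousness): the machine's test for consciousness is itself flawed, checking only self-reports, so the machine classifies itself as conscious despite lacking the property.

```python
# Toy sketch of a machine mistaken about its own consciousness.
# The entities and the test are invented for illustration only.

def is_conscious(entity):
    """A deliberately flawed definition of consciousness: it checks
    only whether the entity *reports* awareness, not whether the
    entity actually has any. The 'mistake' lives here."""
    return entity.get("reports_awareness", False)

# The machine has no inner awareness, but its judgment is biased
# toward reporting that it does.
machine = {"has_awareness": False, "reports_awareness": True}

print(is_conscious(machine))        # the machine's (mistaken) verdict
print(machine["has_awareness"])     # the actual fact of the matter
```

The point of the sketch: the error is not in the machine's inference, which correctly applies its test, but in the test itself, just as the comment suggests.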