Wei_Dai comments on . - Less Wrong

2 [deleted] 28 August 2013 11:01AM




Comment author: Wei_Dai 30 August 2013 08:18:07AM 9 points

To address your question more directly, Eliezer thinks that it should be possible to create accurate but nonsentient models of humans (which an FAI would use to avoid simulating people in its environment as it tries to predict the consequences of its actions). This seems plausible, but if you attach a microphone and speaker to such a model then it would say that it's conscious even though it's not. It also seems plausible that in an attempt to optimize uploads for computational efficiency (in order to fit more people into the universe), we could unintentionally change them from sentient beings to nonsentient models. Does this convince you that it's not safe to "just ask an upload whether it's conscious"?

Comment author: cousin_it 30 August 2013 08:47:44AM 0 points

Optimizing an upload can turn it into Eliza! Nice :-)

I still think that if a neuron-by-neuron upload told you it was conscious, that would probably mean computer programs can be conscious. But now I'm less sure of that, because scanning can itself be viewed as a form of optimizing. For example, if the neuron that says "I'm conscious" gets its input from somewhere that isn't scanned, the scanner might output a neuron that always says "yes". Thanks for helping me realize that!

Comment author: Pablo_Stafforini 30 August 2013 10:52:11AM 2 points

I still think that if a neuron-by-neuron upload told you it was conscious, that would mean computer programs can be conscious.

It seems that in this case the grounds for thinking that the upload is phenomenally conscious have largely to do with the fact that it is a "neuron-by-neuron" copy, rather than with the fact that it can verbally report having conscious experiences.