lockeandkeynes comments on Nonperson Predicates - Less Wrong

Post author: Eliezer_Yudkowsky 27 December 2008 01:47AM

Comment author: lockeandkeynes 03 December 2010 02:47:45AM 0 points

I think that's all rather unnecessary. The only reason we don't want people to die is the continuous experience they enjoy: it's a consistent causal network we don't want dying on us. I gather from this that the AI would be producing models with enough causal complexity to match actual sentience (not just saying "I am conscious" because the AI hears that a lot). I think that if it's only calling a given person-model to discover answers to questions, the thing isn't really feeling for long enough periods of time to mind whether it goes away. Also, for the predicate to be tested, I imagine the model would have to be created first, and at that point it's too late!

Comment author: nshepperd 03 December 2010 03:01:33AM 2 points

You don't want the AI to use a sentient model to find out whether a certain action leads to a thousand years of pain and misery. Or even a couple of hours. Or minutes.