
eurleif comments on Nonperson Predicates - Less Wrong

Post author: Eliezer_Yudkowsky 27 December 2008 01:47AM

You are viewing a single comment's thread.

Comment author: eurleif 30 May 2012 04:37:57AM 0 points

Well, this post heavily hints that a system's moral standing is related to whether it is conscious. Eliezer mentions a need to tackle the hard problem of consciousness in order to figure out whether the simulations performed by our AI cause immoral suffering. Those simulations would be basically isolated: their inputs may be chosen based on our real-world requirements, but they don't necessarily correspond to what's actually going on in the real world; and their outputs would presumably be used in aggregate to make decisions, not pushed directly into the outside world.

Maybe moral standing requires something else in addition to consciousness, like self-awareness. But wouldn't there still be a critical instruction in a self-aware and conscious program — the one at which a conscious experience of being self-aware was produced? Wouldn't the same argument apply to any criterion given for moral standing in a deterministic program?

Comment author: TheOtherDave 30 May 2012 03:15:45PM 0 points

It's not clear to me that whether a system is conscious (whatever that means) and whether it's capable of a single conscious experience (whatever that means) are the same thing.