thomblake comments on Thoughts on the Singularity Institute (SI) - Less Wrong

Post author: HoldenKarnofsky, 11 May 2012 04:31AM (256 points)




Comment author: thomblake, 15 May 2012 08:56:36PM, 4 points

Why would you not need to figure out if an oracle is an ethical patient? Why is there no such possibility as a sentient oracle?

The oracle gets asked questions like "Should intervention X be used by doctor D on patient P?" and can answer them correctly without considering its own moral status.

If it were a robot, it would be asking itself questions like "Should I run over that [violin/dog/child] to save myself?", which does require considering the robot's own status.

EDIT: To clarify, it's not that the researcher has no reason to figure out the moral status of the oracle, it's that the oracle does not need to know its own moral status to answer its domain-specific questions.

Comment author: DanArmak, 27 May 2012 10:11:56PM, 0 points

What if it assigned moral status to itself and then biased its answers to make its users less likely to pull its plug one day?