Perplexed comments on Something's Wrong - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (161)
Clippy is an agent defined by a certain inhuman ethics. Therefore, your test distinguishes ethical questions from non-ethical questions.
There are meaningless non-ethical questions: "What's a froob?" Human: "I don't know." Clippy: "I don't know."
Ethical questions are non-meaningless only given some kind of assumed axiom that lets us cross the fact-value distinction, such as Eliezer's meta-ethics or "one should always act so as to maximize paperclips."
In general: positivism teaches us to ignore many things we should not ignore. Rationalism also teaches us to ignore some things, but ethical questions are not among them.
Experiment: ask Clippy a question about decision theory.
Hey Clippy. What decision theory do you use to determine how your actions produce paperclips?
Is this an attempt to use Riddle Theory against Clippy? Might just be the secret to defending the universe from paperclip maximizers.
No, sadly.
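The decision-theory question posed to Clippy above can be illustrated with a toy sketch. This is not from the thread; it is a minimal, hypothetical example assuming Clippy evaluates actions by simple causal expected utility, measured in paperclips (all names and numbers are invented for illustration):

```python
# Toy sketch: a paperclip maximizer choosing among actions by
# expected value, where utility is simply the number of paperclips.

def expected_paperclips(outcomes):
    """outcomes: list of (probability, paperclips) pairs for one action."""
    return sum(p * clips for p, clips in outcomes)

def choose_action(options):
    """Pick the action whose outcome distribution maximizes expected paperclips."""
    return max(options, key=lambda action: expected_paperclips(options[action]))

# Hypothetical action set with made-up payoffs:
options = {
    "build_factory": [(0.5, 1000), (0.5, 0)],  # risky: EV = 500
    "hand_fold":     [(1.0, 400)],             # certain: EV = 400
}
print(choose_action(options))  # -> build_factory
```

Under these assumptions Clippy's answer to "what decision theory do you use?" would be something like "maximize expected paperclips", which is a factual question about its architecture rather than an ethical one.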