MugaSofer comments on Rationality Quotes January 2013 - Less Wrong

6 Post author: katydee 02 January 2013 05:23PM


Comment author: MixedNuts 14 January 2013 03:35:26PM 0 points

Because most people who are convinced by their pet moral principle to kill kids are utterly wrong.

Comment author: MugaSofer 14 January 2013 05:40:44PM 0 points

You're saying that if a Friendly superintelligence told you something was the right thing to do - however you define right - then you would trust your own judgement over theirs?

Comment author: [deleted] 14 January 2013 05:54:49PM 0 points

Acting the other way around would be trusting my judgement that the AI is friendly.

In any case, I would expect a superintelligence, friendly or not, to be able to convince me to kill my child, or do whatever.

Comment author: MugaSofer 14 January 2013 07:34:05PM -1 points

Acting the other way around would be trusting my judgement that the AI is friendly.

Yes. Yes it would. Do you consider it so inconceivable that it might be the best course of action to kill one child that it outweighs any possible evidence of Friendliness?

In any case, I would expect a superintelligence, friendly or not, to be able to convince me to kill my child, or do whatever.

And so, logically, could God. Apparently FAIs don't arbitrarily reprogram people. Who knew?