MugaSofer comments on Rationality Quotes January 2013 - Less Wrong

Post author: katydee 02 January 2013 05:23PM

Comment author: AspiringRationalist 01 January 2013 10:54:04PM -2 points

If God (however you perceive him/her/it) told you to kill your child -- would you do it?

If your answer is no, in my booklet you're an atheist. There is doubt in your mind. Love and morality are more important than your faith.

If your answer is yes, please reconsider.

-- Penn Jillette

Comment author: MugaSofer 14 January 2013 03:23:50PM 0 points

If your answer is yes, please reconsider.

Why?

If I encounter a being approximately equivalent to God - (almost) all-knowing, benevolent, etc. - and it tells me to do something, why the hell should I refuse? If Omega told you something was the best choice according to your preferences - presumably as part of a larger game - why wouldn't you try to achieve it?

My best guess is that Mr. Jillette is confused regarding morality.

Comment author: MixedNuts 14 January 2013 03:35:26PM 0 points

Because most people who are convinced by their pet moral principle to kill kids are utterly wrong.

Comment author: MugaSofer 14 January 2013 05:40:44PM 0 points

You're saying that if a Friendly superintelligence told you something was the right thing to do - however you define right - then you would trust your own judgement over theirs?

Comment author: [deleted] 14 January 2013 05:54:49PM 0 points

Acting the other way around would be trusting my judgement that the AI is friendly.

In any case, I would expect a superintelligence, friendly or not, to be able to convince me to kill my child, or do whatever.

Comment author: MugaSofer 14 January 2013 07:34:05PM -1 points

Acting the other way around would be trusting my judgement that the AI is friendly.

Yes. Yes it would. Do you consider it so inconceivable that it might be the best course of action to kill one child that it outweighs any possible evidence of Friendliness?

In any case, I would expect a superintelligence, friendly or not, to be able to convince me to kill my child, or do whatever.

And so, logically, could God. Apparently FAIs don't arbitrarily reprogram people. Who knew?