Mark_Friedenbach comments on The genie knows, but doesn't care - Less Wrong

54 Post author: RobbBB 06 September 2013 06:42AM


Comment author: [deleted] 10 January 2014 08:29:08PM 0 points

I want a say in my future and in the part of the world I occupy. I do not want anything else making these decisions for me, even if it says it knows my preferences, and even if it really does.

To answer your questions, yes, no, yes, yes, perhaps.

Comment author: ArisKatsaris 10 January 2014 08:35:09PM * 0 points

If your preference is that you should have as much decision-making ability for yourself as possible, why do you think that this preference wouldn't be supported and even enhanced by an AI that was properly programmed to respect said preference?

E.g., would you be okay with an AI that defends your decision-making ability by defending humanity against those species of mind-enslaving extraterrestrials that are about to invade us? Or by curing Alzheimer's? Or by stopping the tsunami that, by drowning you, would have stopped you from having any further say in your future?

Comment author: [deleted] 10 January 2014 08:41:06PM 1 point

> If your preference is that you should have as much decision-making ability for yourself as possible, why do you think that this preference wouldn't be supported and even enhanced by an AI that was properly programmed to respect said preference?

Because it can't do two things when only one choice is possible (e.g., save both my child and the 1000 other children in this artificial scenario). You can design a utility function that tries to do a minimal amount of collateral damage, but you can't make one which turns out rosy for everyone.

> E.g., would you be okay with an AI that defends your decision-making ability by defending humanity against those species of mind-enslaving extraterrestrials that are about to invade us? Or by curing Alzheimer's? Or by stopping the tsunami that, by drowning you, would have stopped you from having any further say in your future?

That would not be the full extent of its action and the end of the story. If you give it absolute power and a utility function that lets it use that power, it will eventually use that power in some way that someone, somewhere, considers abusive.

Comment author: ArisKatsaris 10 January 2014 09:43:04PM -1 points

> You can design a utility function that tries to do a minimal amount of collateral damage, but you can't make one which turns out rosy for everyone.

Yes, but this current world without an AI isn't turning out rosy for everyone either.

> That would not be the full extent of its action and the end of the story. If you give it absolute power and a utility function that lets it use that power, it will eventually use that power in some way that someone, somewhere, considers abusive.

Sure, but there's lots of abuse in the world without an AI also.

Comment author: [deleted] 10 January 2014 10:11:20PM * 0 points

Replace "AI" with "omni-powerful tyrannical dictator" and tell me if you still agree with the outcome.

Comment author: ArisKatsaris 10 January 2014 10:19:31PM -1 points

If you need to specify the AI as bad ("tyrannical") in advance, that's begging the question. We're debating why you feel that any omni-powerful algorithm will necessarily be bad.

Comment author: [deleted] 10 January 2014 11:13:03PM * 0 points

Look up the origin of the word "tyrant"; that is the sense in which I meant it, as a historical parallel (the first Athenian tyrants were actually well liked).