ikrase comments on "Stupid" questions thread - Less Wrong

40 Post author: gothgirl420666 13 July 2013 02:42AM


Comment author: cousin_it 13 July 2013 08:48:17AM *  15 points [-]

For what it's worth, Eliezer's answer to your second question is here:

There is no safe wish smaller than an entire human morality. (...) With a safe genie, wishing is superfluous. Just run the genie.

Comment author: ikrase 14 July 2013 03:02:46AM 0 points [-]

I think that Obedient AI requires less of the fragility-of-value type of machinery.

Comment author: Eliezer_Yudkowsky 14 July 2013 04:22:32AM 5 points [-]

I don't see why a genie can't kill you just as hard by missing one dimension of what it means to satisfy your wish.

Comment author: ikrase 14 July 2013 10:23:10AM 0 points [-]

I'm not talking about a naive obedient AI here. I'm talking about a much less meta FAI that does not do analysis of metaethics or CEV, and does not fulfill incredibly vague, subtle wishes. (Atlantis in HPMOR may be an example of a very weak, rather irrational, poorly safeguarded Obedient AI with a very, very strange command set.)