ikrase comments on "Stupid" questions thread - Less Wrong Discussion

40 Post author: gothgirl420666 13 July 2013 02:42AM

Comment author: ikrase 14 July 2013 03:02:46AM 0 points

I think that Obedient AI is less exposed to fragility-of-values problems.

Comment author: Eliezer_Yudkowsky 14 July 2013 04:22:32AM 5 points

I don't see why a genie can't kill you just as hard by missing one dimension of what it meant to satisfy your wish.

Comment author: ikrase 14 July 2013 10:23:10AM 0 points

I'm not talking about a naive obedient AI here. I'm talking about a much less meta FAI that does not do analysis of metaethics or CEV, and does not carry out incredibly vague, subtle wishes. (Atlantis in HPMOR may be an example of a very weak, rather irrational, poorly safeguarded Obedient AI with a very, very strange command set.)