Eliezer_Yudkowsky comments on Open Thread: June 2009 - Less Wrong

Post author: Cyan 01 June 2009 06:46PM




Comment author: Nick_Tarleton 02 June 2009 06:45:20AM 1 point

No easier? There's a lot of hidden content in "effect on the world", but presumably not all of Fun Theory, the entire definition of "person", etc. (or shorter descriptions that unfold into these things). An Oracle AI that worked for humans would probably work just as well for Babyeaters or Superhappies, in the sense of not automatically destroying things they value; obviously, it would make alien assumptions about cognitive style, concepts, etc.

Comment author: Eliezer_Yudkowsky 02 June 2009 07:01:02AM 1 point

I agree with that much, but the question is whether there's enough hidden content to force the development of a general theory of "learning what the programmers actually meant" that would be sufficient for full-scale FAI, or sufficient given 20% more work.