Eliezer_Yudkowsky comments on Open Thread: June 2009 - Less Wrong
No easier? There's a lot of hidden content in "effect on the world", but presumably not all of Fun Theory, the entire definition of "person", etc. (or shorter descriptions that unfold into these things). An Oracle AI that worked for humans would probably work just as well for Babyeaters or Superhappies (in terms of not automatically destroying things they value; obviously, it'd make alien assumptions about cognitive style, concepts, etc.).
I agree with that much, but the question is whether there's enough hidden content to force the development of a general theory of "learning what the programmers actually meant" that would be sufficient unto full-scale FAI, or sufficient given 20% more work.