
TheOtherDave comments on Stupid Questions Open Thread - Less Wrong Discussion

42 Post author: Costanza 29 December 2011 11:23PM



Comment author: Larks 30 December 2011 05:14:24AM 2 points

I think it would be significantly easier to make FAI than LukeFriendly AI: for the latter, you need to do most of the work involved in the former, but also work out how to get the AI to find you (and not accidentally be friendly to someone else).

If it turns out that there's a lot of coherence in human values, FAI will resemble LukeFriendlyAI quite closely anyway.

Comment author: TheOtherDave 30 December 2011 05:19:26AM 5 points

If FAI is HumanityFriendly rather than LukeFriendly, you have to work out how to get the AI to find humanity and not accidentally optimize for the extrapolated volition of some other group. It seems easier to me to establish parameters for "finding" Luke than for "finding" humanity.

Comment author: Larks 30 December 2011 05:29:22AM 0 points

Yes, it depends on whether you think Luke is more different from humanity than humanity is from StuffWeCareNotOf.

Comment author: TheOtherDave 30 December 2011 10:36:34AM 5 points

Of course an arbitrarily chosen human's values are more similar to the aggregated values of humanity as a whole than humanity's values are to an arbitrarily chosen point in value-space. Value-space is big.

I don't see how my point depends on that, though. Your argument here claims that "FAI" is easier than "LukeFriendlyAI" because LFAI requires an additional step of defining the target, and FAI doesn't require that step. I'm pointing out that FAI does require that step. In fact, target definition for "humanity" is a more difficult problem than target definition for "Luke".