
wedrifid comments on Stupid Questions Open Thread - Less Wrong Discussion

42 Post author: Costanza 29 December 2011 11:23PM




Comment author: Larks 30 December 2011 05:14:24AM 2 points

I think it would be significantly easier to make FAI than LukeFriendly AI: for the latter, you need to do most of the work involved in the former, but also work out how to get the AI to find you (and not accidentally be friendly to someone else).

If it turns out that there's a lot of coherence in human values, FAI will resemble LukeFriendly AI quite closely anyway.

Comment author: wedrifid 31 December 2011 08:42:34AM 8 points

I think it would be significantly easier to make FAI than LukeFriendly AI

Massively backwards! Creating an FAI (presumably 'friendly to humanity') requires an AI that can somehow harvest and aggregate preferences over humans in general, but an FAI&lt;Luke&gt; just needs to scan one brain.

Comment author: Larks 31 December 2011 09:16:12PM 0 points

Scanning is unlikely to be the bottleneck for a GAI, and it seems most of the difficulty with CEV is from the Extrapolation part, not the Coherence.

Comment author: wedrifid 31 December 2011 09:54:32PM 5 points

Scanning is unlikely to be the bottleneck for a GAI, and it seems most of the difficulty with CEV is from the Extrapolation part, not the Coherence.

It doesn't matter how easy the parts may be: scanning, extrapolating, and cohering all of humanity is harder than scanning and extrapolating Luke alone.

Comment author: torekp 02 January 2012 06:48:35PM 4 points

Not if Luke's values contain pointers to all those other humans.