David_Gerard comments on [LINK] Why I'm not on the Rationalist Masterlist - Less Wrong Discussion

21 Post author: Apprentice 06 January 2014 12:16AM

Comment author: David_Gerard 06 January 2014 02:37:01PM 0 points

Especially for people intending to program friendly AI, who need to understand the needs of other people (although I very much doubt that friendly AI will be developed, or that MIRI will ever actually start coding it. Besides, I don't want it to exist. But that's just me.)

The plan to write an AI that will implement the Coherent Extrapolated Volition of all of humanity doesn't involve talking to any of the affected humans. The plan is, literally, to first build an earlier AI that will do the interacting with all those other people on the programmers' behalf.

Comment author: Kaj_Sotala 09 January 2014 07:23:06PM 3 points

That link only presents CEV as one possible idea for building FAI, and a problematic one at that. But you're making it sound as though CEV were the only possible approach, an opinion already set in stone.