
HamletHenna comments on In favour of a selective CEV initial dynamic - Less Wrong Discussion

12 [deleted] 21 October 2011 05:33PM




Comment author: [deleted] 22 October 2011 07:43:35PM *  2 points

If it were the case that the existence of other simultaneous AGI projects was considered likely as the FAI project came to fruition, then this consideration would become important.

If? IDSIA, Ben Goertzel's OpenCog, Jeff Hawkins's Numenta, Henry Markram's Blue Brain emulation project, and the SIAI are already working toward AGI, and none of them is using your "selective second option". The 2011 AGI conference reviewed some fifty papers on the topic. Projects already exist. As the field grows and computing becomes cheaper, the number of projects will increase.

You write that CEV "is the best (only?) solution that anyone has provided", so perhaps this is news. If you have read the sequences, you might know that Bill Hibbard advocated using human smiles and reinforcement learning to teach friendliness. Tim Freeman has his own answer. Stuart Armstrong came up with a proposal called "Chaining God". There are regular threads on Less Wrong debating points of CEV and trying to think of alternative strategies. Lukeprog has written on the state of the field of machine ethics. Ben Goertzel has a series of writings on the subject; "Thoughts on AI Morality" might be a good place to start.

"Civilisation" is not intended to have any a priori ethnic connotations.

I'm glad to hear you didn't intend that. I still believe "civilization" generally carries strong cultural connotations (which Wikipedia and a few dictionaries corroborate), and I offered the suggestion to improve your clarity, not to accuse you of racism.

Comment author: [deleted] 22 October 2011 09:54:48PM *  2 points

I have read the sequences. Since Yudkowsky so thoroughly refuted reinforcement learning, I don't think that idea deserves to be regarded as a feasible solution to Friendly AI.

On the other hand, I wasn't particularly aware of the wider AGI movement, so thanks for that. Obviously, when I say simultaneous AGI projects, I mean projects that are at a similarly advanced stage of development at that point in time, but your point stands.