
vi21maobk9vp comments on Cynical explanations of FAI critics (including myself) - Less Wrong Discussion

Post author: Wei_Dai 13 August 2012 09:19PM




Comment author: vi21maobk9vp 14 August 2012 06:38:20AM 0 points

About agreement: for agreement we need all our evidence to be shareable, and our priors to be close enough. The actual evidence (or hard-to-notice inferences) cited in the Sequences about the possibility of significantly super-human AGI on reasonable hardware is quite limited, and not enough to overcome differences in priors.

I do think humanity will build slightly super-human AGI, but as usual with computers it will mimic our then-current idea of how the human brain actually works, and then be improved as the design allows. In that direction, HTM (as pursued by Jeff Hawkins via his current startup, Numenta) may end up polished into the next big thing in machine learning, or a near-flop with few uses.

Also, it is not clear that people will ever get around to building a general function-optimizing AI. Maybe executing prescribed behaviours, rather than open-ended optimization, will end up being the way to safeguard AI from wild decisions.