
eli_sennesh comments on Steelmaning AI risk critiques - Less Wrong Discussion

26 Post author: Stuart_Armstrong 23 July 2015 10:01AM


Comment author: [deleted] 03 August 2015 04:01:35AM 0 points [-]

Yes, we are. I have made a detailed, extensive, well-cited, and well-reviewed case that human minds are just that.

That isn't quite correct. We do have hardwiring that raises and lowers the from-the-inside importance of specific features in our learning data. That is, we have a nontrivial inductive bias which not all possible minds will share, even if we start by assuming that all minds are semi-modular universal learners.
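The point about inductive bias can be made concrete with a toy sketch (mine, not the commenter's): two learners fit the same data, but each carries a different per-feature prior, implemented here as per-feature ridge penalties. The feature names and penalty values are illustrative assumptions, not anything from the discussion.

```python
# Illustrative sketch: inductive bias as a per-feature prior.
# Two ridge-style learners see identical data but penalize features
# differently, so they draw different conclusions from it.
import numpy as np

def fit_ridge(X, y, penalties):
    """Closed-form ridge regression with a per-feature penalty vector."""
    P = np.diag(penalties)
    return np.linalg.solve(X.T @ X + P, X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
# Ground truth: both features matter equally.
y = X @ np.array([1.0, 1.0]) + rng.normal(scale=0.1, size=20)

# Learner A is biased toward feature 0 (tiny penalty on it, huge on 1).
w_a = fit_ridge(X, y, penalties=np.array([0.01, 100.0]))
# Learner B carries the opposite bias.
w_b = fit_ridge(X, y, penalties=np.array([100.0, 0.01]))

print(w_a)  # weight on feature 0 dominates
print(w_b)  # weight on feature 1 dominates
```

Same data, different learned weights: the "importance" of each feature from the learner's side is set by the prior it was wired with, not by the data alone.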