asd comments on Steelmaning AI risk critiques - Less Wrong Discussion

Post author: Stuart_Armstrong, 23 July 2015 10:01AM

Comment author: [deleted], 23 July 2015 09:22:15PM

Thank you for that Irreducible Detail article; I remember reading it before but couldn't find it later. Hanson's argument is convincing and intuitive, and it sheds light on what intelligence might actually be. When I think about my own intelligence, it doesn't feel like I have some overarching general module doing the planning; it feels more like I have many simple heuristics, rules of thumb, and automatic behaviors that just happen to work. That matches Hanson's idea of intelligence.

I think this is the single best argument against MIRI's idea of intelligence.

Here is an interesting article in the same vein.