shminux comments on AI risk, new executive summary - Less Wrong

Post author: Stuart_Armstrong 18 April 2014 10:45AM




Comment author: nshepperd 21 April 2014 03:49:09AM 0 points

To use some drastically different pairing, if you agree that an amoeba can never comprehend fish, that fish can never comprehend chimps, that chimps can never understand humans, then there is no reason to stop there and proclaim that humans would understand whatever intelligence comes next.

Yes, if you look through the tower of goals, more intelligent species have more complex goals.

This seems like a bogus use of the outside view. AGI is qualitatively different from evolved intelligence: it is built by a lesser intelligence rather than evolved. Moreover, there is a simple explanation for the observation that more intelligent animals have more complex goals: more intelligence permits more subgoals, and natural selection generally alters a species' goals by adding to them rather than simplifying them. That mechanism is pretty much totally inapplicable to a constructed AGI.

Comment author: shminux 21 April 2014 05:00:02AM -2 points

I'd love to hear what actual AGI experts think about it, not just us idle forum dwellers.