Logos01 comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong

Post author: XiXiDu 14 November 2011 11:40AM



You are viewing a single comment's thread.

Comment author: JoshuaZ 15 November 2011 03:30:47AM 0 points

Taking energy to get there isn't what is relevant in that context. The relevant issue is that being intelligent consumes a lot of resources. This is an important distinction. And the fact that evolution doesn't optimize for intelligence but for other goals isn't really relevant, given that an AGI presumably won't optimize itself for intelligence either (a paperclip maximizer, for example, will make itself only as intelligent as it estimates is optimal for making paperclips everywhere). The point is that, based on the data from one very common optimization process, intelligence seems to be so resource-intensive in general that being highly intelligent is very rarely worth it. (This evidence is obviously weak. The substrate matters, as do other issues. But the basic point is sound.)

Note incidentally that most of the comment was not about evolved intelligences. This is not an argument occurring in isolation. See especially the other two remarks made.

Comment author: Logos01 15 November 2011 04:51:21AM 0 points

Note incidentally that most of the comment was not about evolved intelligences. This is not an argument occurring in isolation. See especially the other two remarks made.

Quite correct, but you're still making the fundamental error of extrapolating from evolution to non-evolved intelligences without first correcting for the difference between the "aims"/"goals" of evolution and those of designers, that is, for how designers might approach intelligence differently.