HughRistik comments on Intelligence enhancement as existential risk mitigation - Less Wrong

17 [deleted] 15 June 2009 07:35PM




Comment author: steven0461 15 June 2009 08:14:17PM *  0 points

More intelligence also means more competence at doing potentially world-destroying things, like AI/upload/nano/supervirus research. It does seem to me like the anti-risk effect from intelligence enhancement would somewhat outweigh the pro-risk effect, but I'm not sure.

Comment author: HughRistik 15 June 2009 11:48:10PM 0 points

Yes, increasing intelligence would increase the variance in the quality of outcomes for humanity. The hope is that it would also raise the mean quality of outcomes enough that the expected value would be higher.