Richard_Hollerith2 comments on The Level Above Mine - Less Wrong

Post author: Eliezer_Yudkowsky 26 September 2008 09:18AM


Comment author: Richard_Hollerith2 26 September 2008 03:53:27PM 0 points

If not, why aren't you in the camp of those who wish to improve human intelligence?

I'll take this one because I'm almost certain Eliezer would answer the same way.

Working on AI is a more effective way of increasing the intelligence of the space and matter around us than improving human intelligence is: the probability of making substantial progress is higher.

Comment author: Kingreaper 03 October 2010 05:58:18PM 1 point

I disagree. Human intelligence is clearly misoptimised for many goals, and I see no clear evidence that designing a new intelligence from scratch is easier than optimising the human one.

The two approaches have very different possible effects: "FOOM!" vs. "We are awaiting GFDCA [Genetics, Food, Drugs and Cybernetics Administration] approval of this new implant/chimerism/genehack." So the average impact of human-optimisation may be lower, but my probability estimate for human-improvement tech succeeding is much higher.