CarlShulman comments on Cynical explanations of FAI critics (including myself) - Less Wrong Discussion

21 points | Post author: Wei_Dai | 13 August 2012 09:19PM

Comment author: CarlShulman | 15 August 2012 11:20:48PM | 6 points

Discussion of intelligence enhancement via reproductive biotechnology can occur smoothly here, e.g. in Wei Dai's post and associated comment thread several months ago. Looking at those past comments, I am almost certain that I could rewrite your comment to convey the same core points and yet have it be upvoted.

I think your comment was relatively ill-received because:

1) It threw in a number of other questionable claims on different topics without much support, rather than focusing on one at a time, and it suggested very high confidence in the whole agglomeration while not addressing important variables: e.g., how much a shift in the IQ distribution would help vs. hurt, how much this depends on social norms rather than just the steady advance of technology, how much leverage a few people have over those norms by participating in ideological arguments, and so forth.

2) The style was more stream-of-consciousness and in-your-face, rather than cautiously building up an argument for consideration.

3) There was a vibe of "grr, look at that oppressive taboo!" or "Hear me, O naive ideologically-blinkered folks!" That signals, to some extent, that one is in a "color war" mood, or attracted to the ideological high of striking a blow for one's views against ideological enemies. It positively invites a messy political fight rather than a focused discussion of the prospects for reproductive biotechnology to improve humanity's outlook.

4) People like Nick Bostrom have written whole papers about biological enhancement, e.g. his paper on using evolutionary heuristics to look for promising enhancement possibilities. Look at its bibliography. Or consider the Less Wrong post by Wei Dai I mentioned earlier, and others like it. People focused on AI risk are not simply unaware of the behavioral-genetics or psychometrics literatures, and it's a bit annoying to have them presented as some kind of secret knock-down argument.