NancyLebovitz comments on Open Thread for January 8 - 16 2014 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
If you're expecting the singularity within a century, does it make sense to put any thought into eugenics except for efforts to make it easy to avoid the worst genetic disorders?
That seems to depend on a number of assumptions -- your timeline, whether you expect a soft or a hard takeoff, the centrality of raw intelligence vs. cultural effects to research quality, possible nonlinearity of network effects on intellectual output. But I'd bet that the big one is time: if you think (unrealistically, but run with it) that you can improve a test population's intelligence by 50%, that could be very significant if you're expecting a 2100 singularity but likely won't be if you're expecting one before they graduate from college.
This could be generalised to putting any thought into anything: will the singularity arrive within one childhood? If not, more smart people may be useful to apply to the problem. If you're smart, make more smart people.
Good point. The cutoff is not necessarily the singularity, either: once we have sufficiently advanced genetic engineering, there's no point to eugenics.