James_Miller comments on Why I haven't signed up for cryonics - Less Wrong

29 Post author: Swimmer963 12 January 2014 05:16AM




Comment author: James_Miller 13 January 2014 06:01:28PM 0 points [-]

I think that AI singularity is itself a conjunctive event,

The core thesis of my book Singularity Rising is (basically) that this isn't true, at least for the singularity, because there are many paths to a singularity and making progress along any one of them helps advance the others. For example, it seems highly likely that (conditional on our high-tech civilization continuing) within 40 years genetic engineering will have created much smarter humans than have ever existed, and these people will excel at computer programming compared to non-augmented humans.

Comment author: V_V 13 January 2014 06:34:06PM 2 points [-]

The core thesis of my book Singularity Rising is (basically) that this isn't true, for the singularity at least, because there are many paths to a singularity and making progress along any one of them will help advance the others.

Well, I haven't read your book, so I can't rule out that you might have made some good arguments I'm not aware of, but given the publicly available arguments I do know, I don't think this is true.

For example, it seems highly likely that (conditional on our high tech civilization continuing) within 40 years genetic engineering will have created much smarter humans than have ever existed and these people will excel at computer programming compared to non-augmented humans.

Is it?

There are some neurological arguments that the human brain is near the maximum intelligence limit for a biological brain.
We are probably not going to breed people with IQ >200; perhaps we might breed people with IQ 140-160, but will there be tradeoffs that make it problematic to do this at scale?
Will there be a demand for such humans?
Will they devote their efforts to AI research, or will their comparative advantage drive them to something else?
And how good will they be at developing super AI? As a technology matures, making progress becomes more difficult because the low-hanging fruit has already been picked, and intelligence itself might have diminishing returns (at the very least, I would be surprised to observe an inverse linear correlation between average AI-researcher IQ and time to AI).
And, of course, if singularity-inducing AI is impossible or impractical, the point is moot: these genetically enhanced Einsteins will not develop it.

In general, with enough imagination you can envision many highly conjunctive ad hoc scenarios and put them into a disjunction, but I find this type of thinking highly suspicious, because you could use it to justify pretty much anything you wanted to believe.
I think it's better to recognize that we don't have a crystal ball to predict the future, and that betting on extreme scenarios is probably not a good deal.