luminosity comments on Cryonics Questions - Less Wrong

Post author: James_Miller 26 August 2010 11:19PM




Comment author: luminosity 27 August 2010 07:34:43AM 4 points

A little nit-picky, but:

A friendly singularity would likely produce an AI that in one second could think all the thoughts that would take a billion scientists a billion years to contemplate.

Without a source, these figures imply a precision that you don't back up. Are you really so confident that an AI of this level of intelligence will exist? I feel your point would be stronger without the implied precision. Perhaps:

A friendly singularity would likely produce a superintelligence capable of mastering nanotechnology.

Comment author: NihilCredo 29 August 2010 01:33:34PM 4 points

More generally, any time the subject of AI comes up, I would recommend making an effort to avoid describing it in terms that sound suspiciously like wish fulfillment or snake-oil promises, or in any phrasing that triggers scam/cult red flags.