Kaj_Sotala comments on Draft of Muehlhauser & Salamon, 'Intelligence Explosion: Evidence and Import' - Less Wrong

17 Post author: lukeprog 22 February 2012 04:05AM




Comment author: Kaj_Sotala 22 February 2012 07:33:08AM 13 points

I have to compliment you on this paper: as with Machine Ethics and Superintelligence, I can't help but consider this one of the clearest, best-argued, and most-sticking-to-the-point Singularity papers that I've read. This also seems considerably improved from some of the earlier drafts that I saw.

But uncertainty is not a “get out of prediction free” card. We either will or will not choose to encourage WBE development, will or will not help fund AI safety research, etc. The outcomes of these choices will depend, among other things, on whether AI is created in the near future.

I think I know what you mean by this paragraph, but its intended meaning will be unclear to someone who doesn't. Also, "the outcomes of these choices will depend, among other things, on whether AI is created in the near future" took me a moment to parse - as it stands, it seems to be saying "the outcome of these choices will depend on the outcome of these choices". (The first rule of Tautology Club is the first rule of Tautology Club.)

I suggest something like:

"But uncertainty is not a “get out of prediction free” card. We still need to decide whether or not to encourage WBE development, whether or not to help fund AI safety research, etc. Deciding either way already implies some sort of prediction - choosing not to fund AI safety research suggests that we do not think AI is near, while funding it implies that we think it might be."

or

"But uncertainty is not a “get out of prediction free” card. We still need to decide whether or not to encourage WBE development, whether or not to help fund AI safety research, etc. These choices will then lead to one outcome or another. Analogously, uncertainty about the reality and effects of climate change does not mean that our choices are irrelevant, that any choice is as good as any other, or that better information could not lead to better choices. The same is true for choices relating to the intelligence explosion."

and that brain size and IQ correlate positively in humans, with a correlation coefficient of about 0.35 (McDaniel 2005).

If you want to cite a more recent summary, there's

Rushton, J.P. & Ankney, C.D. (2009). Whole Brain Size and General Mental Ability: A Review. The International Journal of Neuroscience, 119(5), 692-732. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2668913/

For my brief summary of the claims in Rushton & Ankney, see this Cogsci Stackexchange answer.

Comment author: gwern 22 February 2012 08:29:12PM *  9 points

Rushton, J.P. & Ankney, C.D. (2009) Whole Brain Size and General Mental Ability: A Review. The International Journal of Neuroscience, 119(5), 692-732. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2668913/

I did include Rushton in my research summary, IIRC, but it's probably a good idea not to cite him - Rushton is poison! An editor on Wikipedia even tried to get me punished by the ArbCom for citing Rushton. (What saved me was being very, very careful never to mention race in any form, and to specifically disclaim it.)

Comment author: lukeprog 22 February 2012 07:14:43PM 1 point

as with Machine Ethics and Superintelligence...

Did you mean The Singularity and Machine Ethics?

Thank you for your comments.

Comment author: Kaj_Sotala 22 February 2012 07:46:03PM 0 points

Did you mean The Singularity and Machine Ethics?

Yeah.