Kaj_Sotala comments on Draft of Muehlhauser & Salamon, 'Intelligence Explosion: Evidence and Import' - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (44)
I have to compliment you on this paper: as with The Singularity and Machine Ethics, I can't help but consider this one of the clearest, best-argued, and most-sticking-to-the-point Singularity papers that I've read. It also seems considerably improved from some of the earlier drafts that I saw.
I think I know what you mean by this paragraph, but its intended meaning will be unclear to someone who doesn't. Also, "the outcomes of these choices will depend, among other things, on whether AI is created in the near future" took me a moment to parse - as written, it seems to be saying "the outcome of these choices will depend on the outcome of these choices". (The first rule of Tautology Club is the first rule of Tautology Club.)
I suggest something like:
"But uncertainty is not a “get out of prediction free” card. We still need to decide whether or not to encourage WBE development, whether or not to help fund AI safety research, etc. Deciding either way already implies some sort of prediction: choosing not to fund AI safety research suggests that we do not think AI is near, while funding it implies that we think it might be."
or
"But uncertainty is not a “get out of prediction free” card. We still need to decide whether or not to encourage WBE development, whether or not to help fund AI safety research, etc. These choices will then lead to one outcome or another. Analogously, uncertainty about the reality and effects of climate change does not mean that our choices are irrelevant, that any choice is as good as any other, or that better information could not lead to better choices. The same is true for choices relating to the intelligence explosion."
If you want to cite a more recent summary, there's
Rushton, J. P. & Ankney, C. D. (2009). Whole Brain Size and General Mental Ability: A Review. The International Journal of Neuroscience, 119(5), 692-732. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2668913/
For my brief summary of the claims in Rushton & Ankney, see this Cogsci Stackexchange answer.
I did include Rushton in my research summary, IIRC, but it's probably a good idea not to cite him - Rushton is poison! An editor on Wikipedia even tried to get me punished by the ArbCom for citing Rushton. (What saved me was being very, very careful never to mention race in any form, and to specifically disclaim it.)
Did you mean The Singularity and Machine Ethics?
Thank you for your comments.
Yeah.