CarlShulman comments on Should I believe what the SIAI claims? - Less Wrong

23 Post author: XiXiDu 12 August 2010 02:33PM


Comment author: CarlShulman 13 August 2010 11:19:03AM 6 points [-]

Robin thinks emulations will probably come before AI, that non-emulation AI would probably be developed by large commercial or military organizations, that AI capacity would ramp up relatively slowly, and that extensive safety measures will likely prevent organizations from losing control of their AIs. He says that still leaves enough of an existential risk to be worth working on, but I don't know his current estimate. Also, some might differ from Robin in valuing a Darwinian/burning the cosmic commons outcome.

I don't know of any charitable contributions Robin has made to any organization, or any public analysis or ranking of charities by him.

Comment author: CarlShulman 13 August 2010 06:09:37PM 2 points [-]

Robin gave me an all-AI-causes existential risk estimate of between 1% and 50%, meaning that he was confident that after he spent some more time thinking he would wind up giving a probability in that range.

Comment author: XiXiDu 13 August 2010 12:15:08PM 1 point [-]

Thanks, this is the kind of informed (I believe, in Hanson's case) contrarian third-party opinion about the main issues that I perceive to be missing.

Surely I could have found out about this myself. But if I were going to wait until I had first finished my studies of the basics, i.e. caught up on formal education, then read the relevant background information and afterwards all of LW, I might as well not donate to the SIAI at all for the next half-decade.

Where is the kind of summary that is available for other issues like climate change? The Talk.origins of existential risks, especially superhuman AI?