CarlShulman comments on Should I believe what the SIAI claims? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Robin thinks emulations will probably come before AI, that non-emulation AI would probably be developed by large commercial or military organizations, that AI capacity would ramp up relatively slowly, and that extensive safety measures would likely prevent organizations from losing control of their AIs. He says that still leaves enough of an existential risk to be worth working on, but I don't know his current estimate. Also, some might differ from Robin in valuing a Darwinian/burning-the-cosmic-commons outcome.
I don't know of any charitable contributions Robin has made to any organization, or any public analysis or ranking of charities by him.
Robin gave me an all-AI-causes existential risk estimate of between 1% and 50%, meaning that he was confident that after he spent some more time thinking he would wind up giving a probability in that range.