ciphergoth comments on Should I believe what the SIAI claims? - Less Wrong
If I were SIAI, my reasoning would be the following. First, stop with the believes/believes-not dichotomy and move to probabilities.
So what is the probability of a good outcome if you can't formalize friendliness before AGI? Some of them would argue it is infinitesimal. This is based on fast take-off, winner-take-all type scenarios (I have a problem with this stage, but I would like it to be properly argued, and that is hard).
So looking at the decision tree (under these assumptions), the only chance of a good outcome is to try to formalise FAI before AGI becomes well known. All the other options lead to extinction.
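The decision-tree argument can be sketched with illustrative, made-up numbers (the branch names and probabilities below are my own hypothetical placeholders, not anything SIAI has published), assuming a binary good-outcome/extinction model:

```python
# Hypothetical sketch of the decision tree. Under the fast take-off,
# winner-take-all assumption, P(good outcome | AGI arrives without a
# formalised theory of Friendliness) is taken to be infinitesimal.

p_good = {
    "AGI first, no Friendliness theory": 1e-9,  # "infinitesimal" (assumed)
    "formalise Friendliness before AGI": 0.2,   # hypothetical, merely non-negligible
}

# The argument is just that one branch dominates: it is the only option
# whose probability of a good outcome is not effectively zero.
best_branch = max(p_good, key=p_good.get)
print(best_branch)  # -> "formalise Friendliness before AGI"
```

The exact numbers don't matter to the argument; it goes through as long as every non-Friendliness branch is assigned a near-zero chance of a good outcome.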
So to attack the "formalise Friendliness before AGI" position, you would need to argue that the first AGIs are very unlikely to kill us all. That is the major battleground as far as I am concerned.
I'd look at it the other way: I'd take it as practically certain that any superintelligence built without explicit regard to Friendliness will be unFriendly, and ask what the probability is that, through sufficiently slow growth in intelligence and other mere safeguards, we manage to survive building it.
My best hope currently rests on the AGI problem being hard enough that we get uploads first.
(This is essentially the Open Thread about everything Eliezer or SIAI have ever said now, right?)
Uploading would have quite a few benefits, but I get the impression it would make us more vulnerable to whatever tools a hostile AI may possess, not less.