ciphergoth comments on Should I believe what the SIAI claims? - Less Wrong

Post author: XiXiDu, 12 August 2010 02:33PM



Comment author: whpearson 12 August 2010 07:49:15PM 6 points

If I were SIAI, my reasoning would be the following. First, drop the believes/believes-not dichotomy and move to probabilities.

So what is the probability of a good outcome if you can't formalize Friendliness before AGI? Some of them would argue it is infinitesimal. This is based on fast take-off, winner-take-all scenarios (I have a problem with this step, but I would like it to be properly argued, and that is hard).

So looking at the decision tree (under these assumptions), the only chance of a good outcome is to try to formalize FAI before AGI becomes well known. All the other options lead to extinction.
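The branch comparison above can be sketched as a toy expected-outcome calculation. All numbers here are hypothetical placeholders chosen only to illustrate the structure of the argument; they are not figures from the thread or from SIAI:

```python
# Toy decision-tree sketch of the argument above.
# Assumption (from the comment): in a fast take-off, winner-take-all
# scenario, an AGI built without formalized Friendliness yields a good
# outcome with only infinitesimal probability.
# All numeric values below are hypothetical, for illustration only.

P_GOOD_WITHOUT_FRIENDLINESS = 1e-9  # "infinitesimal" under the assumptions
P_FORMALIZE_IN_TIME = 0.01          # hypothetical chance FAI is formalized first
P_GOOD_GIVEN_FORMALIZED = 0.9       # hypothetical chance the formalization works

def p_good_outcome(try_to_formalize: bool) -> float:
    """Probability of a good outcome for each branch of the decision tree."""
    if not try_to_formalize:
        return P_GOOD_WITHOUT_FRIENDLINESS
    # If we try, either we formalize in time (and it probably works),
    # or we fail and are back in the unfriendly-AGI branch.
    return (P_FORMALIZE_IN_TIME * P_GOOD_GIVEN_FORMALIZED
            + (1 - P_FORMALIZE_IN_TIME) * P_GOOD_WITHOUT_FRIENDLINESS)

# Under these assumptions, even a small chance of formalizing first
# dominates the alternative branch.
assert p_good_outcome(True) > p_good_outcome(False)
```

The point of the sketch is only that if every non-formalization branch carries near-zero probability of a good outcome, the "try to formalize first" branch dominates regardless of how small its own success probability is.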

So to attack the "formalize Friendliness before AGI" position, you would need to argue that the first AGIs are very unlikely to kill us all. That is the major battleground as far as I am concerned.

Comment author: ciphergoth 13 August 2010 06:07:51AM 2 points

I'd look at it the other way: I'd take it as practically certain that any superintelligence built without explicit regard to Friendliness will be unFriendly, and ask what the probability is that, through sufficiently slow growth in intelligence and other mere safeguards, we manage to survive building it.

My best hope currently rests on the AGI problem being hard enough that we get uploads first.

(This is essentially the Open Thread about everything Eliezer or SIAI have ever said now, right?)

Comment author: NihilCredo 15 August 2010 12:19:51AM 1 point

Uploading would have quite a few benefits, but I get the impression it would make us more vulnerable to whatever tools a hostile AI may possess, not less.