PhilGoetz comments on SIAI - An Examination - Less Wrong

Post author: BrandonReinhart 02 May 2011 07:08AM




Comment author: PhilGoetz 15 May 2011 04:50:15AM * 4 points

I agree entirely with both of wedifrid's comments above. Just read the CEV document, and ask, "If you were tasked with implementing this, how would you do it?" I tried unsuccessfully many times to elicit details from Eliezer on several points back on Overcoming Bias, until I concluded he did not want to go into those details.

One obvious question is, "The expected value calculations that I make from your stated beliefs indicate that your Friendly AI should prefer killing a billion people over taking a 10% chance that one of them is developing an AI; do you agree?" (If the answer is "no", I suspect that is only due to time discounting of utility.)
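To make that expected value comparison concrete, here is a minimal sketch in Python; the probability, the death toll, and especially the value assigned to the long-term future are illustrative assumptions, not figures taken from any SIAI document:

```python
# Illustrative expected-value comparison; every number here is an assumption.
deaths_if_act = 1e9     # lives lost by killing a billion people now
p_rival_ai    = 0.10    # assumed chance that one of them is developing an AI
future_lives  = 1e30    # assumed value of the future forfeited to an unfriendly AI
discount      = 1.0     # no time discounting; a factor < 1 shrinks the second term

ev_act  = -deaths_if_act
ev_wait = -p_rival_ai * future_lives * discount

# Under these assumptions the expected loss from waiting dwarfs a billion deaths,
# which is the uncomfortable conclusion the question points at.
print(ev_act > ev_wait)   # True
```

With a steep enough discount factor the inequality flips, which is why the parenthetical singles out time discounting of utility.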

Comment author: DrRobertStadler 13 September 2011 08:51:02PM 1 point

Surely, though, if the FAI is in a position to execute that action, it is already so far ahead of any AI someone could be developing that it would have little to fear from that possibility as a threat to CEV?

Comment author: PhilGoetz 15 September 2011 10:18:08PM 1 point

It won't be very far ahead of an AI in realtime. The idea that the FAI can get far ahead is based on the idea that it can develop very far in a "small" amount of time. Well, so can the new AI - and who's to say it can't develop 10 times as quickly as the FAI? So how can a one-year-old FAI be certain there isn't an AI project that was started secretly 6 months ago and is about to overtake it in intelligence?
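A toy growth-race model makes the point about relative speed concrete; the doubling times, the 10x factor, and the 6-month lag below are all illustrative assumptions:

```python
# Toy growth-race model; every rate here is an illustrative assumption.
fai_doubling   = 10.0   # months per capability doubling for the FAI (assumed)
rival_doubling = 1.0    # the rival is assumed to double 10x as fast
rival_start    = 6.0    # the rival project starts 6 months after the FAI

def fai_capability(t_months):
    return 2 ** (t_months / fai_doubling)

def rival_capability(t_months):
    if t_months < rival_start:
        return 0.0
    return 2 ** ((t_months - rival_start) / rival_doubling)

# Capabilities are equal when t / fai_doubling == (t - rival_start) / rival_doubling,
# i.e. t = rival_start / (1 - rival_doubling / fai_doubling).
t_overtake = rival_start / (1 - rival_doubling / fai_doubling)
print(round(t_overtake, 2))  # ~6.67 months: the rival overtakes under a month after it starts
```

Under these assumptions a six-month head start evaporates almost immediately, which is exactly the worry about a secretly developed project.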