Dmytry comments on Real-life expected utility maximization [response to XiXiDu] - Less Wrong

8 Post author: Gabriel 12 March 2012 07:03PM


Comment author: Dmytry 15 March 2012 11:12:25AM, 0 points

To be honest, I don't see how expected utility (EU) is actually being applied to the friendly-AI question.

The arguments rest so heavily on pure guesswork that their external probabilities are very low, and the differences in utilities could plausibly be so small that someone could reasonably say, 'I wouldn't give up $1 of my own to provide $1 million for an attempt to mitigate the risk of UFAI, even if you argue that a UFAI would torture every possible human mind-state.' [Note: I mean literal dollars, not resources, so the global utility of creating $1 million is zero.]

The only way EU comes into play is through an appeal to a purely intuitive feeling: that the efficacy of the FAI effort can't possibly be so low that it degrades such a giant utility down to the trivial level of "should I chew gum or not", or even unimaginably below that. Unfortunately, it can. The AI design space is multi-dimensional and very large, so the intuitive feeling may be correct or entirely wrong. There are also many fallacies that can throw that intuition far off: being graded for effort in education contributes to one, and the just-world fallacy contributes to another.