lukeprog comments on BOOK DRAFT: 'Ethics and Superintelligence' (part 1) - Less Wrong

Post author: lukeprog 13 February 2011 10:09AM

Comment author: lukeprog 14 February 2011 06:20:09AM 2 points

Dorikka,

I don't understand this. If the singleton's utility function were written such that its highest value was for humans to actually become the Affront, then making humans merely believe they were the Affront, while not being the Affront, would not satisfy the utility function. So why would the singleton do such a thing?
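The distinction here is between a utility function defined over the actual world-state and one defined over agents' beliefs about it. A minimal sketch (all names hypothetical, purely illustrative of the point, not anyone's actual proposal):

```python
# Toy illustration: a utility function defined over the actual world-state
# assigns no value to a merely believed (simulated) outcome.

def utility(world):
    """Rewards humans actually being the Affront, not believing they are."""
    return 1.0 if world["humans_are_affront"] else 0.0

# Genuine transformation vs. a VR illusion that only alters beliefs.
real_transformation = {"humans_are_affront": True, "humans_believe_affront": True}
vr_illusion = {"humans_are_affront": False, "humans_believe_affront": True}

print(utility(real_transformation))  # satisfied by the real world-state
print(utility(vr_illusion))          # the illusion scores nothing
```

Since `utility` never inspects the belief field, a singleton maximizing it has no incentive to produce the illusion, which is lukeprog's point.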

Comment author: Dorikka 15 February 2011 02:45:39AM 2 points

I don't think that my brain was working optimally at 1am last night.

My first point was that our CEV might decide to go Baby-Eater, and so the FAI should treat the caring-about-the-real-world-state part of its utility function as a mere preference (like a taste for chocolate ice cream) and pop humanity into a nicely designed VR (though I didn't have the precision of thought needed to put it in such language). However, it's pretty absurd for us to be telling our CEV what to do, considering that it will have much more information than we do and much more refined thinking processes. I actually don't think our Last Judge should do anything more than watch for coding errors (for example, our forgetting to remove known psychological biases when creating the CEV).

My second point was that the FAI should also slip us into a VR if we desire a world-state in which we defect from one another (with results similar to those in the prisoner's dilemma). However, the counterargument from point 1 applies to this point as well.