
Xachariah comments on Cynical explanations of FAI critics (including myself) - Less Wrong Discussion

Post author: Wei_Dai · 13 August 2012 09:19PM · 21 points




Comment author: Xachariah · 14 August 2012 08:15:17PM · 0 points

I think we share the same sentiment, though we may be using different terminology. To paraphrase Eliezer on CEV: "Not to trust the self of this passing moment, but to try to extrapolate a me who knew more, thought faster, and was more the person I wished I were. Such a person might be able to avoid the fundamental errors. And still fearful that I bore the stamp of my mistakes, I should include all of the world in my extrapolation." Basically, I believe there is no CEV short of the whole of human morality. Though I do admit that CEV faces a hard problem in the case of mutually conflicting desires.

If you hold CEV to be personal rather than universal, then I agree that SIAI should work on that 'universal CEV', whatever it may be named.

Comment author: Pentashagon · 15 August 2012 01:00:46PM · 2 points

I just re-read EY's CEV paper and noticed that I had forgotten quite a bit since the last time I read it. He goes over most of the things I whined about. My lingering complaint/worry is that human desires won't converge, but so long as CEV just says "fail" in that case instead of "become X maximizers" we can potentially start over with individual or smaller-group CEV. A thought experiment I have in mind is what would happen if more than one group of humans independently invented FAI at the same time. Would the FAIs merge, cooperate, or fight?

I am also not quite sure how FAI will actively prevent other AI projects, whole brain simulations, or other FOOMable things, or whether that's even the point. It may be up to humans to ask the FAI how to prevent existential risks and then implement the solutions themselves.