If I'm a moral anti-realist, must I necessarily believe that provably Friendly AI is impossible?
No. I mean, I'm unsure about the possibility of provably Friendly AI, but it isn't obvious that anti-realism rules it out. Moral realism, if true, might make things easier, but it's hard for me to imagine what that world would look like.
Let us define a morality function F() as taking as input x = the factual circumstances an agent faces in making a decision, and outputting y = the decision the agent makes. It is fairly apparent that practically every agent has an F(). So ELIEZER(x) is the function that describes what Eliezer would choose in situation x. Next, define GROUP{} as the set of morality functions run by all the members of that group.
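To make the definitions concrete, here is a minimal sketch in Python. The types and the sample decision rule are illustrative assumptions, not anything from the original post; the point is only that a morality function is a mapping from circumstances to decisions, and a group is just a collection of such functions.

```python
from typing import Callable, Tuple

# A morality function F() maps factual circumstances x to a decision y.
# Situation and Decision are placeholder types chosen for illustration.
Situation = str
Decision = str
MoralityFunction = Callable[[Situation], Decision]

def eliezer(x: Situation) -> Decision:
    """Toy stand-in for ELIEZER(x): the decision one agent makes in situation x.
    The rule here is arbitrary, purely to make the function runnable."""
    return "cooperate" if "trust" in x else "defect"

def group(*members: MoralityFunction) -> Tuple[MoralityFunction, ...]:
    """GROUP{}: the set of morality functions run by the members of a group."""
    return tuple(members)

# A group is then just a collection of agents' morality functions:
g = group(eliezer)
```

Under this framing, CEV() would be a higher-order function: it takes one or more morality functions as input and returns another morality function as output.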
Let us define CEV() as the function that takes as input a morality function or set of morality functions and outputs a morality function that is improved/ma...
From the last thread:
Meta: