steven0461 comments on Building toward a Friendly AI team - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I just meant that it seems to be possible to improve a lot of other people's expected quality of life at the expense of relatively small decreases to one's own (though people are generally not doing so). That seems like it should cause the outcome of a process with moral uncertainty between egoism and altruism to skew more toward the altruist side in some sense, though I don't understand how to deal with moral uncertainty (if anyone else does, I'd be interested in your answers to this). If by "abstract values" you mean something like making the universe as simple as possible by setting all the bits to zero, then I agree there's no asymmetry, but I wouldn't call that "egoistic" as such.
Here. Yes, SUAD was a good and relevant contribution.
You're right that it's not certain that altruism in a FAI team candidate is, all else equal, more desirable. I guess I'm just saying that if it is, then sufficiently large differences in altruism outweigh sufficiently small differences in rationality.
I have written a few more posts that are relevant to the "egoism vs altruism" question:
I guess we don't have more discussions of altruism vs. egoism because making progress on the problem is hard. Typical debates about moral philosophy are not very productive, and it's probably fortunate that LW is good at avoiding them.
Do you agree? Do you think there are good arguments to be had that we're not having for some reason? Does it seem to you that most LWers are just not very interested in the problem?