benelliott comments on Jews and Nazis: a version of dust specks vs torture - Less Wrong

Post author: shminux 07 September 2012 08:15PM




Comment author: benelliott 09 September 2012 03:44:48PM

I don't want to be rude, but your first example in particular looks like a case where it's beneficial to signal lexicographic preferences.

Since I do not really know how to optimize for any of this

What do you mean, you don't know how to optimise for this? If you want an FAI, then donating to SIAI almost certainly does more good than doing nothing (even if they aren't as effective as they could be, they almost certainly don't have zero effectiveness; if you think they have negative effectiveness, then you should be persuading others not to donate). If your preferences are truly lexicographic, any time spent acquiring or spending time with true friends would be better spent earning money to donate (or persuading others not to). This is what I mean when I say that in the real world, lexicographic preferences just cash out as not caring about the bottom at all.
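To make the structure concrete, here is a minimal sketch of what a lexicographic comparison amounts to formally (my own illustration; the tuple layout, function name, and example numbers are assumptions, not anything stated in the thread):

```python
def lexicographically_better(a, b):
    """Return True if outcome a is lexicographically preferred to b.

    Each outcome is a pair (top_tier_value, bottom_tier_value),
    e.g. (probability of an FAI, number of true friends).
    """
    if a[0] != b[0]:
        return a[0] > b[0]   # any nonzero edge in the top tier decides it
    return a[1] > b[1]       # the bottom tier only breaks exact ties

# Even a vanishingly small gain in FAI probability outweighs any
# number of true friends under such preferences:
print(lexicographically_better((1e-12, 0), (0.0, 10**9)))  # True
```

The point of the sketch is just that the bottom tier never trades off against the top tier at any exchange rate, which is why, in practice, it ends up carrying no weight at all.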

You've also confused the issue by talking about personal preferences, which tend to be non-linear, rather than interpersonal ones. It may well be that the value of both ardent followers and true friends suffers diminishing returns as you get more of them, and probably tends towards an asymptote. The real question is not "do I prefer an FAI to any number of true friends" but "do I prefer a single true friend to any chance of an FAI, however small", and there the answer, for me at least, seems to be no.