bokov comments on Arguments Against Speciesism - LessWrong
Why should I value what humans have been selected for? Why would I want to keep "us" alive?
I think those two questions are at least as question-begging as the reasons for my view, if not more so.
What I know for sure is that I dislike my own suffering, not because I'm sapient and it is happening to me, but because it is suffering. And I want to do something in life that is about more than just me. Ultimately, this might not be a "truer" reason than "what I have been selected for", but it appeals to me more than anything else.
All rationality requires is a goal. You may not share my goals. I have noticed, however, that some people haven't thought through all the implications of their stated goals. Especially on LW, people are very quick to declare something to be of terminal value to them, which unfortunately serves as a self-fulfilling prophecy.
I discovered that intuitions are easy to change. People definitely have stronger emotional reactions to things happening to those who are close, but do they really, on an abstract level, care less about those who are distant? Do they want to care less, or would they take a pill that turned them into universal altruists?
And how do you do that?
If a situation arises where you can serve your self-interest by defecting, the rational thing to do is to defect. Don't tell yourself that you're being a decent person purely out of self-interest; you'd be deceiving yourself. Yes, if everyone followed some moral code written for societal interaction among moral agents, then everyone would be doing well (though not perfectly well). However, given that you cannot expect others to follow through, your decision not to "break the rules" is an altruistic decision in (at least) all the cases where you are sufficiently unlikely to get caught.
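To make the game-theoretic point concrete, here is a minimal Python sketch using standard Prisoner's Dilemma payoffs (the specific numbers are illustrative assumptions, not anything from the comment). It shows that in a one-shot interaction defection strictly dominates for a purely self-interested agent, which is exactly why choosing to cooperate anyway counts as altruism rather than disguised self-interest.

```python
# Illustrative one-shot Prisoner's Dilemma payoffs (assumed values):
# PAYOFFS[(my_move, their_move)] -> my payoff.
PAYOFFS = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"), key=lambda my: PAYOFFS[(my, their_move)])

# Defection is a dominant strategy: it is the best response to either move,
# so self-interest alone never recommends cooperating in the one-shot case.
assert best_response("cooperate") == "defect"  # 5 > 3
assert best_response("defect") == "defect"     # 1 > 0
```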
You may also ask yourself whether you would press a button that inflicts suffering on a child (or a cow) far away, gives you ten dollars, and makes you forget that any of it happened. Would you want to self-modify into the kind of person who easily pushes the button? If not, just how much altruism are you going to settle for, and why not go for the (non-arbitrary) whole cake?
Experience and observation of others have taught me that when people try to derive a normative code of behavior top-down, they often end up with something that is in subtle ways incompatible with their selfish drives. They are therefore tempted to cheat on their high-minded morals, and they react to the resulting cognitive dissonance either by coming up with reasons why it's not really cheating or by working ever harder to suppress their temptations.
I've been down the egalitarian altruist route; it came crashing down (several times) until I finally learned to admit that I'm a bastard. Now, instead of agonizing over whether my right to FOO outweighs Bob's right to BAR, I have the simpler problem of optimizing my own long-term FOO and trusting Bob to optimize his own BAR.
I still cheat, but I don't waste time on moral posturing; instead, I try to treat the cheating as a sign that perhaps I still don't fully understand my own utility function. Imagine how far off the mark I'd be if I were simultaneously trying to optimize Bob's!