
E_Ransom comments on Open thread, Sept. 1-7, 2014 - Less Wrong Discussion

4 Post author: polymathwannabe 01 September 2014 12:18PM



Comment author: [deleted] 08 September 2014 03:22:05PM 1 point [-]

I care about (read: have a vested interest in) people who can influence my wellbeing and choices. Because all human beings have the potential to do this, I care about them to some degree, great or small. Because I cannot physically empathize with seven billion humans at once on an equal or appropriate level, I use a general altruistic axiom to determine how to act toward people I do not have the resources to physically care about.

That's my reason, at least, for having an altruistic axiom, explained in a terribly simple manner. I'm sure there are other, better explanations for working off altruistic axioms. I'm not making a case for the axiom, just explaining what I see as my reasons for having it.

Comment author: Metus 08 September 2014 07:54:20PM 1 point [-]

This thing is turning into a tautology. I care about people to the degree that they are useful to me. My friends and family are incredibly useful in the great state of mind they put me in. A person living in extreme poverty whom I have never met, not so much. They could be useful if they were highly educated and had access to sufficient capital to leverage their knowledge in ways complementary to my skills, but the initial investment far exceeds the potential gain.

What irks me is not the statement above but the tradeoff being made in utilitarianism: that the pain of other people should count as much as my pain. It simply does not.