Consistency of reciprocity?

0 Post author: yttrium 16 December 2012 07:08PM

Many people see themselves as belonging to various groups (the population of their home country, or their social network), and feel justified in caring more about the well-being of people in that group than about the well-being of others. They argue from reciprocity: "Those people pay taxes in our country, so they are entitled to more support from 'us' than others are!" My question is: Is this inconsistent with any rationality axioms that seem obvious? What often-adopted or otherwise reasonable axioms would make it inconsistent?

Comments (18)

Comment author: buybuydandavis 17 December 2012 01:29:46AM 7 points [-]

My question is: Is this inconsistent with some rationality axioms that seem obvious?

No. Rationality has not sent you orders to serve the collective. You are free to value the people and things you in fact value.

Comment author: timtyler 16 December 2012 08:13:21PM *  2 points [-]

That's cultural kin selection. It isn't necessarily bad - for example, sometimes supporting your group pays. Of course, it can be bad - when patriotism leads to dying in battle for the sake of your comrades, that isn't so great for those who fell.

Comment author: Kawoomba 16 December 2012 09:10:37PM 1 point [-]

Depends on their [the fallen's] values.

Comment author: timtyler 16 December 2012 11:53:40PM 1 point [-]

If you don't think dying is sufficiently bad, feel free to substitute an example of memetic hijacking of your choice.

Comment author: Kawoomba 17 December 2012 07:08:30PM 0 points [-]

Oh, I certainly agree, but who are we to decide how others value dying relative to their other goals? Their utility function ain't faulty just because we call its features memetic hijacking.

Comment author: aelephant 16 December 2012 11:55:44PM 0 points [-]

In one of Yvain's posts he mentions that a perfect utilitarian "attaches exactly the same weight to others' welfare as to [his or her] own". Utilitarianism seems to be popular here. "Others" seems to imply all others and makes no distinctions.

Comment author: TrE 17 December 2012 07:03:46AM *  0 points [-]

Well, nobody can dictate which terminal values you should have, i.e. the utility function is not up for grabs. However, if you choose a class of things similar to you (e.g. everything which weighs 150 lbs, everyone of your race, every human less than 5 km away, all human brains which weigh more than 1 pound, everything which has human DNA, or possibly everything living), then you can limit your utilitarianism to that class and be a perfect utilitarian w.r.t. this group. I guess she meant it this way, although I would welcome clarification or confirmation.
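The idea of limiting utilitarian weighting to a chosen reference class can be made concrete. A minimal sketch (the agent names, welfare numbers, and the 1/0 weighting scheme are all illustrative assumptions, not anything stated in the thread):

```python
# Hypothetical sketch: a "group-limited" utilitarian gives full weight to
# the welfare of everyone inside a chosen reference class and zero weight
# to everyone outside it. A "perfect" utilitarian is the special case
# where the reference class includes everyone.

def group_utility(welfare, in_group):
    """Sum welfare over agents, weighting in-group members 1 and others 0.

    welfare:  dict mapping agent name -> welfare value
    in_group: set of agent names this agent cares about
    """
    return sum(w for agent, w in welfare.items() if agent in in_group)

welfare = {"alice": 10, "bob": 4, "carol": 7}

# Perfect utilitarian: everyone gets the same weight.
everyone = set(welfare)
print(group_utility(welfare, everyone))          # 21

# Group-limited utilitarian: only the in-group counts.
print(group_utility(welfare, {"alice", "bob"}))  # 14
```

Both agents are internally consistent maximizers; they simply disagree about which class of beings the sum ranges over.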

Comment author: Vladimir_Nesov 24 December 2012 09:28:35AM 0 points [-]

nobody can dictate which terminal values you should have, i.e. the utility function is not up for grabs

The most important case is that you can't yourself arbitrarily declare your own values.

Comment author: TrE 24 December 2012 11:31:53AM *  0 points [-]

I don't see the connection to my comment. Could you enlighten me, please?

Comment author: Vladimir_Nesov 24 December 2012 05:15:08PM *  0 points [-]

(Your wording, even if unintentionally, seemed to suggest that the statement applies mainly to the way other people won't be able to actually force arbitrary terminal values on you (even when they convince you that they have). I think the remaining case, where you do that to yourself, is particularly important, as it's not a well-known idea that this too should be guarded against.)

Comment author: Kawoomba 24 December 2012 05:21:04PM *  1 point [-]

Should it be a well-known idea [that this (=forcing terminal values on yourself) should be guarded against], or even desirable (to guard against forcing terminal values on yourself)?

Edit: Clarified

Comment author: Vladimir_Nesov 24 December 2012 05:29:22PM 0 points [-]

I don't expect that being systematically wrong about your own values would be desirable.

Comment author: Kawoomba 24 December 2012 05:50:52PM 1 point [-]

(See clarification in the grandparent)

Isn't your present self the determinant of your terminal values? The blueprint you compare against? Isn't it a tautology that your current utility function is the utility function of your present self?

If so, if at any one point in time you desire to reprogram a part of your own utility function, wouldn't that desire in itself mean that such a change is already a justified part of your present utility function?

If there is some tension between your conscious desires ("I want to feel this or that way about this or that") and your "subconscious" desires, why should that not be resolved in favor of your conscious choice?

If you consciously want to want X, but subconsciously want Y, who says which part of you takes precedence, and which is the "systematically wrong" part?

Comment author: Vladimir_Nesov 25 December 2012 02:05:06AM 1 point [-]

There is a difference between (say) becoming skilled at mathematics, and arbitrarily becoming convinced that you are, when in fact that hasn't happened. Both are changes in the state of your mind, both are effected by thinking, but there are also truth conditions on beliefs about the state of mind. If you merely start believing that your values include X, that doesn't automatically make it so. The fact of whether your values include X is a separate phenomenon from your belief about whether they do. The problem is when you become convinced that you value X, and start doing things that accord with valuing X, but you are in fact mistaken. And not being able to easily and reliably say what it is you value is not grounds for accepting an arbitrary hypothesis about what it is.

Comment author: Kawoomba 25 December 2012 07:18:00AM 0 points [-]

Thanks for the answer.

Your example is an epistemic truth statement. Changing "I am good at mathematics" to "I am not good at mathematics" or vice versa does not change your utility function.

Just like saying "I am overweight" does not imply that you value being overweight, or that you don't.

I understand your point that simply saying "I value X deeply" does not override all your previous utility assessments of X. However, I disagree on how to resolve that contradiction. You want to guard against it, you'd say "it's wrong". I'd embrace it as the more important utility function of your conscious mind.

You take the position of "What I consciously want to want does not matter, it only matters what I actually want, which can well be entirely different".

My question is what elevates those subconscious and harder to access stored terminal values over those you consciously want to value.

Should it not be the opposite, since you typically have more control (and can exert more rationality) over your conscious mind than your unconscious wants and needs?

Rephrase: When there is a clear conflict between what your conscious mind wants to want, and what you subconsciously want, why should that contradiction not be resolved in favor of your consciously expressed needs, guiding your actions? Making them your actual utility function.