tom_cr
tom_cr has not written any posts yet.

If you assume.... [y]ou are, in effect, stipulating that that outcome actually has a lower utility than it's stated to have.
Thanks, that focuses the argument for me a bit.
So if we assume those curves represent actual utility functions, he seems to be saying that the shape of curve B, relative to A, makes A better (because A is bounded in how bad it could be, but unbounded in how good it could be). But since the curves are supposed to quantify betterness, I am attracted to the conclusion that curve B hasn't been correctly drawn. If B is worse than A, how can their average payoffs be the same?
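We can't see the original curves here, but a minimal sketch of the tension (all numbers invented): two options with identical average monetary payoffs get different expected utilities as soon as utility is concave in the payoff - which suggests the "payoff" axis wasn't really utility.

```python
# Toy illustration (not the original curves): equal average payoff,
# unequal expected utility under an assumed concave utility u(x) = ln x.
import math

def u(x):
    return math.log(x)

# Option A: a guaranteed 100. Option B: 50/50 between 10 and 190.
ev_A = 100
ev_B = 0.5 * 10 + 0.5 * 190          # also 100: same average payoff

eu_A = u(100)                        # ~4.605
eu_B = 0.5 * u(10) + 0.5 * u(190)    # ~3.775: B is worse in utility

print(ev_A, ev_B, eu_A, eu_B)
```

If the curves had been drawn in units of u rather than money, their expectations could not come out equal while B remained worse.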
To put it the... (read more)
Sure, I used that as what I take to be the case where the argument would be most easily recognized as valid.
One generalization might be something like, "losing makes it harder to continue playing competitively." But if it becomes harder to play, then I have lost something useful, i.e. my stock of utility has gone down, perhaps by an amount not reflected in the inferred utility functions. My feeling is that this must be the case, by definition (if the assumed functions have the same expectation), but I'll continue to ponder.
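A sketch of that hedge, with numbers invented for illustration: give both options the same per-round expected payoff, but let option B carry a small chance of being kicked out. The long-run expectations then separate, which is exactly the utility not captured by a per-round function.

```python
# Both options pay 1 per round in expectation, but option B has a 1%
# chance each round of ruin, forfeiting all future rounds.
import random

def mean_total(option, rounds=50, trials=100_000):
    grand = 0.0
    for _ in range(trials):
        for _ in range(rounds):
            if option == "B" and random.random() < 0.01:
                break                     # kicked out: no further payoffs
            grand += 1.0 if option == "A" else 1.0 / 0.99
    return grand / trials

print(mean_total("A"))   # ~50.0
print(mean_total("B"))   # ~39.5: the ruin risk is a real loss of value
```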
The problem feels related to Pascal's wager - how to deal with the low-probability disaster.
Thanks very much for taking the time to explain this.
It seems like the argument (very crudely) is that, "if I lose this game, that's it, I won't get a chance to play again, which makes this game a bad option." If so, again, I wonder if our measure of utility has been properly calibrated.
It seems to me like the expected utility of option B, where I might get kicked out of the game, is lower than the expected utility of option A, where this is impossible. Your example of insurance may not be a good one, as one insures against financial loss, but money is not identical to utility.
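To make that money/utility gap concrete, a toy calculation (numbers invented): with an assumed log utility of money, the most it is worth paying to insure against a loss exceeds the expected monetary loss, so insurance behaviour reflects u(money), not money itself.

```python
# Insuring a 1% chance of losing 50,000 from a wealth of 100,000:
# expected monetary loss is 500, but log utility justifies paying more.
import math

wealth, loss, p = 100_000, 50_000, 0.01

def u(x):
    return math.log(x)   # assumed concave utility of money

eu_uninsured = p * u(wealth - loss) + (1 - p) * u(wealth)

# The break-even premium m solves u(wealth - m) = eu_uninsured:
max_premium = wealth - math.exp(eu_uninsured)

print(p * loss)               # 500.0 (expected monetary loss)
print(round(max_premium, 2))  # ~690: still worth it in utility terms
```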
Nonetheless, those exponential distributions make a very interesting argument.
I'm not entirely sure, I need to mull it over a bit more.
Thanks again, I appreciate it.
I think that international relations is a simple extension of social-contract-like considerations.
If nations cooperate, it is because it is believed to be in their interest to do so. Social-contract-like considerations form the basis for that belief. (The social contract is simply that which makes it useful to cooperate.) "Clearly isn't responsible for" is a phrase you should be careful about using.
You seem to be suggesting that [government] enables [cooperation]
I guess you mean that I'm saying cooperation is impossible without government. I didn't say that. Government is a form of cooperation, albeit a highly sophisticated one, and a very powerful facilitator of it.
I have my quibbles with the social contract theory of government
I appreciate your frankness. I'm curious, do you have an alternative view of how government derives legitimacy? What is it that makes the rules and structure of society useful? Or do you think that government has no legitimacy?
Values start to have costs only when they are realized or implemented.
How? Are you saying that I might hold legitimate value in something, but be worse off if I get it?
Costlessly increasing the welfare of strangers doesn't sound like altruism to me.
OK, so we are having a dictionary writers' dispute - one I don't especially care to continue. So every place I used 'altruism,' substitute 'being decent' or 'being a good egg,' or whatever. (Please check, though, that your usage is somewhat consistent.)
But your initial claim (the one that I initially challenged) was that rationality has nothing to do with value, and that claim is manifestly false.
If you look closely, I think you should find that legitimacy of government & legal systems comes from the same mechanism as everything I talked about.
You don't need it to have media of exchange, nor cooperation between individuals, nor specialization
Actually, the whole point of governments and legal systems (legitimate ones) is to encourage cooperation between individuals, so that's a bit of a weird comment. (Where do you think the legitimacy comes from?) And specialization trivially depends upon cooperation.
Yes, these things can exist to a small degree in a post-apocalyptic chaos, but they will not exactly flourish. (That's why we call it post-apocalyptic chaos.) But the extent to which these... (read more)
Value is something that exists in a decision-making mind. Real value (as opposed to fictional value) can only derive from the causal influences of the thing being valued on the valuing agent. This is just a fact; I can't think of a way to make it clearer.
Maybe ponder this:
How could my quality of life be affected by something with no causal influence on me?
Why does it seem false?
If welfare of strangers is something you value, then it is not a net cost.
Yes, there is an old-fashioned definition of altruism that assumes the action must be non-self-serving, but this doesn't match common contemporary usage (terms like effective altruism and reciprocal altruism would be meaningless), doesn't match your usage, and is based on a gross misunderstanding of how morality comes about (I've written about this misunderstanding here - see section 4, "Honesty as meta-virtue," for the most relevant part).
Under that old, confused definition, yes, altruism cannot be rational (but not orthogonal to rationality - we could still try to measure how irrational any given altruistic act... (read 437 more words →)
The question is not one of your goals being 50% fulfilled
If I'm talking about a goal actually being 50% fulfilled, then it is.
"Risk avoidance" and "value" are not synonyms.
Really?
I consider risk to be the possibility of losing, or not gaining (essentially the same thing), something of value. I don't know much about economics, but if somebody could help avoid that, would people be willing to pay for such a service?
If I'm terrified of spiders, then that is something that must be reflected in my utility function, right? My payoff from being close to a spider is lower than it would otherwise be.
I'll post a sibling comment.
That would be very kind :) No need to hurry.
I think that the communication goals of the OP were not to tell us something about a hand of cards, but rather to demonstrate that certain forms of misunderstanding are common, and that this maybe tells us something about the way our brains work.
The problem quoted unambiguously precludes the possibility of an ace, yet many of us seem to incorrectly assume that the statement is equivalent to something like, 'One of the following describes the criterion used to select a hand of cards.....,' under which an ace is likely. The interesting question is, why?
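For anyone who wants to check the "precludes an ace" claim mechanically, here is a minimal enumeration, assuming (my assumption, not stated above) that the problem is the classic king-ace puzzle - "if there is a king in the hand then there is an ace, or if there isn't a king then there is an ace, but not both of those" - read with material conditionals:

```python
# Enumerate the four king/ace possibilities under the exclusive-or
# reading of the two conditionals (material implication assumed).
from itertools import product

for king, ace in product([False, True], repeat=2):
    c1 = (not king) or ace   # "if there is a king, there is an ace"
    c2 = king or ace         # "if there is no king, there is an ace"
    if c1 != c2:             # "but not both of those"
        print(f"king={king}, ace={ace} is consistent")

# Only ace=False rows survive: the statement rules out an ace entirely.
```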
In order to see the question as interesting, though, I first have to see the effect as real.