tom_cr · 9y

I think that the communication goals of the OP were not to tell us something about a hand of cards, but rather to demonstrate that certain forms of misunderstanding are common, and that this maybe tells us something about the way our brains work.

The problem quoted unambiguously precludes the possibility of an ace, yet many of us seem to incorrectly assume that the statement is equivalent to something like, 'One of the following describes the criterion used to select a hand of cards...', under which an ace is likely. The interesting question is: why?
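For anyone who wants to check the effect directly, here is a quick enumeration. It assumes the OP's problem is the classic 'king and ace' formulation (my reconstruction, not a quote: exactly one of "if there is a king in the hand then there is an ace" and "if there is not a king then there is an ace" is true, with the conditionals read as material implication):

```python
from itertools import product

# Reconstruction of the (assumed) problem statement: exactly one of
#   (1) king -> ace
#   (2) not-king -> ace
# is true of the hand, with '->' as material implication.

def implies(p, q):
    return (not p) or q

for king, ace in product([True, False], repeat=2):
    s1 = implies(king, ace)
    s2 = implies(not king, ace)
    if s1 != s2:  # the exclusive-or, 'exactly one', reading
        print(f"king={king}, ace={ace}")

# Prints:
#   king=True, ace=False
#   king=False, ace=False
# Every case consistent with the statement has no ace - the ace is
# precluded, not made likely.
```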

In order to see the question as interesting, though, I first have to see the effect as real.

tom_cr · 10y

> If you assume.... [y]ou are, in effect, stipulating that that outcome actually has a lower utility than it's stated to have.

Thanks, that focuses the argument for me a bit.

So if we assume those curves represent actual utility functions, he seems to be saying that the shape of curve B, relative to A, makes A better (because A is bounded in how bad it could be, but unbounded in how good it could be). But since the curves are supposed to quantify betterness, I am attracted to the conclusion that curve B hasn't been correctly drawn. If B is worse than A, how can their average payoffs be the same?

To put it the other way around: maybe the curves are correct, but in that case, where does the conclusion that B is worse come from? Is there an algebraic formula to choose between two such cases? What if A had a slightly larger decay constant: at what point would A cease to be better?
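To make my confusion concrete, here is a minimal sketch (the distributions are my own stand-ins, not Dawes' actual curves): A is bounded below and unbounded above, B is the mirror image, and both are built to have the same mean. If the curves really are the utility functions, expected utility is identical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# A: bounded in how bad it can be, unbounded in how good (support [0, inf))
a = rng.exponential(scale=1.0, size=n)
# B: unbounded in how bad, bounded in how good (support (-inf, 2]),
# shifted so that both means equal 1
b = 2.0 - rng.exponential(scale=1.0, size=n)

print(a.mean(), b.mean())  # both approximately 1.0

# If these curves *are* utility, an expected-utility maximizer is
# indifferent between A and B; any residual preference for A must
# come from information not captured in the curves.
```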

I'm not saying I'm sure Dawes' argument is wrong, I just have no intuition at the moment for how it could be right.

tom_cr · 10y

Sure. I used that because I take it to be the case where the argument is most easily recognized as valid.

One generalization might be something like, "losing makes it harder to continue playing competitively." But if it becomes harder to play, then I have lost something useful, i.e. my stock of utility has gone down, perhaps by an amount not reflected in the inferred utility functions. My feeling is that this must be the case, by definition (if the assumed functions have the same expectation), but I'll continue to ponder.

The problem feels related to Pascal's wager - how to deal with the low-probability disaster.
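Here is the kind of toy model I have in mind (the numbers are mine, not Dawes'): two gambles with the same per-round expectation, but one of them can bankrupt me and end the game early:

```python
import numpy as np

rng = np.random.default_rng(0)

# Start with 10 units; play up to 100 rounds; stop (ruined) if wealth
# ever drops below zero. Both options pay +1 per round in expectation.
def mean_final_wealth(payoffs, start=10, rounds=100, trials=10_000):
    total = 0.0
    for _ in range(trials):
        wealth = start + np.cumsum(rng.choice(payoffs, size=rounds))
        ruined = np.nonzero(wealth < 0)[0]
        total += wealth[ruined[0]] if ruined.size else wealth[-1]
    return total / trials

print(mean_final_wealth([-1, 3]))    # close to 110: ruin is rare
print(mean_final_wealth([-15, 17]))  # far lower: early ruin forfeits
                                     # all the remaining rounds
```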

tom_cr · 10y

Thanks very much for taking the time to explain this.

It seems like the argument (very crudely) is that, "if I lose this game, that's it, I won't get a chance to play again, which makes this game a bad option." If so, again, I wonder if our measure of utility has been properly calibrated.

It seems to me like the expected utility of option B, where I might get kicked out of the game, is lower than the expected utility of option A, where this is impossible. Your example of insurance may not be a good one, as one insures against financial loss, but money is not identical to utility.

Nonetheless, those exponential distributions make a very interesting argument.

I'm not entirely sure, I need to mull it over a bit more.

Thanks again, I appreciate it.

tom_cr · 10y

I think that international relations is a simple extension of social-contract-like considerations.

If nations cooperate, it is because it is believed to be in their interest to do so. Social-contract-like considerations form the basis for that belief. (The social contract is simply that which makes it useful to cooperate.) "Clearly isn't responsible for" is a phrase you should be careful about using.

> You seem to be suggesting that [government] enables [cooperation]

I guess you mean that I'm saying cooperation is impossible without government. I didn't say that. Government is a form of cooperation, albeit a highly sophisticated one, and a very powerful facilitator of it.

> I have my quibbles with the social contract theory of government

I appreciate your frankness. I'm curious, do you have an alternative view of how government derives legitimacy? What is it that makes the rules and structure of society useful? Or do you think that government has no legitimacy?

tom_cr · 10y

> Values start to have costs only when they are realized or implemented.

How? Are you saying that I might hold legitimate value in something, but be worse off if I get it?

> Costlessly increasing the welfare of strangers doesn't sound like altruism to me.

OK, so we are having a dictionary writers' dispute - one I don't especially care to continue. So every place I used 'altruism,' substitute 'being decent' or 'being a good egg,' or whatever. (Please check, though, that your usage is somewhat consistent.)

But your initial claim (the one that I initially challenged) was that rationality has nothing to do with value, and that claim is manifestly false.

tom_cr · 10y

If you look closely, I think you should find that legitimacy of government & legal systems comes from the same mechanism as everything I talked about.

> You don't need it to have media of exchange, nor cooperation between individuals, nor specialization

Actually, the whole point of governments and legal systems (legitimate ones) is to encourage cooperation between individuals, so that's a bit of a weird comment. (Where do you think the legitimacy comes from?) And specialization trivially depends upon cooperation.

Yes, these things can exist to a small degree in a post-apocalyptic chaos, but they will not exactly flourish. (That's why we call it post-apocalyptic chaos.) But the extent to which these things can exist is a measure of how well the social contract flourishes. Don't get too hung up on exactly, precisely what 'social contract' means, it's only a crude metaphor. (There is no actual bit of paper anywhere.)

I may not be blameless, in terms of clearly explaining my position, but I'm sensing that a lot of people on this forum just plain dislike my views, without bothering to take the time to consider them honestly.

tom_cr · 10y

Value is something that exists in a decision-making mind. Real value (as opposed to fictional value) can only derive from the causal influences of the thing being valued on the valuing agent. This is just a fact, I can't think of a way to make it clearer.

Maybe ponder this:

How could my quality of life be affected by something with no causal influence on me?

tom_cr · 10y

Why does it seem false?

If welfare of strangers is something you value, then it is not a net cost.

Yes, there is an old-fashioned definition of altruism that assumes the action must be non-self-serving, but this doesn't match common contemporary usage (terms like 'effective altruism' and 'reciprocal altruism' would be meaningless), doesn't match your usage, and is based on a gross misunderstanding of how morality comes about (I've written about this misunderstanding here - see section 4, "Honesty as meta-virtue," for the most relevant part).

Under that old, confused definition, yes, altruism cannot be rational (but it is not orthogonal to rationality - we could still try to measure how irrational any given altruistic act is; each act still sits somewhere on the scale of rationality).

> It does not.

You seem very confident of that. Utterly bizarre, though, that you claim that not infringing on people's rights is not part of being nice to people.

But the social contract demands much more than just not infringing on people's rights. (By the way, where do those rights come from?) We must actively seek each other out, trade (even if it's only trade in ideas, like now), and cooperate (this discussion wouldn't be possible without certain adopted codes of conduct).

The social contract enables specialization in society, and therefore complex technology. This works through our ability to make and maintain agreements and cooperation. If you know how to make screws, and I want screws, the social contract enables you to convincingly promise to hand over screws if I give you some special bits of paper. If I don't trust you for some reason, then the agreement breaks down. You lose income, I lose the screws I need for my factory employing 500 people, we all go bust. Your knowledge of how to make screws and my expertise in making screwdrivers now count for nothing, and everybody is screwed.

We help maintain trust by being nice to each other outside our direct trading. Furthermore, by being nice to people in trouble who we have never met before, we enhance a culture of trust that people in trouble will be helped out. We therefore increase the chances that people will help us out next time we end up in the shit. Much more importantly, we reduce a major source of people's fears. Social cohesion goes up, cooperation increases, and people are more free to take risks in new technologies and/or economic ventures: society gets better, and we derive personal benefit from that.

I think we have a pretty major disagreement about that :-/

The social contract is a technology that entangles the values of different people (there are biological mechanisms that do that as well). Generally, my life is better when the lives of people around me are better. If your screw factory goes bust, then I'm negatively affected. If my neighbour lives in terror, then who knows what he might do out of fear - I am at risk. If everybody was scared about where their next meal was coming from, then I would never leave the house for fear that what food I have would be stolen in my absence - the economy collapses. Because we have this entangled utility function, what's bad for others is bad for me (in expectation), and what's bad for me is bad for everybody else. For the most part, then, any self-defeating behaviour (e.g. irrational attempts to be nice to others) is bad for society and, in the long run, doesn't help anybody.
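As a toy illustration of what I mean by entangled (the numbers and the weight are mine, purely for illustration):

```python
# Each agent's utility is their own payoff plus a weighted sum of
# everyone else's payoffs. With any positive weight, harm to a
# neighbour is automatically harm to me.

def utility(payoffs, me, weight=0.2):
    own = payoffs[me]
    others = sum(p for i, p in enumerate(payoffs) if i != me)
    return own + weight * others

society = [10.0, 10.0, 10.0]
print(utility(society, me=0))  # 14.0

society[1] -= 5.0              # my neighbour's screw factory goes bust
print(utility(society, me=0))  # 13.0 - I am worse off, though my own
                               # payoff never changed
```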

I hope this helps.

tom_cr · 10y

> The question is not one of your goals being 50% fulfilled

If I'm talking about a goal actually being 50% fulfilled, then it is.

"Risk avoidance" and "value" are not synonyms.

Really?

I consider risk to be the possibility of losing, or not gaining (essentially the same thing), something of value. I don't know much about economics, but if somebody could help avoid that, would people be willing to pay for such a service?
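A standard toy calculation suggests yes (the numbers are mine, and log utility is just a conventional stand-in for diminishing marginal value): an agent facing a 10% chance of losing half their wealth will pay more than the expected loss to be rid of the risk:

```python
import math

wealth, loss, p_loss = 100.0, 50.0, 0.10
premium = 6.0  # more than the expected loss of 5

# Expected log-utility without insurance vs. with it
eu_uninsured = p_loss * math.log(wealth - loss) + (1 - p_loss) * math.log(wealth)
eu_insured = math.log(wealth - premium)

print(eu_uninsured)  # ~4.536
print(eu_insured)    # ~4.543: paying 6 to avoid an expected loss of 5
                     # still comes out ahead in utility terms
```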

If I'm terrified of spiders, then that is something that must be reflected in my utility function, right? My payoff from being close to a spider is less than otherwise.

> I'll post a sibling comment.

That would be very kind :) No need to hurry.
