Comment author: alicey 17 August 2015 03:39:16PM *  4 points [-]

Reading this was a bit annoying:

Only one statement about a hand of cards is true:

  • There is a King or Ace or both.

  • There is a Queen or Ace or both.

Which is more likely, King or Ace?

... The majority of people respond that the Ace is more likely to occur, but this is logically incorrect.

It is just communicating badly (https://xkcd.com/169/). Under a common parse, the Ace is more likely to occur. It would be more likely to be parsed as you intended if you had said:

Only one of the following premises is true about a particular hand of cards:

(like you did on the next question!)

Comment author: tom_cr 02 October 2015 09:08:08PM 0 points [-]

I think that the communication goals of the OP were not to tell us something about a hand of cards, but rather to demonstrate that certain forms of misunderstanding are common, and that this maybe tells us something about the way our brains work.

The problem quoted unambiguously precludes the possibility of an ace, yet many of us seem to incorrectly assume that the statement is equivalent to something like, 'One of the following describes the criterion used to select a hand of cards...', under which an ace is likely. The interesting question is: why?
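
A minimal brute-force check makes the logical point concrete (a sketch in Python; it only enumerates the presence or absence of each rank, so it shows that an Ace is impossible rather than modelling actual hand probabilities):

```python
from itertools import product

# Enumerate every combination of "hand contains a King / Queen / Ace".
king_possible = ace_possible = False
for has_king, has_queen, has_ace in product([False, True], repeat=3):
    stmt1 = has_king or has_ace    # "There is a King or Ace or both."
    stmt2 = has_queen or has_ace   # "There is a Queen or Ace or both."
    if stmt1 != stmt2:             # exactly one of the two statements is true
        king_possible |= has_king
        ace_possible |= has_ace

print(king_possible, ace_possible)   # True False: a King can occur, an Ace cannot
```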

In order to see the question as interesting, though, I first have to see the effect as real.

Comment author: SaidAchmiz 24 March 2014 06:36:02PM *  1 point [-]

I really do want to emphasize that if you assume that "losing" (i.e. encountering an outcome with a utility value on the low end of the scale) has some additional effects, whether that be "losing takes you out of the game", or "losing makes it harder to keep playing", or whatever, then you are modifying the scenario, in a critical way. You are, in effect, stipulating that that outcome actually has a lower utility than it's stated to have.

I want to urge you to take those graphs literally, with the x-axis being Utility, not money, or "utility but without taking into account secondary effects", or anything like that. Whatever the actual utility of an outcome is, after everything is accounted for — that's what determines that outcome's position on the graph's x-axis. (Edit: And it's crucial that the expectation of the two distributions is the same. If you find yourself concluding that the expectations are actually different, then you are misinterpreting the graphs, and should re-examine your assumptions; or else suitably modify the graphs to match your assumptions, such that the expectations are the same, and then re-evaluate.)

This is not a Pascal's Wager argument. The low-utility outcomes aren't assumed to be "infinitely" bad, or somehow massively, disproportionately, unrealistically bad; they're just... bad. (I don't want to get into the realm of offering up examples of bad things, because people's lives are different and personal value scales are not absolute, but I hope that I've been able to clarify things at least a bit.)

Comment author: tom_cr 24 March 2014 08:11:09PM -1 points [-]

If you assume.... [y]ou are, in effect, stipulating that that outcome actually has a lower utility than it's stated to have.

Thanks, that focuses the argument for me a bit.

So if we assume those curves represent actual utility functions, he seems to be saying that the shape of curve B relative to A makes A better (because A is bounded in how bad it could be, but unbounded in how good it could be). But since the curves are supposed to quantify betterness, I am attracted to the conclusion that curve B hasn't been correctly drawn. If B is worse than A, how can their average payoffs be the same?

To put it the other way around, maybe the curves are correct, but in that case, where does the conclusion that B is worse come from? Is there an algebraic formula to choose between two such cases? And if A had a slightly larger decay constant, at what point would A cease to be better?

I'm not saying I'm sure Dawes' argument is wrong, I just have no intuition at the moment for how it could be right.

Comment author: SaidAchmiz 24 March 2014 05:47:21PM 0 points [-]

Just a brief comment: the argument is not predicated on being "kicked out" of the game. We're not assuming that even the lowest-utility outcomes cause you to no longer be able to continue "playing". We're merely saying that they are significantly worse than average.

Comment author: tom_cr 24 March 2014 06:18:23PM 0 points [-]

Sure, I used that as what I take to be the case where the argument would be most easily recognized as valid.

One generalization might be something like, "losing makes it harder to continue playing competitively." But if it becomes harder to play, then I have lost something useful, i.e. my stock of utility has gone down, perhaps by an amount not reflected in the inferred utility functions. My feeling is that this must be the case, by definition (if the assumed functions have the same expectation), but I'll continue to ponder.

The problem feels related to Pascal's wager - how to deal with the low-probability disaster.

Comment author: SaidAchmiz 23 March 2014 09:57:11PM *  0 points [-]

Dawes' argument, as promised.

The context is: Dawes is explaining von Neumann and Morgenstern's axioms.


Aside: I don't know how familiar you are with the VNM utility theorem, but just in case, here's a brief primer.

The VNM utility theorem presents a set of axioms, and then says that if an agent's preferences satisfy these axioms, then we can assign any outcome a number, called its utility, written as U(x); and it will then be the case that given any two alternatives X and Y, the agent will prefer X to Y if and only if E(U(X)) > E(U(Y)). (The notation E(x) is read as "the expected value of x".) That is to say, the agent's preferences can be understood as assigning utility values to outcomes, and then preferring to have more (expected) utility rather than less (that is, preferring those alternatives which are expected to result in greater utility).

In other words, if you are an agent whose preferences adhere to the VNM axioms, then maximizing your utility will always, without exception, result in satisfying your preferences. And in yet other words, if you are such an agent, then your preferences can be understood to boil down to wanting more utility; you assign various utility values to various outcomes, and your goal is to have as much utility as possible. (Of course this need not be anything like a conscious goal; the theorem only says that a VNM-satisfying agent's preferences are equivalent to, or able to be represented as, such a utility formulation, not that the agent consciously thinks of things in terms of utility.)

(Dawes presents the axioms in terms of alternatives, or gambles; a formulation of the axioms directly in terms of the consequences is exactly equivalent, but not quite as elegant.)

N.B.: "Alternatives" in this usage are gambles, of the form ApB: you receive outcome A with probability p, and otherwise (i.e. with probability 1–p) you receive outcome B. (For example, your choice might be between two alternatives X and Y, where in X, with p = 0.3 you get consequence A and with p = 0.7 you get consequence B, and in Y, with p = 0.4 you get consequence A and with p = 0.6 you get consequence B.) Alternatives, by the way, can also be thought of as actions; if you take action X, the probability distribution over the outcomes is so-and-so; but if you take action Y, the probability distribution over the outcomes is different.
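
As a toy sketch of that notation (the utility numbers here are invented purely for illustration), a VNM agent simply compares expected utilities:

```python
# A gamble ApB: outcome A with probability p, otherwise outcome B.
# The utility values below are invented purely for illustration.
U = {"A": 10.0, "B": 4.0}

def expected_utility(first, p, second):
    """E(U) of the gamble 'first with probability p, else second'."""
    return p * U[first] + (1 - p) * U[second]

eu_X = expected_utility("A", 0.3, "B")   # X: A with p = 0.3, else B -> 5.8
eu_Y = expected_utility("A", 0.4, "B")   # Y: A with p = 0.4, else B -> 6.4

# A VNM-satisfying agent prefers whichever alternative has the higher E(U).
print("prefer Y" if eu_Y > eu_X else "prefer X")
```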

(If all of this is old hat to you, apologies; I didn't want to assume.)


The question is: do our preferences satisfy VNM? And: should our preferences satisfy VNM?

It is commonly said (although this is in no way entailed by the theorem!) that if your preferences don't adhere to the axioms, then they are irrational. Dawes examines each axiom, with an eye toward determining whether it's mandatory for a rational agent to satisfy that axiom.

Dawes presents seven axioms (which, as I understand it, are equivalent to the set of four listed in the wikipedia article, just with a difference in emphasis), of which the fifth is Independence.

The independence axiom says that A ≻ B (i.e., A is preferred to B) if and only if ApC ≻ BpC. In other words, if you prefer receiving cake to receiving pie, you also prefer receiving (cake with probability p and death with probability 1–p) to receiving (pie with probability p and death with probability 1–p).
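
For an agent that does maximize expected utility, the axiom holds automatically, since the common outcome C contributes the same term to both sides of the comparison. A toy check (numbers invented):

```python
# If U(cake) > U(pie), then for any common outcome C and any p > 0:
#   p*U(cake) + (1-p)*U(C)  >  p*U(pie) + (1-p)*U(C),
# because the (1-p)*U(C) term is identical on both sides. Numbers are invented.
U_cake, U_pie, U_death = 10.0, 6.0, -100.0

def eu_of_mix(u_outcome, p, u_common):
    return p * u_outcome + (1 - p) * u_common

for p in (0.9, 0.5, 0.01):
    assert eu_of_mix(U_cake, p, U_death) > eu_of_mix(U_pie, p, U_death)
print("the preference for cake over pie survives mixing with 'death'")
```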

Dawes examines one possible justification for violating this axiom — framing effects, or pseudocertainty — and concludes that it is irrational. (Framing is the usual explanation given for why the expressed or revealed preferences of actual humans often violate the independence axiom.) Dawes then suggests another possibility:

Is such irrationality the only reason for violating the independence axiom? I believe there is another reason. Axiom 5 [Independence] implies that the decision maker cannot be affected by the skewness of the consequences, which can be conceptualized as a probability distribution over personal values. Figure 8.1 shows (Note: This is my reproduction of the figure. I've tried to make it as exact as possible.) the skewed distributions of two different alternatives. Both distributions have the same average, hence the same expected personal value, which is a criterion of choice implied by the axioms. These distributions also have the same variance.

If the distributions in Figure 8.1 were those of wealth in a society, I have a definite preference for distribution a; its positive skewness means that income can be increased from any point — an incentive for productive work. Moreover, those people lowest in the distribution are not as distant from the average as in distribution b. In contrast, in distribution b, a large number of people are already earning a maximal amount of money, and there is a "tail" of people in the negatively skewed part of this distribution who are quite distant from the average income.[5] If I have such concerns about the distribution of outcomes in society, why not of the consequences for choosing alternatives in my own life? In fact, I believe that I do. Counter to the implications of prospect theory, I do not like alternatives with large negative skews, especially when the consequences in the negatively skewed part of the distribution have negative personal value.

[5] This is Dawes' footnote; it talks about an objection to "Reaganomics" on similar grounds.

Essentially, Dawes is asking us to imagine two possible actions. Both have the same expected utility; that is, the "degree of goal satisfaction" which will result from each action, averaged appropriately across all possible outcomes of that action (weighted by probability of each outcome), is exactly equal.

But the actual probability distribution over outcomes (the form of the distribution) is different. If you do action A, then you're quite likely to do alright, there's a reasonable chance of doing pretty well, and a small chance of doing really great. If you do action B, then you're quite likely to do pretty well, there's a reasonable chance of doing OK, and a small chance of doing disastrously, ruinously badly. On average, you'll do equally well either way.
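
A toy reconstruction of that situation (not Dawes' actual figure): mirror a positively skewed distribution about its mean, which yields two distributions with identical mean and variance but opposite skew:

```python
import numpy as np

rng = np.random.default_rng(0)

# Distribution A: positively skewed (a small chance of doing really great).
A = rng.exponential(scale=10.0, size=200_000) + 90.0
# Distribution B: A mirrored about its mean -> identical mean and variance,
# but negatively skewed (a small chance of doing disastrously badly).
B = 2 * A.mean() - A

def skewness(x):
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

print(A.mean(), B.mean())        # same expectation (about 100)
print(A.var(), B.var())          # same variance
print(skewness(A), skewness(B))  # roughly +2 and -2
```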

The Independence axiom dictates that we have no preference between those two actions. To prefer action A, with its attendant distribution of outcomes, to action B with its distribution, is to violate the axiom. Is this irrational? Dawes says no. I agree with him. Why shouldn't I prefer to avoid the chance of disaster and ruin? Consider what happens when the choice is repeated, over the course of a lifetime. Should I really not care whether I occasionally suffer horrible tragedy or not, as long as it all averages out?

But if it's really a preference — if I'm not totally indifferent — then I should also prefer less "risky" (i.e. less negatively skewed) distributions even when the expectation is lower than that of distributions with more risk (i.e. more negative skew) — so long as the difference in expectation is not too large, of course. And indeed we see such a preference not only expressed and revealed in actual humans, but enshrined in our society: it's called insurance. Purchasing insurance is an expression of exactly the preference to reduce the negative skew in the probability distribution over outcomes (and thus in the distributions of outcomes over your lifetime), at the cost of a lower expectation.
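
A toy version of that trade-off (all numbers invented): paying a premium above the expected loss lowers your expected wealth, but it removes the ruinous left tail of the distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

wealth = 100_000.0
loss, p_loss = 50_000.0, 0.01   # 1% chance of a 50,000 loss
premium = 700.0                 # above the 500 expected loss, so the insurer profits

disasters = rng.random(200_000) < p_loss         # many simulated years
uninsured = wealth - disasters * loss
insured = np.full_like(uninsured, wealth - premium)

print(uninsured.mean(), insured.mean())  # uninsured has the higher expectation...
print(uninsured.min(), insured.min())    # ...but also the disastrous worst case
```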

Comment author: tom_cr 24 March 2014 03:49:46PM 0 points [-]

Thanks very much for taking the time to explain this.

It seems like the argument (very crudely) is that, "if I lose this game, that's it, I won't get a chance to play again, which makes this game a bad option." If so, again, I wonder if our measure of utility has been properly calibrated.

It seems to me like the expected utility of option B, where I might get kicked out of the game, is lower than the expected utility of option A, where this is impossible. Your example of insurance may not be a good one, as one insures against financial loss, but money is not identical to utility.

Nonetheless, those exponential distributions make a very interesting argument.

I'm not entirely sure, I need to mull it over a bit more.

Thanks again, I appreciate it.

Comment author: Nornagest 21 March 2014 10:23:13PM *  1 point [-]

Actually, the whole point of governments and legal systems [...] is to encourage cooperation between individuals [...] And specialization trivially depends upon cooperation.

I have my quibbles with the social contract theory of government, but my main objection here isn't to the theory itself, but that you're attributing features to it that it clearly isn't responsible for. You don't need post-apocalyptic chaos to find situations that social contracts don't cover: for example, there is no social contract on the international stage (pre-superpower, if you'd prefer), but nations still specialize and make alliances and transfer value.

The point of government (and therefore the social contract, if you buy that theory of legitimacy) is to facilitate cooperation. You seem to be suggesting that it enables it, which is a different and much stronger claim.

Comment author: tom_cr 21 March 2014 11:14:59PM -1 points [-]

I think that international relations is a simple extension of social-contract-like considerations.

If nations cooperate, it is because it is believed to be in their interest to do so. Social-contract-like considerations form the basis for that belief. (The social contract is simply that which makes it useful to cooperate.) "Clearly isn't responsible for" is a phrase you should be careful about using.

You seem to be suggesting that [government] enables [cooperation]

I guess you mean that I'm saying cooperation is impossible without government. I didn't say that. Government is a form of cooperation - albeit a highly sophisticated one, and a very powerful facilitator.

I have my quibbles with the social contract theory of government

I appreciate your frankness. I'm curious, do you have an alternative view of how government derives legitimacy? What is it that makes the rules and structure of society useful? Or do you think that government has no legitimacy?

Comment author: Lumifer 21 March 2014 06:57:17PM 0 points [-]

If welfare of strangers is something you value, then it is not a net cost.

Having a particular value cannot have a cost. Values start to have costs only when they are realized or implemented.

Costlessly increasing the welfare of strangers doesn't sound like altruism to me. Let's say we start telling people "Say yes and magically a hundred lives will be saved in Chad. Nothing is required of you but to say 'yes'." How many people will say "yes"? I bet almost everyone. And we will be suspicious of those who do not -- they would look like sociopaths to us. That doesn't mean that we should call everyone but sociopaths an altruist -- you can, of course, define altruism that way, but at this point the concept becomes diluted into meaninglessness.

We continue to have major disagreements about the social contract, but that's a big discussion that should probably go off into a separate thread if you want to pursue it.

Comment author: tom_cr 21 March 2014 09:58:06PM -1 points [-]

Values start to have costs only when they are realized or implemented.

How? Are you saying that I might hold legitimate value in something, but be worse off if I get it?

Costlessly increasing the welfare of strangers doesn't sound like altruism to me.

OK, so we are having a dictionary writers' dispute - one I don't especially care to continue. So every place I used 'altruism,' substitute 'being decent' or 'being a good egg,' or whatever. (Please check, though, that your usage is somewhat consistent.)

But your initial claim (the one that I initially challenged) was that rationality has nothing to do with value, and that claim is manifestly false.

Comment author: Nornagest 21 March 2014 08:36:46PM *  -1 points [-]

The social contract enables specialization in society, and therefore complex technology. This works through our ability to make and maintain agreements and cooperation. If you know how to make screws, and I want screws, the social contract enables you to convincingly promise to hand over screws if I give you some special bits of paper. If I don't trust you for some reason, then the agreement breaks down.

Either you're using a broader definition of the social contract than I'm familiar with, or you're giving it too much credit. The model I know of provides (one mechanism for) the legitimacy of a government or legal system, and therefore of the legal rights it establishes, including an expectation of enforcement; but you don't need it to have media of exchange, nor cooperation between individuals, nor specialization. At most it might make these more scalable.

And of course there are models that deny the existence of a social contract entirely, but that's a little off topic.

Comment author: tom_cr 21 March 2014 09:38:53PM -1 points [-]

If you look closely, I think you should find that the legitimacy of government & legal systems comes from the same mechanism as everything I talked about.

You don't need it to have media of exchange, nor cooperation between individuals, nor specialization

Actually, the whole point of governments and legal systems (legitimate ones) is to encourage cooperation between individuals, so that's a bit of a weird comment. (Where do you think the legitimacy comes from?) And specialization trivially depends upon cooperation.

Yes, these things can exist to a small degree in a post-apocalyptic chaos, but they will not exactly flourish. (That's why we call it post-apocalyptic chaos.) But the extent to which these things can exist is a measure of how well the social contract flourishes. Don't get too hung up on exactly, precisely what 'social contract' means, it's only a crude metaphor. (There is no actual bit of paper anywhere.)

I may not be blameless in terms of clearly explaining my position, but I'm sensing that a lot of people on this forum just plain dislike my views, without bothering to take the time to consider them honestly.

Comment author: Lumifer 21 March 2014 12:51:23AM *  0 points [-]

If altruism entails a cost to the self, then your claim that altruism is all about values seems false

Why does it seem false? It is about values, in particular the relationship between the value "welfare of strangers" and the value "resources I have".

If the social contract requires being nice to people

It does not. The social contract requires you not to infringe upon the rights of other people and that's a different thing. Maybe you can treat it as requiring being polite to people. I don't see it as requiring being nice to people.

Furthermore, being nice in a way that exposes me to undue risk is bad for society (the social contract entails shared values, so such behaviour would also expose others to risk), so under the social contract, cases where being nice is not rational do not really exist.

I think we have a pretty major disagreement about that :-/

Comment author: tom_cr 21 March 2014 03:50:03PM -1 points [-]

Why does it seem false?

If welfare of strangers is something you value, then it is not a net cost.

Yes, there is an old-fashioned definition of altruism that assumes the action must be non-self-serving, but this doesn't match common contemporary usage (terms like effective altruism and reciprocal altruism would be meaningless), doesn't match your usage, and is based on a gross misunderstanding of how morality comes about (I've written about this misunderstanding here - see section 4, "Honesty as meta-virtue," for the most relevant part).

Under that old, confused definition, yes, altruism cannot be rational (but it is not orthogonal to rationality - we could still try to measure how irrational any given altruistic act is; each act still sits somewhere on the scale of rationality).

It does not.

You seem very confident of that. Utterly bizarre, though, that you claim that not infringing on people's rights is not part of being nice to people.

But the social contract demands much more than just not infringing on people's rights. (By the way, where do those rights come from?) We must actively seek each other out, trade (even if it's only trade in ideas, like now), and cooperate (this discussion wouldn't be possible without certain adopted codes of conduct).

The social contract enables specialization in society, and therefore complex technology. This works through our ability to make and maintain agreements and cooperation. If you know how to make screws, and I want screws, the social contract enables you to convincingly promise to hand over screws if I give you some special bits of paper. If I don't trust you for some reason, then the agreement breaks down. You lose income, I lose the screws I need for my factory employing 500 people, we all go bust. Your knowledge of how to make screws and my expertise in making screwdrivers now counts for nothing, and everybody is screwed.

We help maintain trust by being nice to each other outside our direct trading. Furthermore, by being nice to people in trouble who we have never before met, we enhance a culture of trust that people in trouble will be helped out. We therefore increase the chances that people will help us out next time we end up in the shit. Much more importantly, we reduce a major source of people's fears. Social cohesion goes up, cooperation increases, and people are more free to take risks in new technologies and / or economic ventures: society gets better, and we derive personal benefit from that.

I think we have a pretty major disagreement about that :-/

The social contract is a technology that entangles the values of different people (there are biological mechanisms that do that as well). Generally, my life is better when the lives of people around me are better. If your screw factory goes bust, then I'm negatively affected. If my neighbour lives in terror, then who knows what he might do out of fear - I am at risk. If everybody was scared about where their next meal was coming from, then I would never leave the house for fear that what food I have would be stolen in my absence - the economy collapses. Because we have this entangled utility function, what's bad for others is bad for me (in expectation), and what's bad for me is bad for everybody else. For the most part, then, any self-defeating behaviour (e.g. irrational attempts to be nice to others) is bad for society, and, in the long run, doesn't help anybody.

I hope this helps.

Comment author: SaidAchmiz 20 March 2014 10:40:21PM 0 points [-]

Re: your response to point 1: again, the options in question are probability distributions over outcomes. The question is not one of your goals being 50% fulfilled or 51% fulfilled, but, e.g., a 51% probability of your goals being 100% fulfilled vs. a 95% probability of your goals being 50% fulfilled. (Numbers not significant; only intended for illustrative purposes.)

"Risk avoidance" and "value" are not synonyms. I don't know why you would say that. I suspect one or both of us is seriously misunderstanding the other.

Re: point #2: I don't have the time right now, but sometime over the next couple of days I should have some time and then I'll gladly outline Dawes' argument for you. (I'll post a sibling comment.)

Comment author: tom_cr 20 March 2014 11:24:25PM 0 points [-]

The question is not one of your goals being 50% fulfilled

If I'm talking about a goal actually being 50% fulfilled, then it is.

"Risk avoidance" and "value" are not synonyms.

Really?

I consider risk to be the possibility of losing or not gaining (essentially the same) something of value. I don't know much about economics, but if somebody could help avoid that, would people be willing to pay for such a service?

If I'm terrified of spiders, then that is something that must be reflected in my utility function, right? My payoff from being close to a spider is less than otherwise.

I'll post a sibling comment.

That would be very kind :) No need to hurry.

Comment author: Lumifer 20 March 2014 08:34:01PM 3 points [-]

Let's define altruism as being nice to other people. Let's describe the social contract as a mutually held belief that being nice to other people improves society.

Let's define things the way they are generally understood or at least close to it. You didn't make your point.

I understand altruism, generally speaking, as valuing the welfare of strangers so that you're willing to attempt to increase it at some cost to yourself. I understand social contract as a contract, a set of mutual obligations (in particular, it's not a belief).

Comment author: tom_cr 20 March 2014 11:03:09PM 0 points [-]

Apologies if my point wasn't clear.

If altruism entails a cost to the self, then your claim that altruism is all about values seems false. I assumed we are using similar enough definitions of altruism to understand each other.

We can treat the social contract as a belief, a fact, an obligation, or goodness knows what, but it won't affect my argument. If the social contract requires being nice to people, and if the social contract is useful, then there are often cases when being nice is rational.

Furthermore, being nice in a way that exposes me to undue risk is bad for society (the social contract entails shared values, so such behaviour would also expose others to risk), so under the social contract, cases where being nice is not rational do not really exist.

Thus, if I implement the belief / obligation / fact of the social contract, and that is useful, then being nice is rational.
