Yes, but that explains why people (especially males) want to be strong leaders (alpha males), not why people follow strong leaders. For people to follow strong leaders, following needs to confer an evolutionary advantage (hope of being the next leader, the leader granting privileges to his most faithful followers, or something else, I don't know).
You are 10% to 20% likely to die before you enjoy even your first retirement year.
Careful, this is from birth and for all categories. The survival rate to age 65 for a healthy, middle-class, educated person in his late 20s or early 30s is likely much higher than the survival rate from birth (the first few years of life are quite dangerous, and many chronic diseases would already have been diagnosed by age 30), and middle-class educated people doing intellectual jobs (the typical audience of LW) live longer than factory workers or miners.
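A quick way to see why conditioning matters: given survival-from-birth figures, the probability of reaching 65 for someone already alive at 30 follows by simple division. The numbers below are made-up round figures, purely illustrative, not real actuarial data:

```python
# Illustrative (invented) survival probabilities from birth.
p_birth_to_65 = 0.85  # assumed: survive from birth to age 65
p_birth_to_30 = 0.97  # assumed: survive from birth to age 30

# P(reach 65 | alive at 30) = P(reach 65 from birth) / P(reach 30 from birth)
p_30_to_65 = p_birth_to_65 / p_birth_to_30
print(round(p_30_to_65, 3))  # → 0.876, higher than the from-birth 0.85
```

The conditional figure is always at least as high as the from-birth one, which is the point being made above.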
Interesting, but the point on "Democracy" seems a bit of an applause light to me. We all like democracy, so a community needs democracy, right?
Well, if you look at communities, you'll see that "leader worship" is actually at least as effective at building a strong community as democracy. I'm not saying it's the best option all things considered, but for the purpose of crafting a community, having a strong, quasi-dictatorial leader whom everyone respects tends to be a very efficient way. The "penguin" is a clear example of that: Linus, the "benevolent dictator for life", is a strong factor in the community's cohesion. Democratic models can also work (to stay in the same domain, that's how Debian works, and it works very well), but they aren't the most likely path to success.
There probably are evolutionary psychology reasons behind the "strong leader" pattern, rooted in families (where the patriarch or matriarch is the natural "strong leader") and tribes (which usually aren't very democratic), the two most primitive communities, but I won't go into the details because evolutionary psychology isn't my primary field.
I'll be there :)
But are you sure the time is correct? 10:23 PM? I think you used the date at which you created the event instead ;) Isn't it 2 PM as usual?
Not so sure about that: just dive a few meters under water and the pressure goes up very quickly. Roughly, for every 10 m you descend, you gain an additional atmosphere of pressure, and people are known to be able to dive below 100 m with training but without special apparatus. The problems arise mostly when the pressure changes quickly (or when it gets very high), but a pressure of 10 atmospheres, with sufficient preparation and adjustment time, doesn't kill a human being.
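The rule of thumb above (one extra atmosphere per 10 m of seawater, on top of the 1 atm already present at the surface) can be sketched as:

```python
def pressure_atm(depth_m):
    """Approximate total pressure in atmospheres at a given seawater depth.
    Rule of thumb: 1 atm at the surface, plus 1 atm per 10 m of depth."""
    return 1 + depth_m / 10

print(pressure_atm(10))   # → 2.0 (double the surface pressure at only 10 m)
print(pressure_atm(100))  # → 11.0 (a 100 m freediver is under ~11 atm total)
```

Note that by this rule a diver at 100 m is under about 11 atm total, i.e. 10 atm *more* than at the surface.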
The cholera example was definitely a bit silly: after all, "cholera" and "apple vs orange" are usually really independent in the real world; you have to construct very far-fetched circumstances for them to be dependent. But an axiom is supposed to be valid everywhere, even in far-fetched circumstances ;)
But overall, I understand the thing much better now: in fact, the independence principle doesn't strictly hold in the real world, just as there are no strictly right angles in the real world. But just as we use the Pythagorean theorem in the real world, assuming an angle to be right when it's "close enough" to right, we apply the VNM axioms and the related expected utility theory when we consider the independence principle to have enough validity?
But do we have any way to measure the degree of error introduced by this approximation? Do we have ways to recognize the cases where we shouldn't apply the expected utility theory, because we are too far from the ideal model?
My point was never to fully reject VNM and expected utility theory; I know they are useful and work in many cases. My point was to draw attention to a potential problem (making it an approximation, making it not always valid) that I don't usually see being addressed (actually, I don't remember ever having seen it addressed that explicitly).
Maybe the problem comes from my understanding of what the "alternative", "choice" or "act" in the VNM axioms is.
To me it's a single, atomic real-world choice you have to make: you're offered a clear choice between options, and you have to select one. Like you're offered a lottery ticket, and you can decide to buy it or not. Or, to use my original example, A = "in two months you'll be given a voucher to go to Ecuador", B = "in two months you'll be given a laptop" and C = "in two months you'll be given a voucher to go to Iceland". And the independence axiom says that, over those choices, if I choose B over C, then I must choose (0.5A, 0.5B) over (0.5A, 0.5C). In my original understanding, things like "preparation" or "what I would do with the money if I win the lottery" are things I'm free to evaluate in choosing A, B or C, but aren't part of A, B or C.
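Under an expected-utility reading, the axiom reduces to a simple inequality, because the common 0.5·u(A) term appears on both sides and cancels. A minimal sketch, with made-up utility numbers for the three hypothetical prizes:

```python
# Toy utilities for the three outcomes; the numbers are arbitrary
# illustrations, not anyone's actual preferences.
utility = {"A": 5.0,  # voucher to Ecuador
           "B": 4.0,  # laptop
           "C": 3.0}  # voucher to Iceland

def expected_utility(lottery):
    """lottery: list of (probability, outcome) pairs."""
    return sum(p * utility[o] for p, o in lottery)

# If B is preferred to C...
assert utility["B"] > utility["C"]
# ...then (0.5A, 0.5B) must be preferred to (0.5A, 0.5C),
# since 0.5*u(A) + 0.5*u(B) > 0.5*u(A) + 0.5*u(C) iff u(B) > u(C).
assert expected_utility([(0.5, "A"), (0.5, "B")]) > \
       expected_utility([(0.5, "A"), (0.5, "C")])
```

This only shows the "atomic outcomes" reading of A, B and C; the question of whether the outcomes stay independent once you fold in the whole world state is exactly what's at issue in the discussion.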
The "world histories" view of benelliott seems to fix the problem at first glance, but to me it makes it even worse. If what you're choosing is not individual actions but whole "world histories", then the independence axiom isn't false, but it doesn't even make sense to me, because the whole "world history" is necessarily different. The whole world history when offered the choice between B and C is in fact B' = "B and knowing you had to choose between B and C" vs C' = "C and knowing you had to choose between B and C", while when offered the choice between D = (0.5A, 0.5B) and E = (0.5A, 0.5C) it is in fact (0.5A² = "A and knowing you had to choose between D and E", 0.5B² = "B and knowing you had to choose between D and E") vs (0.5A², 0.5C² = "C and knowing you had to choose between D and E").
So, how do you define those (A, B, C) in the independence axiom (and the other axioms) so that it doesn't fall prey to the first problem, without making them factor in the whole state of the world, in which case you can't even formulate it?
Because in my view they did not correct any mistake I made; they're avoiding the core problem, using rhetorical tricks such as playing on words, irony, strawmen or ad hominem instead. And I'm very disappointed to see the conversation go this way; I wasn't expecting that from LW. I was expecting people to disagree with me (most people here think VNM is justified), but I was expecting a constructive discussion, not such a bashing.
The Allais paradox (both Eliezer's version and the Wikipedia article) doesn't seem to specify at all whether the reward is instantaneous or delayed, so I wouldn't say the risk aversion isn't justified: if offered the paradox without that precision, I would say there is a chance for the reward to be instant and a chance for it to be delayed, so the risk aversion should be partially considered. It's a bit of nitpicking, but not that much. There is no mention of time/delay in either the paradox or the axiom, and that seems to be a weakness to me.
And even without time, information can still have some value. If you choose 1B in the Allais paradox and lose, you can regret your choice (leading, in fact, to a <0 outcome) more than if you chose 2B and lost, because of the information that it's purely because of your choice that you lost money. Or people can have a lower opinion of you (which may have negative consequences on your life) if they have that information too. Regretting what was a rational decision can be considered irrational, so I don't see a problem with it being incompatible with the VNM axioms. But the reaction of other people being irrational is not something you can discard the same way: if a VNM-rational agent is unable to deal with human beings who aren't perfectly rational, then there is a problem.
In short: the value of information matters most when it creates uncertainty, i.e. when the results of the lottery are delayed. But even when the results are instantaneous, information still has some value (positive or negative) that can make choosing 1A over 1B and 2B over 2A the rational choice in some situations.
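For reference, the figures from the standard Wikipedia presentation of the paradox (Eliezer's version uses smaller dollar amounts) give these raw expected values, which is why the common 1A-plus-2B preference pattern clashes with expected utility of money:

```python
# Standard Allais gambles, payoffs in millions of dollars.
def expected_value(lottery):
    """lottery: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

g1a = [(1.00, 1)]                        # $1M for sure
g1b = [(0.89, 1), (0.10, 5), (0.01, 0)]  # mostly $1M, small shot at $5M or nothing
g2a = [(0.11, 1), (0.89, 0)]             # 11% chance of $1M
g2b = [(0.10, 5), (0.90, 0)]             # 10% chance of $5M

print(round(expected_value(g1a), 2), round(expected_value(g1b), 2))  # → 1.0 1.39
print(round(expected_value(g2a), 2), round(expected_value(g2b), 2))  # → 0.11 0.5
```

By raw expected value, 1B beats 1A and 2B beats 2A; most people nevertheless pick 1A and 2B, and no assignment of utilities to the money amounts can make both picks consistent with the independence axiom. Which is precisely why factors outside the stated payoffs (delay, regret, reputation) matter to the argument above.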
First, I did study mathematical logic, so please avoid that kind of ad hominem.
That said, if what you're referring to is the whole world state, then the outcomes are, in fact, always different, even if only because there is somewhere in your brain the knowledge that the choice is different.
To take the formulation in the FAQ : « The independence axiom states that, for example, if an agent prefers an apple to an orange, then she must also prefer the lottery [55% chance she gets an apple, otherwise she gets cholera] over the lottery [55% chance she gets an orange, otherwise she gets cholera]. More generally, this axiom holds that a preference must hold independently of the possibility of another outcome (e.g. cholera). »
That has no meaning if you consider whole world states rather than just specific outcomes, because in the lottery it's not "apple or orange" but "apple with the knowledge I almost got cholera" vs "orange with the knowledge I almost got cholera". And if there is an interaction between the two, then you get a different ranking between them. Maybe you had a friend who died of cholera and loved apples, and that'll change how much you appreciate apples knowing you almost had cholera. Maybe not. But anyway, if what you consider are whole world states, then by definition the whole world state is always different when you're offered even a slightly different choice. How can you define an independence principle in that case?