All of Velochy's Comments + Replies

Velochy00

On the topic of "utilities in the prisoner's dilemma coinciding with jail time", I'll quote one of my guest blog posts: http://phd.kt.pri.ee/2009/01/27/the-real-prisoner-dilemma/

Two hardened criminals are taken to interrogation in separate cells. They are offered the usual deal: If neither confesses, both get one year probation. If both confess, both do 5 years in jail. If one confesses, he goes free but the other does 10 years hard time.

Here’s what actually goes through their minds: “Okay, if neither of us confesses, we have to go back to the rea... (read more)
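The deal described above can be written down as a small payoff table and checked mechanically. A minimal sketch (the move names and the decision to count probation as a 1-year cost are my own illustrative assumptions, not part of the original post):

```python
# Payoff matrix from the deal above, in years of punishment (lower is
# better); probation is counted here as 1 year, an assumption for
# illustration.
SENTENCE = {  # (my_move, their_move) -> my years
    ("silent", "silent"): 1,    # neither confesses: probation for both
    ("silent", "confess"): 10,  # I stay silent, they confess: hard time
    ("confess", "silent"): 0,   # I confess alone: I go free
    ("confess", "confess"): 5,  # both confess: 5 years each
}

def best_response(their_move):
    """The move that minimizes my sentence, given the other's move."""
    return min(("silent", "confess"),
               key=lambda move: SENTENCE[(move, their_move)])

# Confessing is the best response to either move: it strictly dominates,
# which is what pushes both players toward the 5-year outcome.
print(best_response("silent"), best_response("confess"))
```

Whatever "actually goes through their minds", the dominance structure itself is just this table.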

Velochy10

I'm just reading Thomas Schelling's The Strategy of Conflict, and one of his key points is that providing an identifiable focal point for a discussion tends to make the discussion center on it (classic anchoring). However, he points out that in many cases, having a "line in the sand" benefits all sides by allowing intermediate deals to be struck where only extremes were possible before.

This article, however, clearly demonstrates that having a line in the sand can be just as bad as it can be good, as is the case with all biases. Still, I really recommend Schelling on what is "good" (in the evolutionary sense) about this phenomenon.

Velochy10

But three people should already suffice. I'm fairly convinced that this game is unstable, in the sense that it would never make sense for any of them to agree to 1/3 each, since any player can always guarantee themselves more by defecting with someone (even by offering them 1/6 - epsilon, which is REALLY hard to turn down). It seems that a given majority getting 1/2 each would be a more probable solution, but you would really need to formalize the rules before this can be proven. I'm a cryptologist, so this is sadly not really my area...

7Psychohistorian
I almost posted on the three-person situation earlier, but what I wrote wasn't cogent enough. It does seem like it should work as an archetype for any N > 2. The problem is how the game is iterated. Call the players A, B, and C. If A says, "B, let's go 50-50," and you assume C doesn't get to make a counter-offer and they vote immediately, 50-50-0 is clearly the outcome. This is probably also the case for the 10-person game if there's no protracted bargaining.

If there is protracted bargaining, it turns into an infinite regression as long as there is an out-group, and possibly even without an out-group. Take this series of proposals, each of which will be preferred to the one prior (format is Proposer: A gets-B gets-C gets):

A: 50-50-0
C: 0-55-45
A: 50-0-50
B: 55-45-0
C: 0-55-45
A: 50-0-50
...

There's clearly no stable equilibrium. It seems (though I'm not sure how to prove this) that an equal split is the appropriate efficient outcome. Any action by any individual will create an out-group that will spin them into an infinite imbalance. Moreover, if we are to arbitrarily stop somewhere along that infinite chain, the expected value for each player is going to be 100/3 (they get part of a two-way split twice, which should average to 50 each time overall, and they get zero once per three exchanges). Thus, at 33-33-33, one can't profitably defect. At 40-40-20, C could defect and have a positive expected outcome.

If the players have no bargaining costs whatsoever, and always have the opportunity to bargain before a deal is voted on, and have an infinite amount of time and do not care how long it takes to reach agreement (or if agreement is reached), then it does seem like you get an infinite loop, because there's always going to be an out-group that can outbid one of the in-group. This same principle should also apply to the 10-person model; with infinite free time and infinite free bargaining, no equilibrium can be reached. If there is some cost to defecting, or a limitation o
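The instability argument above can be made precise under one assumed formalization (mine, a sketch rather than Psychohistorian's exact model): if any two players can vote to redivide the entire pot, then no split of 100 is immune to a profitable two-player defection, so bargaining can indeed cycle forever. A brute-force check:

```python
# Check every integer split of 100 among three players for "stability":
# a split is unstable if some pair's combined share is under 100, since
# that pair could redivide the whole pot so both strictly gain.
from itertools import combinations

def has_profitable_coalition(split):
    """True if two players could jointly outbid the current split."""
    return any(split[i] + split[j] < 100
               for i, j in combinations(range(3), 2))

stable = [(a, b, 100 - a - b)
          for a in range(101) for b in range(101 - a)
          if not has_profitable_coalition((a, b, 100 - a - b))]
print(stable)  # -> []: no split survives, even 33-33-33
```

Note that this myopic check says even the equal split can be outbid by some pair, which fits the infinite-loop conclusion; the 100/3 figure in the comment is the expected value if the cycle is stopped at a random point, not a stable agreement.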
Velochy00

Sorry. I thought about things a little and realized that a few things about prospect theory definitely need to be scrapped as bad ideas; the probability weighting, for instance. But other quirks (such as loss aversion, or having different utilities for losses vs. gains) might be useful to retain...

It would really be good if I knew a bit more about the different decision theories at this point. Does anyone have a good reference from which one could get an overview?

1Technologos
The standard argument in economics against anything other than EU maximization (note that consistent loss-aversion may arise from diminishing marginal utility of money; loss-aversion is only interesting when directionally inconsistent) involves Dutch-booking: the ability to set people up as money pumps and extract money from them by repeatedly offering subjectively preferred choices that violate transitivity. Essentially, EU maximization might be something we want because it induces consistency in decision-making.

For instance, imagine a preference ordering like the one in Nick_Tarleton's adjacent comment, where +10 is different from +20-10. Let us say that +9 = +20-10 (without loss of generality; just pick a number on the left side). Then I can offer you +9 in exchange for +20-10 repeatedly, and you'll prefer it every time, but you ultimately lose money. The reason that rational risk aversion (which is to say, diminishing marginal utility of money) is not a money pump is that you have to reduce risk every time you extract some expected cash, and that cannot happen forever.

Ultimately, then, prospect theory and related work are useful in understanding human decision-making, but not in improving it.
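The +9 = +20-10 pump described above can be run numerically. This is a toy rendering of my own (the "agent"/"bookie" framing and the round count are illustrative assumptions):

```python
# An agent with path-dependent preferences rates a certain +9 as equal
# to the sequence "+20 then -10" (really worth +10 net), so it trades
# the sequence away for +9 every round, leaking 1 unit per trade.
ROUNDS = 10
agent_wealth = 0
bookie_profit = 0
for _ in range(ROUNDS):
    claim_value = 20 - 10   # what the (+20, -10) sequence is worth net
    payment = 9             # what the indifferent agent accepts instead
    agent_wealth += payment
    bookie_profit += claim_value - payment

# The agent ends with 90 where keeping the claims would have given 100.
print(agent_wealth, bookie_profit)  # 90 10
```

The agent prefers (or is indifferent to) every individual trade, yet the bookie extracts one unit per round: exactly the transitivity-violating money pump.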
1Nick_Tarleton
Differing utilities for loss vs. gain introduce an apparently absurd degree of path dependence, in which, say, gaining $10 is perceived differently from gaining $20 and immediately thereafter losing $10. Loss vs. gain asymmetry isn't in conflict with expected utility maximization (though nonlinear probability weighting is), but it is inconsistent with stronger intuitions about what we should be doing.

"Different decision theories" usually means, e.g., causal decision theory vs. evidential decision theory vs. whatever it is Eliezer has developed. Which of these you use is (AFAIK) orthogonal to what preferences you have, so I assume that doesn't answer your real question. A reference on different types of utilitarianism might be a little more like what you're looking for, but I can't think of anyone who has catalogued different proposed selfish utility functions.
Velochy00

One thing that came to mind just this morning: why is expected utility maximization the most rational thing to do? As I understand it (and I'm a CS, not an Econ, major), prospect theory and the utility function weighting used in it are usually accepted as describing how most "irrational" people make their decisions. But this might not be because they are irrational, but rather because our utility functions actually do behave that way, in which case we should abandon EU and just try to maximize well-being with all the quirks PT introduces (such as loss being more... (read more)

Velochy10

Hello,

My name is Margus Niitsoo and I'm a 22-year-old Computer Science doctoral student in Tartu, Estonia. I have wide interests that span religion and psychology as well (I am a pantheist, by the way, so somewhat religious but unaffected by most of the classical theism-bashing). I got here through OB, which I got to while reading about AI and the thing that shall not be named.

I do not identify myself as a rationalist, for I only recently understood how emotional a person I really am, and I'd like to enjoy it before trying to get it under control again. However, I am interested in understanding human behaviour as best I can, and this blog has given me many new insights I doubt I could have gotten anywhere else.

3MBlume
Note that rationality does not necessarily oppose emotion. See "Feeling Rational" and "The Twelve Virtues".
1thomblake
Note that rationality and emotion are not mutually exclusive, and thinking that they are can get you into trouble. Good reference, anyone? I'd recommend Aristotle. ETA: Yes, Vladimir_Nesov's link, below, is what I was looking for.
Velochy20

Another thing that comes to mind is that one might try to get some groups already interested in this topic (in theory) to read LW and OB. One such group I can think of is LaVeyan Satanists. In theory, it is a religion of rationality (although, in practice, it is often rather far from it... I'm just lucky to know a specimen who embodies the theory). Then again, this might not be an association we want (especially in the US; it would even be rather bad here in Estonia, where most of the country is atheistic)... but there should be some other... (read more)

Velochy50

The game of "Paranoid Debating" ( http://lesswrong.com/lw/77/selecting_rationalist_groups/6lb ) would make for a great game show, and it would definitely increase the popularity of rationality. Someone should try pitching it to a TV station...

Velochy60

Just reminding everyone of one more sad thing: every good cause to rally people under generally needs an enemy. And if there isn't one, it usually develops or is found. People somehow just want to be against things rather than for them...

Also, atheism seems to be one of the few things most of us here have in common, so Matt Newport's post hits a nail there. We have a tradition of bashing theism. Traditions go a long way towards cementing a sense of community, so they do have a positive side. But the fact is that once a tradition has developed, people who break it are usually viewed as outsiders in some sense, so it makes sense for people to stick to the traditions.

Velochy20

Mental energy is actually a limiting factor, and I believe it causes more failures than people care to admit. That is, we humans have a tendency to pick our battles: we have a limited amount of time and thinking resources, and as such invest large amounts of both in only a very small set of decisions. This means that most decisions get made rather automatically... which (as has been argued in previous articles) is rather normal. However, I think that a rationalist should be able to determine whether the thing he messed up was so... (read more)

Velochy20

One thing might be worth mentioning. For most religions, helping others is a shared core value. This means that everyone joining a church can expect (even on a rational level) to receive a warm welcome. As the communities themselves also feel they ought to be warm to newcomers, that is what usually happens too.

The problem rationalists face is that the community is essentially dog-eat-dog, where everyone tries their best to scrutinize others' thoughts (as this is the "rational" thing to do) and then to bash the hell out of th... (read more)

Velochy30

I just noticed a study that might be relevant here, cited in a classic social psychology textbook I was reading (Baron & Byrne). The article they refer to is Graziano et al., "Social influence, sex differences, and judgments of beauty: Putting the interpersonal back in interpersonal attraction" (1993), and the relevant result is that when women are shown another woman's assessment of a man, their own assessment moves towards it. This would make the result discussed here just a corollary of a self-fulfilling prophecy. I do not have time to read either article at the moment, but it would probably do some good if someone looked over the Graziano paper and verified whether or not it is relevant.