findis
findis has not written any posts yet.

The standard definition of "rationality" in economics is "having complete and transitive preferences", and sometimes "having complete and transitive preferences and adhering to the von Neumann-Morgenstern axioms". Not the way it's used on Less Wrong.
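For reference, the two axioms in that first definition can be written out formally (a standard formulation, not tied to any particular source in this thread), for a preference relation $\succeq$ over a set of alternatives $X$:

$$\text{Completeness: } \forall x, y \in X,\ x \succeq y \ \text{ or } \ y \succeq x; \qquad \text{Transitivity: } x \succeq y \ \text{ and } \ y \succeq z \ \Rightarrow\ x \succeq z.$$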
I think the really cool thing about game theory is how far you can go by stating the form of a game and deriving what someone will do, or the possible paths they may take, assuming only that they have rational preferences.
Wouldn't a rational consequentialist estimate the odds that the policy will have unpredictable and harmful consequences, and take this into consideration?
Regardless of how well it works, consequentialism essentially underlies public policy analysis and I'm not sure how one would do it otherwise. (I'm talking about economists calculating deadweight loss triangles and so on, not politicians arguing that "X is wrong!!!")
Why is whether your decision actually changes the boxes important to you? [....] If you argue yourself into a decision theory that doesn't serve you well, you've only managed to shoot yourself in the foot.
In the absence of my decision affecting the boxes, taking one box and leaving $1000 on the table still looks like shooting myself in the foot. (Of course if I had the ability to precommit to one-box I would -- so, okay, if Omega ever asks me this I will take one box. But if Omega asked me to make a decision after filling the boxes and before I'd made a precommitment... still two boxes.)
I think I'm going to back out of this discussion until I understand decision theory a bit better.
Do you choose to hit me or not?
No, I don't, since you have a time-turner. (To be clear, non-hypothetical-me wouldn't hit non-hypothetical-you either.) I would also one-box if I thought that Omega's predictive power was evidence that it might have a time-turner or some other way of affecting the past. I still don't think that's relevant when there's no reverse causality.
Back to Newcomb's problem: Say that brown-haired people almost always one-box, and people with other hair colors almost always two-box. Omega predicts on the basis of hair color: both boxes are filled iff you have brown hair. I'd two-box, even though I have brown hair. It would be logically inconsistent... (read more)
you will achieve a net gain of $4950*p(x) over a non-committer (a very small number admittedly given that p(x) is tiny, but for the sake of the thought experiment all that matters is that it's positive.)
Given that someone who makes such a precommitment comes out ahead of someone who doesn't - shouldn't you make such a commitment right now?
Right now, yes, I should precommit to pay the $100 in all such situations, since the expected value is p(x)*$4950.
If Omega just walked up to me and asked for $100, and I had never considered this before, the value of this commitment is now p(x)*$4950 - $100, so I would not pay unless I thought there was more than a 2% chance this would happen again.
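Spelling out the arithmetic behind that 2% figure, using the $4950 and $100 from the exchange above:

$$p(x)\cdot 4950 - 100 > 0 \iff p(x) > \tfrac{100}{4950} \approx 0.02,$$

so paying without a prior commitment only makes sense if the chance of facing such a situation again is above roughly 2%.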
The difference between this scenario and the one you posited before, where Ann's mom makes her prediction by reading your philosophy essays, is that she's presumably predicting on the basis of how she would expect you to choose if you were playing Omega.
Ok, but what if Ann's mom is right 99% of the time about how you would choose when playing her?
I agree that one-boxers make more money, with the numbers you used, but I don't think that those are the appropriate expected values to consider. Conditional on the fact that the boxes have already been filled, two-boxing has a $1000 higher expected value. If I know only one box is... (read more)
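The dominance argument findis is appealing to can be written in one line (assuming the usual Newcomb payoffs of $1,000 in the transparent box and $1,000,000 in the opaque one):

$$\text{EV}(\text{two-box}\mid B) = B + 1000 \;>\; B = \text{EV}(\text{one-box}\mid B), \qquad B \in \{0,\ 1{,}000{,}000\},$$

where $B$ is the already-fixed content of the opaque box.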
I think it is worth preserving a distinction between the specific kind of signaling Patrick describes and a weaker definition, because "true signaling" explains a specific phenomenon: in equilibrium, there seems to be too much effort expended on something, but everyone is acting in their own best interest. "High-quality" people do something to prove they are high quality, and "low-quality" people imitate this behavior. If education is a signal, people seem to get "too much" education for what their jobs require.
As in an exam problem I recently heard about: Female bullfrogs prefer large male bullfrogs. Large bullfrogs croak louder. In the dark, small bullfrogs croak loudly to appear large. To signal... (read more)
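A stripped-down version of the education story, in the spirit of Spence's signaling model (my own illustrative notation, not something from Patrick's post or the exam problem): suppose employers pay a wage premium $\Delta w$ to anyone with a degree, and the degree costs $c_H$ in effort for high-quality workers and $c_L > c_H$ for low-quality ones. A separating equilibrium, in which the degree actually works as a signal, requires

$$c_H \le \Delta w < c_L,$$

so high types get the degree, low types don't bother imitating, and the degree can look like "too much" education even if it teaches nothing the job requires.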
Differences in conformity: women may conform a bit more to widespread social views (at least, to views of "their social class") and/or compartmentalize more between what they learn about a specific topic and their general views. This would mean female scientists would be slightly less likely to be atheists in religious countries, female theology students would be slightly less likely to be fanatics in not-that-fanatical societies, etc.
We need to look at differences between men and women conditional on the fact that they've become economists, not just differences between men and women. Becoming a professional economist requires more nonconformity for a woman than for a man -- deciding to pursue a gender-atypical job,... (read more)
To be properly isomorphic to the Newcomb's problem, the chance of the predictor being wrong should approximate to zero.
If I thought that the chance of my friend's mother being wrong approximated to zero, I would of course choose to one-box. If I expected her to be an imperfect predictor who assumed I would behave as if I were in the real Newcomb's problem with a perfect predictor, then I would choose to two-box.
Hm, I think I still don't understand the one-box perspective, then. Are you saying that if the predictor is wrong with probability p, you would take two boxes for high p and one box for a sufficiently small p (or just... (read more)
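For reference, the expected-value comparison that question is gesturing at, computed without conditioning on the box contents and assuming the usual $1,000 / $1,000,000 payoffs and a predictor who is wrong with probability p:

$$\mathbb{E}[\text{one-box}] = (1-p)\cdot 1{,}000{,}000, \qquad \mathbb{E}[\text{two-box}] = p\cdot 1{,}000{,}000 + 1{,}000,$$

so on that calculation one-boxing comes out ahead whenever $p < \tfrac{999{,}000}{2{,}000{,}000} \approx 0.4995$. (This is the unconditional comparison; findis's earlier comment argues for conditioning on the boxes already having been filled.)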
Yep. The most common model that yields a rational agent who will choose to restrict zir own future actions is beta-delta discounting, or time-inconsistent preferences. I've had problem sets with such questions, usually involving a student procrastinating on an assignment; I don't think I can copy them, but let me know if you want me to sketch out how such a problem might look.
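A minimal sketch of how such a problem might look, with made-up numbers (β = 0.5, δ = 1, and an effort cost that grows the longer the student waits) rather than anything from an actual problem set:

```python
# Toy beta-delta (quasi-hyperbolic) discounting example: a naive student deciding
# each period whether to write a paper now or put it off. Numbers are illustrative.

beta, delta = 0.5, 1.0          # present bias and standard discount factor
cost = [3.0, 5.0, 8.0, 13.0]    # effort cost of doing the task in periods 0..3

def naive_choice(t):
    """In period t, compare doing the task now with the best-looking later period.
    Future costs are discounted by beta * delta**(s - t), so they look small today."""
    do_now = -cost[t]
    if t + 1 < len(cost):
        do_later = max(-beta * delta ** (s - t) * cost[s]
                       for s in range(t + 1, len(cost)))
    else:
        do_later = float("-inf")  # last period: no option left but to do it
    return "now" if do_now >= do_later else "later"

for t in range(len(cost)):
    plan = naive_choice(t)
    print(f"period {t}: does the task {plan}")
    if plan == "now":
        break
# With these numbers the naive agent defers in periods 0-2 and only acts in
# period 3, paying an effort cost of 13 instead of 3 -- the usual procrastination result.
```

An agent with β = 1 (plain exponential discounting) would do the task in period 0 with these numbers, which is the contrast such problem sets usually ask students to work out.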
Actually, maybe the most instrumental-rationality-enhancing topics to cover that have legitimate game theoretic aspects are in behavioral economics. Perhaps you could construct examples where you contrast the behavior of an agent who interprets probabilities in a funny way, as in Prospect Theory, with an agent who obeys the vNM axioms.
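One way such a contrast might be set up, using the Tversky-Kahneman probability weighting function with an illustrative γ and a made-up lottery (nothing here is from findis's comment beyond the general idea):

```python
# Contrast a vNM expected-utility agent with one who distorts probabilities via the
# Tversky-Kahneman (1992) weighting function. Payoffs and gamma are illustrative.

def tk_weight(p, gamma=0.61):
    """Probability weighting function: overweights small probabilities."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# A lottery ticket: costs 1, pays 400 with probability 0.001 (expected value -0.6).
price, prize, p_win = 1.0, 400.0, 0.001

# Risk-neutral vNM agent: values the ticket at its expected value.
ev_vnm = p_win * prize - price

# Probability-weighting agent (kept risk-neutral in outcomes to isolate the
# effect of the weighting function alone).
ev_weighted = tk_weight(p_win) * prize - price

print(f"vNM agent:       {ev_vnm:+.2f}  -> rejects the ticket")
print(f"weighting agent: {ev_weighted:+.2f}  -> accepts it, since w(0.001) > 0.001")
```

With these numbers the weighting agent values the ticket at roughly +4.8 while the vNM agent values it at -0.6, so the two agents choose differently on the same gamble.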