findis

Yep. The most common model that yields a rational agent who will choose to restrict zir own future actions is beta-delta discounting, a form of time-inconsistent preferences. I've had problem sets with such questions, usually involving a student procrastinating on an assignment; I don't think I can copy them, but let me know if you want me to sketch out how such a problem might look.
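In the meantime, here is a rough sketch of the flavor, not one of the actual problems, with made-up costs and parameters (quasi-hyperbolic beta-delta discounting, a student choosing which day to do an assignment):

    # Hypothetical numbers: effort cost of doing the assignment on day 1, 2, or 3.
    beta, delta = 0.5, 1.0          # present bias and standard discount factor
    costs = [1.0, 1.5, 2.5]

    def discounted_cost(today, do_day):
        """Cost of doing the task on do_day, as evaluated by the self living on day `today`."""
        if do_day == today:
            return costs[do_day]
        return beta * delta ** (do_day - today) * costs[do_day]

    # Day-1 self ranks the three options:
    print([discounted_cost(0, d) for d in range(3)])        # [1.0, 0.75, 1.25] -> plans to work on day 2
    # But once day 2 arrives, that self re-optimizes:
    print([discounted_cost(1, d) for d in range(1, 3)])     # [1.5, 1.25] -> now prefers day 3

The day-1 self would therefore pay for a commitment device that binds the day-2 self, which is exactly the sense in which beta-delta preferences make self-restriction rational.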

Actually, maybe the most instrumental-rationality-enhancing topics to cover that have legitimate game theoretic aspects are in behavioral economics. Perhaps you could construct examples where you contrast the behavior of an agent who interprets probabilities in a funny way, as in Prospect Theory, with an agent who obeys the vNM axioms.
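For instance (my own toy numbers, not from any particular problem set): contrast a risk-neutral vNM agent with a prospect-theory agent using the Tversky-Kahneman (1992) weighting and value functions, facing a 1% shot at $5000 versus a sure $50.

    def pt_weight(p, gamma=0.61):
        # Tversky-Kahneman probability weighting for gains
        return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

    def pt_value(x, alpha=0.88):
        # value function for gains, reference point at 0
        return x**alpha

    p, prize, sure = 0.01, 5000, 50
    print(p * prize, sure)                                   # 50.0 vs 50: the risk-neutral vNM agent is indifferent
    print(pt_weight(p) * pt_value(prize), pt_value(sure))    # ~99.4 vs ~31.3: the PT agent takes the long shot

The divergence comes entirely from w(0.01) being roughly 0.055, far above 0.01, which is the kind of contrast an example could be built around.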

findis

The standard definition of "rationality" in economics is "having complete and transitive preferences", and sometimes "having complete and transitive preferences and adhering to the von Neumann-Morgenstern axioms". Not the way it's used on Less Wrong.

I think the really cool thing about game theory is how far you can go by stating the form of a game and deriving what someone will do, or the possible paths they may take, assuming only that they have rational preferences.
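As a small illustration of that idea (my own example, a standard prisoner's dilemma): specify nothing but the payoff matrix and apply iterated elimination of strictly dominated strategies.

    # (row_action, col_action) -> (row_payoff, col_payoff)
    payoffs = {("C", "C"): (3, 3), ("C", "D"): (0, 4),
               ("D", "C"): (4, 0), ("D", "D"): (1, 1)}

    def strictly_dominated(player, a, remaining):
        """True if some other available action beats `a` against every remaining opponent action."""
        opponent_actions = remaining[1 - player]
        def payoff(mine, theirs):
            cell = payoffs[(mine, theirs)] if player == 0 else payoffs[(theirs, mine)]
            return cell[player]
        return any(all(payoff(b, o) > payoff(a, o) for o in opponent_actions)
                   for b in remaining[player] if b != a)

    remaining = [["C", "D"], ["C", "D"]]
    changed = True
    while changed:
        changed = False
        for p in (0, 1):
            for a in list(remaining[p]):
                if strictly_dominated(p, a, remaining):
                    remaining[p].remove(a)
                    changed = True

    print(remaining)   # [['D'], ['D']] -- defection is all that survives for either player

Nothing about the players' psychology went in beyond rational (complete, transitive) preferences over the outcomes.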

findis

Wouldn't a rational consequentialist estimate the odds that the policy will have unpredictable and harmful consequences, and take this into consideration?

Regardless of how well it works, consequentialism essentially underlies public policy analysis and I'm not sure how one would do it otherwise. (I'm talking about economists calculating deadweight loss triangles and so on, not politicians arguing that "X is wrong!!!")
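To make "calculating deadweight loss triangles" concrete, here is a toy example with numbers I made up: linear demand P = 10 - Q, linear supply P = Q, and a per-unit tax t.

    def deadweight_loss(t, demand_intercept=10.0, demand_slope=1.0, supply_slope=1.0):
        q_no_tax = demand_intercept / (demand_slope + supply_slope)       # market quantity without the tax
        q_tax = (demand_intercept - t) / (demand_slope + supply_slope)    # quantity once the tax drives a wedge
        return 0.5 * t * (q_no_tax - q_tax)                               # area of the triangle

    print(deadweight_loss(2.0))   # 1.0: the surplus lost on the trades the tax prevents

A consequentialist policy analysis then weighs that loss against whatever the revenue or the regulation is supposed to buy.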

findis

Why is whether your decision actually changes the boxes important to you? [....] If you argue yourself into a decision theory that doesn't serve you well, you've only managed to shoot yourself in the foot.

In the absence of my decision affecting the boxes, taking one box and leaving $1000 on the table still looks like shooting myself in the foot. (Of course if I had the ability to precommit to one-box I would -- so, okay, if Omega ever asks me this I will take one box. But if Omega asked me to make a decision after filling the boxes and before I'd made a precommitment... still two boxes.)

I think I'm going to back out of this discussion until I understand decision theory a bit better.

findis

Do you choose to hit me or not?

No, I don't, since you have a time-turner. (To be clear, non-hypothetical-me wouldn't hit non-hypothetical-you either.) I would also one-box if I thought that Omega's predictive power was evidence that it might have a time turner or some other way of affecting the past. I still don't think that's relevant when there's no reverse causality.

Back to Newcomb's problem: Say that brown-haired people almost always one-box, and people with other hair colors almost always two-box. Omega predicts on the basis of hair color: both boxes are filled iff you have brown hair. I'd two-box, even though I have brown hair. It would be logically inconsistent for me to find that one of the boxes is empty, since everyone with brown hair has both boxes filled. But this could be true of any attribute Omega uses to predict.

I agree that changing my decision conveys information about what is in the boxes and changes my guess of what is in the boxes... but doesn't change the boxes.

findis

you will achieve a net gain of $4950*p(x) over a non-committer (a very small number admittedly given that p(x) is tiny, but for the sake of the thought experiment all that matters is that it's positive.)

Given that someone who makes such a precommitment comes out ahead of someone who doesn't - shouldn't you make such a commitment right now?

Right now, yes, I should precommit to pay the $100 in all such situations, since the expected value is p(x)*$4950.

If Omega just walked up to me and asked for $100, and I had never considered this before, the value of this commitment is now p(x)*$4950 - $100, so I would not pay unless I thought there was more than a 2% chance this would happen again.
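Spelling out that arithmetic with the numbers from the thread:

    gain_if_precommitted = 4950.0   # expected gain per future coin-flip, given the precommitment
    cost_now = 100.0                # the $100 being asked for in the situation already at hand

    # Precommitting in advance is worth p * 4950, which is positive for any p > 0.
    # Paying only once Omega has already asked pays off solely through future repetitions:
    #   p * 4950 - 100 > 0   <=>   p > 100 / 4950
    print(cost_now / gain_if_precommitted)   # ~0.0202, i.e. the roughly 2% threshold above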

findis

The difference between this scenario and the one you posited before, where Ann's mom makes her prediction by reading your philosophy essays, is that she's presumably predicting on the basis of how she would expect you to choose if you were playing Omega.

Ok, but what if Ann's mom is right 99% of the time about how you would choose when playing her?

I agree that one-boxers make more money, with the numbers you used, but I don't think that those are the appropriate expected values to consider. Conditional on the fact that the boxes have already been filled, two-boxing has a $1000 higher expected value. If I know only one box is filled, I should take both. If I know both boxes are filled, I should take both. If I know I'm in one of those situations but not sure of which it is, I should still take both.
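To make the two competing calculations explicit (standard Newcomb payoffs, with q the probability that the opaque box contains the $1,000,000):

    def ev_fixed(q):
        # My framing: q is whatever it is, regardless of what I choose.
        return {"one_box": q * 1_000_000,
                "two_box": q * 1_000_000 + 1_000}     # always exactly $1000 higher

    def ev_conditioned_on_choice(accuracy=0.99):
        # The one-boxer's framing: condition q on your own choice via the predictor's accuracy.
        return {"one_box": accuracy * 1_000_000,
                "two_box": (1 - accuracy) * 1_000_000 + 1_000}

    print(ev_fixed(0.5))                  # one_box: 500000, two_box: 501000
    print(ev_conditioned_on_choice())     # one_box: ~990000, two_box: ~11000

The disagreement is over which of these is the right expected value to act on.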

Another analogous situation would be that you walk into an exam, and the professor (who is a perfect or near-perfect predictor) announces that he has written down a list of people whom he has predicted will get fewer than half the questions right. If you are on that list, he will add 100 points to your score at the end. The people who get fewer than half of the questions right get higher scores, but you should still try to get questions right on the test... right? If not, does the answer change if the professor posts the list on the board?

I still think I'm missing something, since a lot of people have thought carefully about this and come to a different conclusion from me, but I'm still not sure what it is. :/

findis

I think it is worth preserving a distinction between the specific kind of signaling Patrick describes and a weaker definition, because "true signaling" explains a specific phenomenon: in equilibrium, there seems to be too much effort expended on something, but everyone is acting in their own best interest. "High-quality" people do something to prove they are high quality, and "low-quality" people imitate this behavior. If education is a signal, people seem to get "too much" education for what their jobs require.

As in an exam problem I recently heard about: Female bullfrogs prefer large male bullfrogs. Large bullfrogs croak louder. In the dark, small bullfrogs croak loudly to appear large. To signal that they are the true large frogs, large ones croak even louder. When everyone is croaking as loudly as they can, croaking quietly makes a frog look incapable of croaking loudly and therefore small. Result: swamps are really noisy at night.

Or, according to this paper, people "expect a high-quality firm to undertake ambitious investments". Investment is a signal of quality: low-quality firms invest more ambitiously to look high-quality. Then high-quality firms invest more to prove they are the true high-quality firms. Result: firms over-invest.

In this sense, you can also signal that you are serious about a friendship, job, or significant other, but only where your resources are limited. An expensive engagement ring is a good signal of your seriousness -- hence expensive diamond engagement rings instead of cubic zirconia. Or, applying to college and sending a video of yourself singing the college's fight song is a good signal that you will attend if admitted, and writing a gushing essay is a cheap imitation signal of that devotion. Hence, high school seniors look like they spend way too much effort telling colleges how devoted they are.

So you might use signaling to explain why "too many" people get "useless" degrees studying classics, or why swamps are "too loud", or why engagement rings are "too expensive". I don't think it's true that too many people pretend to be Republicans, or that too many birthday cards are sent.
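For what it's worth, the "too much effort in equilibrium" logic fits in a few lines. A bare-bones Spence-style sketch with numbers I made up, where education is pure signal and adds nothing to productivity:

    productivity = {"high": 100, "low": 50}     # what each type is actually worth to an employer
    signal_cost = {"high": 5, "low": 10}        # cost per year of education; cheaper for the high type

    # Low types won't mimic as long as:  100 - 10*e <= 50
    e_min = (productivity["high"] - productivity["low"]) / signal_cost["low"]
    # High types still come out ahead as long as:  100 - 5*e >= 50
    e_max = (productivity["high"] - productivity["low"]) / signal_cost["high"]

    print(e_min, e_max)   # 5.0 10.0 -- any education level in this range separates the types

Everyone is optimizing, the types get correctly sorted, and all of that education is still "wasted" effort in exactly the sense above.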

findis

Differences in conformity: women may conform a bit more to widespread social views (at least, to views of "their social class") and/or compartmentalize more between what they learn about a specific topic and their general views. This would mean female scientists would be slightly less likely to be atheists in religious countries, female theology students would be slightly less likely to be fanatics in not-that-fanatical societies, etc.

We need to look at differences between men and women conditional on the fact that they've become economists, not just differences between men and women. Becoming a professional economist requires more nonconformity for a woman than for a man -- deciding to pursue a gender-atypical job, having peers and mentors that are mostly male, and delaying having children or putting a lot of time into family life until you're 30, at least.

Different subfields in economics: Maybe "economics" shouldn't be considered one big blob - there may be some subfields that have more in common with other social sciences (and thus have a more female student body, and a more "liberal" outlook), and some more in common with maths and business.

There are more women in fields you might expect to be more liberal, and fewer in fields like theory (see http://www.cepr.org/meets/wkcn/3/3530/papers/Dolado.pdf). Women seem to be more concentrated in public economics (taxes) and economic development. They are less concentrated in theory... and in the large field of "other". When you define the fields differently, women are especially well represented (compared to the mean) in "health, education, and welfare" and "labour and demographic economics".

It would be interesting to see how, say, health economists view employer-provided health insurance rules.

findis

To be properly isomorphic to the Newcomb's problem, the chance of the predictor being wrong should approximate to zero.

If I thought that the chance of my friend's mother being wrong approximated to zero, I would of course choose to one-box. If I expected her to be an imperfect predictor who assumed I would behave as if I were in the real Newcomb's problem with a perfect predictor, then I would choose to two-box.

Hm, I think I still don't understand the one-box perspective, then. Are you saying that if the predictor is wrong with probability p, you would take two boxes for high p and one box for a sufficiently small p (or just for p=0)? What changes as p shrinks?

Or what if Omega/Ann's mom is a perfect predictor, but for a random 1% of the time decides to fill the boxes as if it made the opposite prediction, just to mess with you? If you one-box for p=0, you should believe that taking one box is correct (and generates $1 million more) in 99% of cases and that two boxes is correct (and generates $1000 more) in 1% of cases. So taking one box should still have a far higher expected value. But the perfect predictor who sometimes pretends to be wrong behaves exactly the same as an imperfect predictor who is wrong 1% of the time.
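A quick check of that last claim (a simulation I wrote, not anything from the thread): a perfect predictor that deliberately misfires 1% of the time and a predictor that is simply wrong 1% of the time generate the same payoff distribution.

    import random

    def average_payoff(strategy, n=100_000, accuracy=0.99):
        total = 0
        for _ in range(n):
            correct = random.random() < accuracy
            predicted_one_box = (strategy == "one") if correct else (strategy != "one")
            opaque = 1_000_000 if predicted_one_box else 0
            total += opaque if strategy == "one" else opaque + 1_000
        return total / n

    print(average_payoff("one"), average_payoff("two"))   # roughly 990,000 vs 11,000

So whatever answer is right for the 1%-wrong predictor has to be right for the 99%-mischievous one too.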
