Agreed, but I think the reason this experiment is interesting is that it previously didn't occur to people (or at least to me) that trustingness is a possible alternative explanation of the classic marshmallow experiment, rather than self control. It was a blind spot.
It didn't occur to me either, but ironically it was the first thing my wife suggested when I told her about the marshmallow experiment yesterday (it came up in the context of that professor's recent comments about fat people, self control, and PhD programs). This post's timing was thus quite serendipitous.
It probably occurred to her because she is a doctor who works primarily with poor patients, many of them Black and Hispanic, and so is used to the mistrust that comes with crossing cultural and socio-economic lines.
I really don't see how this casts doubt on the original experiment. Suppose we express a child's decision as maximizing expected reward minus the cost of waiting, where the latter takes "self control" as a parameter. If we lower expected reward, (nearly) all the kids eat the marshmallow. If we raise expected reward (by reinforcing waiting twice), about half the kids wait. But still, 6/14 kids in the second group didn't wait, so clearly there's variance from another source.
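The decision model above can be sketched as a few lines of code. Everything here (parameter names, values, and functional forms) is an illustrative assumption of mine, not something from the study:

```python
def waits(trust, self_control, reward_now=1.0, reward_later=2.0):
    """Toy model: a child waits iff the perceived expected reward for
    waiting exceeds the immediate reward plus the cost of waiting.

    `trust` (in [0, 1]) scales the probability the child assigns to the
    adult actually delivering the second marshmallow; higher
    `self_control` (> 0) lowers the subjective cost of waiting.
    """
    expected_reward = trust * reward_later
    cost_of_waiting = 1.0 / self_control
    return expected_reward - cost_of_waiting > reward_now

# With low trust (unreliable adult), almost no child waits regardless of
# self control; with high trust, whether a child waits still depends on
# self control, so variance remains even within the "reliable" group.
```

This makes the point concrete: both parameters move the outcome, so observing that manipulating trust changes behavior doesn't rule out self control as a second source of variance.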
The other source of variance could still be the children's "trustingness." The more trusting children could have a higher expected reward even after the kids are shown that the adults are reliable/unreliable. So the results are consistent with both of the following hypotheses:
- More trusting children will wait longer and self control is not relevant
- More trusting children will wait longer and children with more self control will wait longer
But this experiment ruled out the following:
- It doesn't matter if a child is more trusting; only self control affects how long they wait
> But this experiment ruled out the following:
> - It doesn't matter if a child is more trusting; only self control affects how long they wait
I agree, but I don't think anyone believed that nothing else matters to marshmallow eating.
We have to distinguish between the propositions:
(P1) A significant fraction of the variance in marshmallow eating among children observed in past experiments is explained by trustingness.
(P2) Inducing large changes in trustingness in children produces changes in marshmallow eating behavior.
This study supports (P2), but it is only informative about (P1) to the extent someone previously assigned substantial probability mass to the proposition:
(P3) There is large variation in children's trustingness, but trustingness doesn't affect children's marshmallow eating decision.
I suspect most people didn't assign much probability to (P3), and so this study shouldn't change their opinion very much.
> I really don't see how this casts doubt on the original experiment. Suppose we express a child's decision as maximizing expected reward minus the cost of waiting, where the latter takes "self control" as a parameter. If we lower expected reward, (nearly) all the kids eat the marshmallow. If we raise expected reward (by reinforcing waiting twice), about half the kids wait. But still, 6/14 kids in the second group didn't wait, so clearly there's variance from another source.
One way to tease out this connection might be to compare the kids who waited to the kids who tried to hold out and ate the marshmallow late (say after 10 minutes). Presumably the latter group trusted the adults, and their failure to wait was due to lack of self control. Now compare those two groups 10 years later.
Two-boxers think that decisions are things that can just fall out of the sky uncaused. (This can be made precise by a suitable description of how two-boxers set up the relevant causal diagram; I found Anna Salamon's explanation of this particularly clear.) This view of how decisions work is driven by intuitions that should be dispelled by sufficient knowledge of cognitive and/or computer science. I think acquiring such background will make you more sympathetic to the perspective that one should think in terms of winning agent types rather than winning decisions.
I also think there's a tendency among two-boxers not to take the stakes of Newcomb's problem seriously enough. Suppose that instead of offering you a million dollars Omega offers to spare your daughter's life. Now what do you do?
> Two-boxers think that decisions are things that can just fall out of the sky uncaused.
But don't LW one-boxers think that decision ALGORITHMS are things that can just fall out of the sky uncaused?
As an empirical matter, I don't think humans are psychologically capable of time-consistent decisions in all cases. For instance, TDT implies that one should one-box even in a version of Newcomb's problem in which one can SEE the contents of the boxes. But would a human being really leave the other box behind if the contents were things they REALLY valued (like the lives of close friends), and they could actually see them? I think that would be hard for a human to do, even if ex ante they might wish to reprogram themselves to do so.
I'm going to make a meta-comment here.
I think that your ultimate goal should NOT be to convince your dad that you are right and he is wrong. If he eventually changes his mind, he's going to have to do that on his own. Debates just don't change participants' minds very often.
Instead, your goal should be to make him respect your beliefs as genuine.
Christians generally respect people who are genuinely seeking truth, in part because the Bible promises that "those who seek will find". The good news is that you ARE legitimately seeking truth, so you should be able to convince him of this.
Hopefully you already have a good relationship with your father based on mutual love and respect. You want to build on that and preserve it as much as possible. He is going to be your dad for the rest of your life, and how you interact with him now is going to determine in part how that relationship develops.
More practically: It sounds like you aren't sure exactly why you've changed your mind, and are having difficulty articulating it. Nobody on this site is going to be able to articulate it for you. Rationality is a method, not a conclusion. So here is my suggestion: do a stack-trace on your change of belief. It happened, so it is causally entangled with some set of arguments and evidence you encountered. Go back and try to figure out what caused you to change your mind. Reconstruct as best you can, in your own words, as exactly and precisely as possible, why you changed your mind.
This exercise will help you to understand what you believe and why. Discussing this with your father will be grounds for a future relationship based on mutual love and respect. That should be the goal here.
Last piece of advice: spend some time with your dad doing something other than arguing. Go to a baseball game or something. Try to get some father-son time where you're not just talking about your beliefs. You want him to get used to the fact that you're the same person, and you don't want this to dominate your relationship.
I think people often dismiss systems like STV/IRV by essentially saying "Arrow's theorem implies you can still vote tactically, so it's just as bad". But there's a big difference: in STV it's much harder to figure out how to vote tactically.
In First Past The Post systems, tactical voting is blindingly obvious: if there are two candidates you like, but you don't think that your favourite has enough popularity to win outright, then you should vote for the other one, to avoid splitting the vote. This is easy to understand, and it's also easy to detect circumstances where it would be beneficial for you to vote other than your preferences.
OTOH, even though there are times where you can vote tactically in STV, they're harder to understand, and crucially, it's much harder to recognise such opportunities: you need a lot more information.
This means that, in general, STV would cut down on tactical voting a great deal, simply because it makes it harder.
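To make the contrast concrete, here is a minimal single-winner instant-runoff sketch. The candidate names and vote counts are made up for the example, and real STV rules (multi-winner quotas, tie-breaking) are more involved than this:

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff count: repeatedly eliminate the candidate with the
    fewest first-choice votes and transfer those ballots to each voter's
    next surviving choice, until someone holds a majority."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        tallies = Counter(ballot[0] for ballot in ballots if ballot)
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > sum(tallies.values()):
            return leader  # majority of remaining ballots
        loser = min(candidates, key=lambda c: tallies.get(c, 0))
        candidates.discard(loser)
        ballots = [[c for c in ballot if c != loser] for ballot in ballots]

# Hypothetical vote-splitting scenario: 35 voters rank A>B>C, 25 rank
# B>A>C, and 40 rank only C. Under FPTP, C wins on 40 first-choice votes
# unless B's supporters defect to A. Under IRV, B is eliminated first and
# those ballots transfer to A, so A wins without any tactical voting.
ballots = [["A", "B", "C"]] * 35 + [["B", "A", "C"]] * 25 + [["C"]] * 40
```

The FPTP tactic (abandon your favourite to avoid splitting the vote) is handled automatically by the transfer step, which is exactly why the remaining tactical opportunities in IRV/STV are so much harder to spot.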
I imagine that if polls showed that we were in a situation where strategic voting might be useful for people with certain preferences, the news media would report on it and people would learn about it.
I can see the headline now: "Mathematician says that if your preferences are 'A > B > C', you should vote 'B > A > C' in November!"
Such situations could be recognized by poll questions like "What is your preference ordering over these 3 candidates?" Candidate B's campaign would have a large incentive to publicize this information.
FWIW, I'm a grad student in econ, and in my experience undergrad and graduate macro are completely different. I recall Greg Mankiw sharing a similar sentiment on his blog at some point, but I can't be bothered to look it up.
I would say that undergrad and grad econ are very different methodologically (at least at most schools), but a lot of the content is the same.
Stephen Williamson's intermediate macro textbook tries to bring in a lot of grad-level models/concepts, albeit in a "toy" form.