Thanks, but I didn't mean a check on what these CDT-studying types would actually DO in that situation; I meant a check on whether they actually say two-boxing would be the "rational" thing to do in that hypothetical situation.
I haven't considered your transparency question, no. Does that mean Omega did exactly what he would have done if the boxes were opaque, except that they are in fact transparent (a fact that did not figure into the prediction)? Because in that case I'd just see the million in B and the thousand in A, and of course take 'em both.
Otherwise, Omega should be able to predict as well as I can that, knowing the rules of this game (that predictably choosing to take only box B and leave A alone means B will contain a million, and that both boxes are transparent, with the transparency figured into the prediction), I would expect to see a million in box B, take it, and just walk away from the paltry thousand in A.
This make sense?
Ahah. So do you remember whether you were confused on your own, for reasons generated by your own brain, or just by your knowledge that some experts were saying two-boxing was the 'rational' strategy?
But you're not saying that you would ever have actually decided to two-box rather than take box B if you found yourself in that situation, are you?
I mean, if you found yourself in that situation, you would always have decided that you were the kind of person Omega would have predicted to choose box B, right?
I am still so majorly confused here. :P
Ha! =]
Okay, I DO expect to see lots of 'people are crazy, the world is mad' stuff, yeah, I just wouldn't expect to see it on something like this from the kind of people who work on things like Causal Decision Theory! :P
So I guess what I really want to do first is CHECK which option is really most popular among such people: two-boxing, or predictably choosing box B?
Problem is, I'm not sure how to perform that check. Can anyone help me there?
Meaning you're in the same boat as me? Confused as to why this ever became a point of debate in the first place?
Yup yup, you're right, of course.
What I was trying to say, then, is that I don't understand why there's any debate about the validity of a decision theory that gets this wrong. I'm surprised everyone doesn't just go, "Oh, obviously any decision theory that says two-boxing is 'rational' is an invalid theory."
I'm surprised that this is a point of debate. I'm surprised, so I'm wondering, what am I missing?
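To be concrete about why it looks so lopsided to me, here's the back-of-the-envelope expected-value comparison. This is just my own sketch; the 99% predictor accuracy and the $1,000 / $1,000,000 payoffs are the usual illustrative numbers, not anything official:

```python
# Rough expected-value comparison for Newcomb's problem (my own toy numbers):
# box A always holds $1,000; box B holds $1,000,000 iff Omega predicted
# one-boxing; Omega is assumed to predict correctly with probability p.

p = 0.99  # assumed predictor accuracy

# One-box: you get B's contents, which are $1,000,000 iff the prediction was right.
ev_one_box = p * 1_000_000

# Two-box: you always get A; B is full only if Omega wrongly predicted one-boxing.
ev_two_box = 1_000 + (1 - p) * 1_000_000

print(f"one-box : ${ev_one_box:,.0f}")  # ~ $990,000
print(f"two-box : ${ev_two_box:,.0f}")  # ~ $11,000
```

As I understand it, the two-boxing side argues from dominance ("whatever is in B is already there, so taking both can't make you poorer"), not from these expected values, but that gap is what makes one-boxing feel obvious to me.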
Did I manage to make my question clearer like that?
Awesome. =]
If I say, "This isn't about a test of rationality itself, but a test for true free-thinking. All good rationalists must be free-thinkers, but not all free-thinkers are necessarily good rationalists", is that a good summary?
You know, I honestly don't even understand why this is a point of debate. One-boxing and taking box B (and being the kind of person who will predictably do that) seem so obviously like the rational strategy that it shouldn't even require explanation.
And not obvious in the same way most people think the Monty Hall problem (game show, three doors, goats behind two, sports car behind one, ya know?) seems 'obvious' at first.
In the case of the Monty Hall problem, you play with it, and the cracks start to show up, and you dig down to the surprising truth.
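("Playing with it" can be as simple as a throwaway simulation like this one of mine, which is enough to surface the surprising 2/3 answer:)

```python
import random

# Quick-and-dirty Monty Hall simulation (my own sketch): three doors, one car,
# the host opens a goat door you didn't pick, and you either stay or switch.
def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither your pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stay  :", play(switch=False))  # ~ 1/3
print("switch:", play(switch=True))   # ~ 2/3
```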
In this case, I don't see how anyone could see any cracks in the first place.
Am I missing something here?
Also Victoria, BC. Home of the sasquatch and pacific tree octopus, and where the conservative party is named 'the liberals'.
I don't think it's at all true that theism is a uniquely awful example.
What about Ayn Rand's Objectivism? I just ctrl-f'd for that on this page and I'm AMAZED to report that no one else seems to have mentioned this obvious example yet.
Things like dualism are right out, but so are non-mystical things like homeopathy.
Seems to me there are lots of simply referenceable ideas or schools of thought that rationality cleans up.
Secondly, though, I think that rationality often leads reliably to positions that are just a bit less easy to squirt out in a name or phrase like that. Take... free trade, for instance.
The popular positions are to argue for or against it (mind-killing politics), but I'd think a person using rationality should reliably come to the more nuanced position: free trade is, on the standard economic arguments, optimal in aggregate, but there are problems in the details of switching to it that need to be addressed (e.g., what happens when the safety standards of two areas are different? And while yes, everyone will be better off on average afterwards, what about the people who will be negatively affected by the transition? Do we want some policy for helping them through or what?).
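To unpack the "optimal in aggregate" bit: the standard argument is comparative advantage, and a toy calculation with made-up numbers (mine, purely for illustration) shows the aggregate gain without saying anything about who bears the transition costs:

```python
# Toy comparative-advantage numbers (entirely made up): hours of labour
# needed per unit of each good, with 100 hours available per country.
HOURS = {"A": {"wheat": 1, "cloth": 2},   # A is better at both goods...
         "B": {"wheat": 4, "cloth": 3}}   # ...but B is *relatively* better at cloth
LABOUR = 100

# No trade: each country splits its labour 50/50 between the two goods.
autarky = {c: {g: (LABOUR / 2) / HOURS[c][g] for g in ("wheat", "cloth")}
           for c in ("A", "B")}
totals_before = {g: autarky["A"][g] + autarky["B"][g] for g in ("wheat", "cloth")}

# With trade: B specialises fully in cloth (its comparative advantage);
# A makes just enough cloth to keep the world total the same, then puts
# every remaining hour into wheat.
b_cloth = LABOUR / HOURS["B"]["cloth"]
a_cloth = max(0, totals_before["cloth"] - b_cloth)
a_wheat = (LABOUR - a_cloth * HOURS["A"]["cloth"]) / HOURS["A"]["wheat"]
totals_after = {"wheat": a_wheat, "cloth": a_cloth + b_cloth}

print("before:", totals_before)  # {'wheat': 62.5, 'cloth': ~41.7}
print("after :", totals_after)   # {'wheat': ~83.3, 'cloth': ~41.7}
```

Same amount of cloth, noticeably more wheat overall; the catch, as above, is that B's wheat producers are the ones eating the transition.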
So rationality should lead reliably to similar conclusions in most areas, but the conclusions will often be more complex and nuanced than are commonly squawked back and forth in pop-culture.
Thirdly, big parts of these rational positions will be the "I don't know, so let's find out" spirit, with the agreement being on how to start looking.