All of Eoghanalbar's Comments + Replies

I don't think it's at all true that theism is a uniquely awful example.

What about Ayn Rand's Objectivism? I just ctrl-f'd for that on this page and I'm AMAZED to report that no one else seems to have mentioned this obvious example yet.

Things like dualism are right out, but also non-mystical things like homeopathy.

Seems to me there are lots of easily referenceable ideas or schools of thought that rationality cleans up.

Secondly, though, I think that rationality often leads reliably to positions that are just a bit less easy to squirt out in a name or phrase…

-1 TheAncientGeek
How is one small movement of amateur philosophers dissing another going to play? It's like the Mormons saying the Seventh-day Adventists are wrong. (Agreed that Objectivism sucks, BTW.)
0 FeepingCreature
I re-found this yesterday and I'm just gonna link it. Empirical data against libertarianism!

Thanks, but I meant not a check on what these CDT-studying-type people would DO if actually in that situation, but a check on whether they actually say that two-boxing would be the "rational" thing to do in that hypothetical situation.

I haven't considered your transparency question, no. Does that mean Omega did exactly what he would have done if the boxes were opaque, except that they are in fact transparent (a fact that did not figure into the prediction)? Because in that case I'd just see the million in B, and the thousand in A, and of course ta…

Ahah. So do you remember if you were confused yourself, for reasons generated by your own brain, or just by your knowledge that some experts were saying two-boxing was the 'rational' strategy?

But you're not saying that you would ever have actually decided to two-box rather than take box B if you found yourself in that situation, are you?

I mean, you would always have decided, if you found yourself in that situation, that you were the kind of person Omega would have predicted to choose box B, right?

I am still so majorly confused here. :P

2 Sniffnoy
I have no idea! IIRC I leaned towards one-boxing, but I was honestly confused about it.

Ha! =]

Okay, I DO expect to see lots of 'people are crazy, the world is mad' stuff, yeah, I just wouldn't expect to see it on something like this from the kind of people who work on things like Causal Decision Theory! :P

So I guess what I really want to do first is CHECK which option is really most popular among such people: two-boxing, or predictably choosing box B?

Problem is, I'm not sure how to perform that check. Can anyone help me there?

0 timtyler
I think this is the position of classical theorists on self-modifying agents (from "Rationality, Dispositions, and the Newcomb Paradox"): they agree that agents who can self-modify will take one box, but they call that action "irrational". So the debate really boils down to the definition of the term "rational" - it is not really concerned with the decision that rational agents who can self-modify will actually take. If my analysis here is correct, the dispute is really all about terminology.
2 wedrifid
It is fairly hard to perform such checks. We don't have many situations which are analogous to Newcomb's problem: we don't have perfect predictors, and most situations humans are in can be considered "iterated". At least, we can consider most people to be using their 'iterated' reasoning by mistake when we put them in one-off situations. The closest analogy that we can get reliable answers out of is the 'ultimatum game' with high stakes... in which people really do refuse weeks' worth of wages. By the way, have you considered what you would do if the boxes were transparent? Just sitting there, Omega long gone, and you can see the piles of cash in front of you... It's tricky. :)

Meaning you're in the same boat as me? Confused as to why this ever became a point of debate in the first place?

0 Sniffnoy
...no? I didn't realize that the decision theory could be varied, that the obvious decision theory could be invalid, so I hit a point of confusion with little idea what to do about it.

Yup yup, you're right, of course.

What I was trying to say, then, is that I don't understand why there's any debate about the validity of a decision theory that gets this wrong. I'm surprised everyone doesn't just go, "Oh, obviously any decision theory that says two-boxing is 'rational' is an invalid theory."

I'm surprised that this is a point of debate. I'm surprised, so I'm wondering, what am I missing?

Did I manage to make my question clearer like that?

1 wedrifid
It's a good question. You aren't missing anything. And "people are crazy, the world is mad" isn't always sufficient. ;)
3 Sniffnoy
I can say that for me personally, the hard part - that I did not get past till reading about it here - was noticing that there is actually such a variable as "what decision theory to use"; using a naive CDT sort of thing simply seemed rational a priori. Insufficient grasp of the nameless virtue, you could say.
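A minimal sketch of that "decision theory is a variable" idea, in Python I wrote for illustration (the payoffs, the accuracy parameter p, and all the names here are my own assumptions, not anything from the thread): the same Newcomb setup, with the decision rule passed in as a parameter.

```python
from typing import Callable

# Illustrative Newcomb payoffs; p is the predictor's assumed accuracy.
BIG, SMALL = 1_000_000, 1_000

def edt_value(action: str, p: float) -> float:
    # Evidential-style rule: treat the prediction as correlated with your action.
    if action == "one-box":
        return p * BIG
    return p * SMALL + (1 - p) * (BIG + SMALL)

def cdt_value(action: str, p: float) -> float:
    # Causal-style rule: box B's contents are fixed before you act,
    # so two-boxing always adds SMALL on top of whatever is in B.
    b_expected = p * BIG  # a fixed expectation; your action can't change it
    return b_expected + (SMALL if action == "two-box" else 0)

def choose(rule: Callable[[str, float], float], p: float) -> str:
    return max(("one-box", "two-box"), key=lambda a: rule(a, p))

print(choose(edt_value, 0.99))  # -> one-box
print(choose(cdt_value, 0.99))  # -> two-box
```

Same game, same payoffs; only the rule plugged into choose() differs, which is exactly the variable that's easy not to notice.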

Awesome. =]

If I say, "This isn't a test of rationality itself, but a test of true free-thinking. All good rationalists must be free-thinkers, but not all free-thinkers are necessarily good rationalists", is that a good summary?

You know, I honestly don't even understand why this is a point of debate. One-boxing and taking box B (and being the kind of person who will predictably do that) seem so obviously like the rational strategy that it shouldn't even require explanation.

And not obvious in the same way most people think the Monty Hall problem (game show, three doors, goats behind two, sports car behind one, ya know?) seems 'obvious' at first.

In the case of the Monty Hall problem, you play with it, and the cracks start to show up, and you dig down to the surprising truth (there's a quick simulation of this sketched below).

In this case, I don't see how anyone could see any cracks in the first place.

Am I missing something here?
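As an aside, the "play with it" step for the Monty Hall problem is easy to do in code. Here's a minimal simulation of my own (the door numbering and trial count are arbitrary choices):

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    # Car behind a random door; player picks door 0; host opens a goat door.
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = 0
    host = next(d for d in doors if d != pick and d != car)  # host reveals a goat
    if switch:
        pick = next(d for d in doors if d != pick and d != host)
    return pick == car

trials = 100_000
stay = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
swap = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(f"stay wins ~{stay:.3f}, switch wins ~{swap:.3f}")  # ~0.333 vs ~0.667
```

Stay wins about a third of the time, switch about two thirds: the cracks show up fast once you run it.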

3 RobinZ
One factor you may not have considered: the obvious rational metastrategy is causal decision theory, and causal decision theory picks the two-box strategy.
0 wedrifid
It is the obvious rational strategy... which is why using a decision theory that doesn't get this wrong is important.
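To put numbers on the two replies above: a quick expected-value sweep, assuming a predictor with accuracy p (the payoffs and the sweep values are my own illustrative choices, not from the thread):

```python
BIG, SMALL = 1_000_000, 1_000  # box B and box A, in dollars

def one_box_ev(p: float) -> float:
    return p * BIG

def two_box_ev(p: float) -> float:
    return p * SMALL + (1 - p) * (BIG + SMALL)

for p in (0.50, 0.5005, 0.51, 0.99):
    print(f"p={p}: one-box ${one_box_ev(p):,.0f}, two-box ${two_box_ev(p):,.0f}")

# Crossover: p * BIG > p * SMALL + (1 - p) * (BIG + SMALL)
# simplifies to p > (BIG + SMALL) / (2 * BIG), i.e. p > 0.5005 here.
```

With these payoffs, one-boxing pulls ahead once the predictor is even slightly better than a coin flip; CDT still two-boxes at every p, because it treats the contents of box B as causally fixed, which is exactly the point of dispute.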

Also Victoria, BC. Home of the sasquatch and the Pacific tree octopus, and where the conservative party is named 'the Liberals'.

Oh thanks! Quick reply, there. I don't suppose you might know if/how I can enable email notification of replies to stuff I say here?

I think Brin kind of has his own, what was the word he used... "blog-munity", and he's pretty busy on top of that (or SHOULD be, anyway) with that novel that's supposed to be an update to "Earth".

I'm just starting to look through the "Sequences" here. A lot of it feels very familiar to me, as I became a major Richard Feynman fan at a relatively young age myself, but I am sure I can find plenty to…

1 Jack
No emails. But the replies show up in your inbox (which is that little envelope beneath your karma score which turns red when you get new mail).

Hi. Just got here yesterday by way of a link from the "Harry Potter and the Methods of Rationality" story, which I loved. I found the story by way of a link from David Brin's blog (I've been a fan of Brin for a long time now).

1 Jack
Frankly, I'm surprised Brin hasn't shown up here himself. (Welcome, btw!)