Damon Runyon clearly has not considered point spreads.

"Many beliefs about procedure are exactly the opposite-- take believing that truth can be taken from the Bible. That procedure is self-justifying and there is no way to dispute it from within the assumptions of the procedure."

That's my point about rationality: the way I think about it, it would catch its own contradictions. In essence, a rationalist would recognize it if rationalists didn't "win." As a result, committing yourself to rationality doesn't actually commit you to an outcome, as following a scripture perhaps would.

The bigger problem, I believe, is that most professed commitment to a procedure is superficial, and that instead most people simply bend the procedure to a preferred outcome. "The Devil may cite scripture for his purpose." The key, of course, is following the procedure accurately, and this is the community that'll keep you in line if you try to bend the procedure to your preferred conclusion.

I'm inclined to agree on your latter point: looking at the results of the survey, it seems like it would be easy to go from 'rationalist' as a procedural label to 'rationalist' as shorthand for 'atheist male computer programmer using Bayesian rules.' Of course, that's a common bias, and I think this community is as ready as any to fight it.

As for the former, I tried to address that by pointing out that rationalism means that we've already decided that updating priors is more effective than prayer. That said, I have a perhaps idealistic view of rationality, in that I think it's flexible enough to destroy itself, if necessary. I'd like to think that if we learned that our way of reasoning is inferior, we'd readily abandon it. A little too idealistic, perhaps.

That said, I find purely procedural labels less dangerous than substantive ones. You've alluded to the danger of conflating the 'rationalist' label with substantive ones like atheism, but that's a separate danger worth looking out for.

I think the danger here is far smaller than people are making it out to be. There is a major difference between the label "rationalist" and most other identities, as Paul Graham refers to them. The difference is that "rationalist" is a procedural label, while most identities are at least partially substantive, using procedural/substantive in the sense the legal system does.

"Rationalist," which I agree is an inevitable shorthand that emerges when the topic of overcoming bias is discussed frequently, is exclusively a procedural label: such a person is expected to make decisions and seek truth using a certain process. This process includes Bayesian updating of priors based on evidence, etc. However, such a label doesn't commit the rationalist to any particular conclusion ex ante: the rationalist doesn't have to be atheist or theist, or accept any other fact as true and virtually unassailable. He's merely committed to the process of arriving at conclusions.

Other identities are largely substantive. They commit the bearer to certain conclusions about the state of the world. A Christian believes in a god with certain attributes and a certain history of the world. A Communist believes that a certain government system is better than all others. These identities are dangerous: once they commit you to a conclusion, you're unlikely to challenge it with evidence to ensure it is in fact the best one. That's the kind of identity Paul Graham is warning against.

Of course, these labels have procedural components: a Christian would solve a moral dilemma using the Bible; a Communist would solve an economic problem using communist theory. Similarly, rationalism substantively means you've arrived at the conclusion that you're biased and can't trust your gut or your brain the way most people do, but that's the extent of your substantive assumptions.

Since rationalism is a procedural identity rather than a substantive one, I see few of the dangers of using the term "rationalist" freely here.

Exposing yourself to any judgments, period, is risky. The OB crowd is perhaps the best commenting community I've come across: they read previous comments and engage the arguments made there. How many other bloggers are like Robin Hanson and consistently read and reply to comments? As a result, any comment is bound to be read, and often responded to, by others. There may not have been a point value attached, but judgments were made.

Agreed. I have trouble accepting this as a true irrationality. It strikes me as merely a preference. You lose time you could spend listening to song A because of your desire to have the same play count for song B, but that's because you prefer the world where play counts are equal to the world where they're unequal but you hear a specific song more. Is that really an irrational preference?

I also agree with VN's disclaimer: the time spent [wasted?] on equalizing play counts could probably be used for something else. But at what point does the preference for a certain aesthetic outcome become irrational? What about someone who prefers a blue shirt to a red one? What about someone who can't enjoy a television show because the living room is messy? Someone who can't enjoy a party because there's an odd number of people attending? Someone who insists on eating the same lunch every day? Some of these are probably indicators of OCD, but those are really just extreme points on a spectrum of aesthetic and similar preferences. At what point do preferences become irrational?

I have to echo orthonormal: information, if processed without bias [availability bias, for example], should improve our decisions, and getting information is not always easy. I don't see how this raises any questions about the rational process, or, as you say, a principled fashion.

"But by what principled fashion should you choose not to eat the fugu?"

This seems like a situation where the simplest expected value calculation would give you the 'right' answer. In this case, the expected value of eating the oysters is 1, while the expected value of eating the fugu is that of eating an unknown dish, which you'd probably base on your prior experiences with unknown dishes offered for sale in restaurants of that type. [I assume you'd expect lower utility in some places than others.] In this case, that calculation would get you killed, but that is not a failure of rationality.
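
To make that concrete, here's a minimal sketch of the calculation in Python. All of the utilities and probabilities are hypothetical stand-ins I've chosen for illustration; nothing here comes from the original example beyond the oyster utility of 1.

    # A minimal sketch of the expected value comparison. All numbers are
    # hypothetical stand-ins chosen for illustration.
    oysters_ev = 1.0  # known dish: utility given as 1 in the example

    # Assumed prior over unknown dishes at a restaurant of this type:
    # almost always a decent meal, very rarely a catastrophe.
    fugu_outcomes = [
        (0.999, 1.5),    # a typical unknown dish: somewhat better than oysters
        (0.001, -100.0), # rare disaster, e.g. badly prepared fugu
    ]
    fugu_ev = sum(p * u for p, u in fugu_outcomes)  # 0.999*1.5 - 0.1 ~= 1.40

    # Under this prior the fugu has the higher expected value, so the
    # calculation says to eat it; if this particular dish happens to be
    # the fatal one, that's bad luck, not a failure of the procedure.
    print(f"EV(oysters)={oysters_ev:.2f}, EV(fugu)={fugu_ev:.2f}")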

In a situation without the constraints of the example, research on fugu would obviously provide the info you need: a web-enabled phone and Google would tell you everything required to make the right call.

Humans actually solve this type of problem all the time, though the stakes are usually smaller. A driver on a road trip may settle for low-quality food [a fast food chain, representing the oysters] for the higher certainty of his expected value [convenience, uniform quality]. It's simply the best use of available information.

Chicago, IL.

Eliezer Yudkowsky does not sleep. He waits.

You're probably right, but this is a way around the question. In law school, they'd accuse you of fighting the hypothetical. You're in the least convenient possible world here: you're wide awake, 100%, for the entire relevant duration.

http://lesswrong.com/lw/2k/the_least_convenient_possible_world/
