In several places in the sequences, Eliezer writes condescendingly about "Traditional Rationality". The impression given is that Traditional Rationality was OK in its day, but that today we have better varieties of rationality available.
That is fine, except that it is unclear to me just what the traditional kind of rationality included, and just what it failed to include. In one essay, Eliezer seems to say that Traditional Rationality was too concerned with process when it should have been concerned with winning. In other passages, the missing ingredient in the traditional version seems to be Bayesianism (à la Jaynes). Or sometimes, the missing ingredient seems to be an understanding of biases (à la Kahneman and Tversky).
In this essay, Eliezer laments that being a traditional rationalist was not enough to keep him from devising a Mysterious Answer to a mysterious question. That puzzles me because I would have thought that traditional ideas from Peirce, Popper, and Korzybski would have been sufficient to avoid that error. So apparently I fail to understand either what a Mysterious Answer is or just how weak the traditional form of rationality actually is.
Can anyone help to clarify this? By "Traditional Rationality", does Eliezer mean to designate a particular collection of ideas, or does he use it more loosely to indicate any thinking that is not quite up to his level?
Um, I think you are possibly taking a poetic remark too seriously. If they had said "uncertainty is part of everyday life" would you have objected?
Heuristics are not necessarily genetic; they can be learned. I see nothing in their paper that implies these heuristics were genetic, and having read a fair amount of what both T & K wrote, I saw no indication that they thought any of these heuristics were genetic.
OK, this confuses me. Let's say that humans use genetic heuristics: how is that a low opinion? Moreover, how does that prevent us from being universal knowledge creators? You also seem to be conflating whether something is a good epistemology with whether a given entity actually uses it. Whether humans use induction and whether induction is a good epistemological approach are distinct questions.
This seems close to Christian apologists arguing that if humans don't have souls then everything is meaningless. Do you see the connection here? Just because humans have flaws doesn't make humans terrible things. We've split the atom. We've gone to the Moon. We understand the subtle behavior of the prime numbers. We can look back billions of years in time to the birth of the universe. How does acknowledging that we have flaws amount to a low opinion of humans?
I'm curious: when a psychologist finds a new form of optical illusion, do you discount it in the same way? Does caring about optical illusions, or looking for them, constitute a low opinion of humans?
That's a tortured reading of the sentence. The point is that they wanted to see whether humans commit conjunction errors, so they constructed situations where, if humans were using the representativeness heuristic or similar systems, the errors would be likely to show up. From the perspective of Popper in LScD, this is a good experimental protocol: if the errors didn't happen, it would be a serious blow to the idea that humans use a representativeness heuristic to estimate likelihood. They aren't admitting "bias"; their point is that since their experimental constructions were designed to maximize the opportunity for a representativeness heuristic to show up, they aren't a good estimate of how likely these errors are to occur in the wild.
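(For concreteness, the conjunction rule being tested here is just the elementary fact that a conjunction can never be more probable than either of its conjuncts:

    P(A & B) <= P(A)   and   P(A & B) <= P(B)

In their well-known "Linda" item, for instance, subjects who rate "Linda is a bank teller and is active in the feminist movement" as more probable than "Linda is a bank teller" are violating this inequality, and that violation is what counts as a conjunction error.)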
So it seems to me that you are essentially saying that you disagree with their experimental evidence on philosophical grounds. If your evidence disagrees with your philosophy, the solution is not to deny the evidence.
In some contexts, yes. For example, foreign policy experts will sometimes make probability estimates for economists or financial institutions to work with. But let's say they never do; how is that at all relevant to the questions at hand? Do you really think that the idea of estimating a probability is so strange and technical that highly educated individuals shouldn't be expected to understand what is being asked of them? And yet you think that Tversky had a low opinion of humans? Moreover, even if subjects did have trouble understanding what was meant, do you expect that misunderstanding would, by sheer coincidence, produce exactly the pattern of apparent bias that the conjunction fallacy predicts?