If not rationality, then what?
LW presents epistemic and instrumental rationality as practical advice for humans, based closely on the mathematical model of Bayesian probability. This advice can be summed up in two maxims: Obtain a better model of the world by updating on the evidence of things unpredicted by your current model. Succeed at your given goals by using your (constantly updating) model to predict which actions will maximize success.
Or, alternatively: Having correct beliefs is useful for humans achieving goals in the world, because correct beliefs enable correct predictions, which enable goal-accomplishing actions. The way to have correct beliefs is to update them when their predictions fail.
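In symbols, the updating maxim is just Bayes' rule. For a hypothesis H and newly observed evidence E,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

so a hypothesis that assigned low probability to what actually happened loses credence relative to hypotheses that predicted it. (This is only a minimal restatement of the standard rule, as a reference point for what "updating on the evidence" means here.)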
Stating it this baldly gets me wondering about alternatives. What if we deny each of these premises and see what we get? Other than Bayes' world, which other worlds might we be living in?
Suppose that making correct predictions does not enable goal-accomplishing actions. We might call this Cassandra's world, the world of tragedy — in which those people who know best what the future will bring are most incapable of doing anything about it. In the world of heroic myth, it is not oracles but rather heroes and villains who create change in the world. Heroes and villains are people who possess great virtue or vice — strong-willed tendencies to face difficult challenges, or to do what would repulse others. Heroes and villains defy oracles, and come to their predicted triumphs or fates not through prediction, but in spite of it.
Suppose that the path to success is not to update your model of the world, so much as to update your model of your self and goals. The facts of the world are relatively close to our priors, but our goals are not known to us initially, and are in fact very difficult to discover. We might consider this to be Buddha's world, the world of contemplation — in which understanding the nature of the self is substantially more important to success than understanding the external world. When we choose actions that cause bad effects, we aren't so much acting on faulty beliefs about the world as pursuing goals that are illusory or empty of satisfaction.
There are other models as well that could be extrapolated by denying other premises (explicit or implicit) of Bayes' world. Each of these models should relate prediction, action, and goals in different ways. We might imagine Lovecraft's world, Qoheleth's world, or Nietzsche's world.
Each of these models of the world — Bayes' world, Cassandra's world, Buddha's world, and the others — does predict different outcomes. If we start out thinking that we are in Bayes' world, what evidence might suggest that we are in Cassandra's or Buddha's world?
Edited lightly — In the first couple of paragraphs, I've clarified that I'm talking about epistemic and instrumental rationality as advice for humans, not about whether we live in a world where Bayesian math works. The latter seems obviously true.
Other than Bayes' world, which other worlds might we be living in?
A world with causes and effects. (Bayes' world as described is Cassandra's world, for the usual reason that "prediction" is not what you want when choosing actions.)
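One compact way to put that usual reason (my gloss, in Pearl's do-notation, not anything from the original comment): for an action A and outcome Y, the two quantities

$$P(Y \mid A = a) \quad \text{vs.} \quad P(Y \mid do(A = a))$$

can come apart whenever some common cause influences both the action and the outcome. A purely predictive model hands you the first; choosing an action well requires something like the second.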
[ There was something else here, having to do with how it is hard to use causal info in a Bayesian way, but I deleted it for now in order to think about it more. You can ask me about it if interested. The moral is, it's not so easy to just be Bayesian with arbitrary types of information. ]