I have a problem: I'm not sure what this community is about.
To illustrate, recently I've been experimenting with a number of tricks to overcome my akrasia. This morning, a succession of thoughts struck me:
- The readers of Less Wrong have been interested in the subject of akrasia, so maybe I should make a top-level post about my experiences once I see what works and what doesn't.
- But wait, that would be straying into the territory of traditional self-help, and I'm sure there are already plenty of blogs and communities for that. It isn't about rationality anymore.
- But then, we have already discussed akrasia several times; isn't this then on-topic as well?
- (Even if this were topical, wouldn't a simple recounting of "what worked for me" be too Kaj-optimized to work for very many others?)
Part of the problem seems to stem from the fact that we have a two-fold definition of rationality:
- Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible. This correspondence is commonly termed "truth" or "accuracy", and we're happy to call it that.
- Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".
If this community were only about epistemic rationality, there would be no problem. Akrasia isn't related to epistemic rationality, and neither are most self-help tricks. Case closed.
However, by including instrumental rationality, we have expanded the sphere of potential topics to cover practically anything. Productivity tips, seduction techniques, the best ways to groom your physical appearance, the most effective ways to relax (and, by extension, lists of the best movies / books / video games of all time), how to most effectively combine different rebate coupons and where to get them... all of these can be useful in achieving your values.
Expanding our focus isn't necessarily a bad thing, by itself. It will allow us to attract a wider audience, and some of the people who get drawn here might afterwards also become interested in e-rationality. And many of us would probably find the new kinds of discussions useful in our personal lives. The problem, of course, is that epistemic rationality is a relatively narrow subset of instrumental rationality - if we allow all instrumental rationality topics, we'll be drowned in them, and might soon lose our original focus entirely.
There are several different approaches as far as I can see (as well as others I can't see):
- Treat both kinds of discussion as being fully on the same footing - if i-rationality discussions overwhelm e-rationality ones, that's just how it goes.
- Concentrate purely on e-rationality, and ban i-rationality discussions entirely.
- Allow i-rationality discussions, but don't promote top-level posts on the topic.
- Allow i-rationality discussions, but require stricter criteria for promoting top-level posts on the topic.
- Allow i-rationality discussions, but only in the comments of dedicated monthly posts, resembling the "open topic" and "rationality quotes" series we have now.
- Allow i-rationality discussions, but try to somehow define the term so that silly things like listing the best video games of all time get excluded.
- Screw trying to make an official policy on this, let's just see what top-level posts people make and what gets upvoted.
- Some combination of the above.
I honestly don't know which approach would be the best. Do any of you?
(If this post is too long, read only the last paragraph.)
"Evidence that regards statements." I guess the "regarding statements" bit was redundant. Anyway, let me try to give some examples.
First, let me postulate a guy named Delta. Delta is an extremely rational robot who, given the evidence, always comes up with the best possible conclusion.
Andy the Apathetic is presented with a court case. Before he ever looks at the case, he decides that the probability the defendant is guilty is 50%. In fact, he never looks at the case; he tosses it aside and gives that 50% as his final judgement. Andy is rational-neutral, as he discarded evidence regardless of its direction; his probability is useless, but if I told Delta how Andy works and Andy's final judgement, Delta would agree with it.
Barney the Biased is presented with the same court case. Before he ever looks at the case, he decides that the probability that the defendant is guilty is 50%. Looking through the evidence, he decides to discard everything suggesting that the defendant is innocent; he concludes that the defendant has a 99.99% chance of being guilty and gives that as his final judgement. Barney is not rational-neutral, as he discarded evidence with regard to its direction; his probability is almost useless (but not as useless as Andy's), and if I told Delta how Barney works and Barney's final judgement, Delta might give a probability of only 45%.
Finally, Charlie the Careful is presented with the same court case. Before he ever looks at the case, he decides that the probability that the defendant is guilty is 50%. Looking through the evidence, he takes absolutely everything into account, running the numbers and keeping Bayes' law between his eyes at all times; eventually, after running a complete analysis, he decides that the probability that the defendant is guilty is 23.14159265%. Charlie is rational-neutral, as he discarded evidence regardless of its direction (in fact, he discarded no evidence); if I told Delta how Charlie works and Charlie's final judgement, Delta would agree with it.
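To make the contrast concrete, here is a minimal Python sketch of the odds-based Bayesian update Charlie is running, next to Barney's version of it. The likelihood ratios are numbers I've invented for illustration, not anything from an actual case:

```python
def posterior_guilt(prior_prob, likelihood_ratios):
    """P(guilty | evidence): update the prior odds with each likelihood ratio
    P(evidence | guilty) / P(evidence | innocent), then convert back."""
    odds = prior_prob / (1 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# One incriminating piece of evidence (LR > 1), two exculpatory ones (LR < 1).
evidence = [4.0, 0.5, 0.3]

print(posterior_guilt(0.5, evidence))                           # 0.375: Charlie keeps everything
print(posterior_guilt(0.5, [lr for lr in evidence if lr > 1]))  # 0.8: Barney drops the LR < 1 pieces
```

Keeping the two exculpatory pieces drags the posterior below the 50% prior; throwing them away, as Barney does, inflates it to 80%.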
So, here's another definition of rational neutrality I came up with while writing this: you are rational-neutral if, given only your source code, it's impossible to come up with a function that takes one of your probability estimates and returns a better probability estimate.
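One way to test that definition is with a simulation: generate many cases, record what each judge reports, and check whether some function of the report beats the report itself. The whole evidence model below (five conditionally independent pieces of evidence, each incriminating with probability 0.75 for a guilty defendant and 0.25 for an innocent one) is invented for illustration:

```python
import random

random.seed(0)

# Toy evidence model: with these probabilities, the true likelihood ratios
# of an incriminating and an exculpatory piece are 3 and 1/3 respectively.
P_INCR_IF_GUILTY = 0.75
P_INCR_IF_INNOCENT = 0.25
LR_INCR = P_INCR_IF_GUILTY / P_INCR_IF_INNOCENT              # 3.0
LR_EXC = (1 - P_INCR_IF_GUILTY) / (1 - P_INCR_IF_INNOCENT)   # 1/3

def simulate_case(n_pieces=5):
    guilty = random.random() < 0.5  # 50% prior
    p = P_INCR_IF_GUILTY if guilty else P_INCR_IF_INNOCENT
    return guilty, [random.random() < p for _ in range(n_pieces)]

def charlie(evidence):
    odds = 1.0  # even prior odds
    for incriminating in evidence:
        odds *= LR_INCR if incriminating else LR_EXC
    return odds / (1 + odds)

def barney(evidence):
    odds = 1.0
    for incriminating in evidence:
        if incriminating:  # evidence pointing toward innocence is discarded
            odds *= LR_INCR
    return odds / (1 + odds)

def calibration(judge, n=100_000, bins=10):
    """Actual guilt frequency per bin of reported probability; that
    frequency is also the best possible adjustment of reports in the bin."""
    hits, counts = [0] * bins, [0] * bins
    for _ in range(n):
        guilty, evidence = simulate_case()
        b = min(int(judge(evidence) * bins), bins - 1)
        counts[b] += 1
        hits[b] += int(guilty)
    return [(b / bins, round(hits[b] / counts[b], 3))
            for b in range(bins) if counts[b]]

print("Charlie:", calibration(charlie))  # reports track the frequencies
print("Barney: ", calibration(barney))   # reports run far above them
```

On a run like this, Charlie's reported probabilities match the actual guilt frequencies bin for bin, so no function of his output improves on it; Barney's reports run far above the frequencies, so mapping each report to its bin's frequency is exactly the kind of improving function the definition rules out. Note, though, that the corrected values stay in the same order as Barney's originals, which is the ordinal point below.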
Upon thinking about that second definition of rational neutrality, I find myself thinking that it can't be right. It's identical to calibration. And even an agent that's been made rational-neutral by applying the best possible probability-estimate adjustment function will still return probabilities in the same order: Barney the Biased, even after adjustment, will return higher probabilities for statements he is biased toward than for statements he is biased against.
I would have said this:
...