I have a problem: I'm not sure what this community is about.
To illustrate, recently I've been experimenting with a number of tricks to overcome my akrasia. This morning, a succession of thoughts struck me:
- The readers of Less Wrong have been interested in the subject of akrasia, so maybe I should write a top-level post about my experiences once I see what works and what doesn't.
- But wait, that would be straying into the territory of traditional self-help, and I'm sure there are already plenty of blogs and communities for that. It isn't about rationality anymore.
- But then, we have already discussed akrasia several times; isn't this on-topic after all?
- (Even if this were topical, wouldn't a simple account of "what worked for me" be too Kaj-optimized to work for very many others?)
Part of the problem seems to stem from the fact that we have a two-fold definition of rationality:
- Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible. This correspondence is commonly termed "truth" or "accuracy", and we're happy to call it that.
- Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".
If this community was only about epistemic rationality, there would be no problem. Akrasia isn't related to epistemic rationality, and neither are most self-help tricks. Case closed.
However, by including instrumental rationality, we have expanded the sphere of potential topics to cover practically anything. Productivity tips, seduction techniques, the best ways of grooming your physical appearance, the most effective ways to relax (and by extension, listing the best movies / books / video games of all time), how you can most effectively combine different rebate coupons and where you can get them from... all of those can be useful in achieving your values.
Followup to: Newcomb's Problem and Regret of Rationality
"Rationalists should win," I said, and I may have to stop saying it, for it seems to convey something other than what I meant by it.
Where did the phrase come from originally? From considering such cases as Newcomb's Problem: The superbeing Omega sets forth before you two boxes, a transparent box A containing $1000 (or the equivalent in material wealth), and an opaque box B that contains either $1,000,000 or nothing. Omega tells you that It has already put $1M in box B if and only if It predicts that you will take only box B, leaving box A behind. Omega has played this game many times before, and has been right 99 times out of 100. Do you take both boxes, or only box B?
A common position - in fact, the mainstream/dominant position in modern philosophy and decision theory - is that the only reasonable course is to take both boxes; Omega has already made Its decision and gone, and so your action cannot affect the contents of the box in any way (they argue). Now, it so happens that certain types of unreasonable individuals are rewarded by Omega - who moves even before they make their decisions - but this in no way changes the conclusion that the only reasonable course is to take both boxes, since taking both boxes makes you $1000 richer regardless of the unchanging and unchangeable contents of box B.
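The stakes in this dispute can be made concrete with a quick expected-value calculation. This is only a sketch, and it rests on an assumption the post doesn't spell out: that Omega's 99-out-of-100 accuracy applies symmetrically to one-boxers and two-boxers alike.

```python
# Expected-value sketch of Newcomb's Problem.
# Assumption (not stated in the post): Omega's 99% accuracy
# applies symmetrically to one-boxers and two-boxers.

ACCURACY = 0.99
BOX_A = 1_000        # transparent box A, always contains $1000
BOX_B = 1_000_000    # opaque box B, filled iff Omega predicts one-boxing

# One-boxer: 99% of the time Omega predicted this and filled box B.
ev_one_box = ACCURACY * BOX_B + (1 - ACCURACY) * 0

# Two-boxer: 99% of the time Omega predicted this and left box B empty.
ev_two_box = ACCURACY * BOX_A + (1 - ACCURACY) * (BOX_A + BOX_B)

print(f"EV(one-box): ${ev_one_box:,.0f}")   # $990,000
print(f"EV(two-box): ${ev_two_box:,.0f}")   # $11,000
```

Under this assumption, the agents who one-box walk away with $990,000 in expectation and the two-boxers with $11,000, which is exactly the asymmetry the phrase "rationalists should win" was pointing at.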
And this is the sort of thinking that I intended to reject by saying, "Rationalists should win!"
Said Miyamoto Musashi: "The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy's cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him."
Said I: "If you fail to achieve a correct answer, it is futile to protest that you acted with propriety."
This is the distinction I had hoped to convey by saying, "Rationalists should win!"
Followup to: Just about every post in February, and some in March
Some reader is bound to declare that a better title for this post would be "37 Ways That You Can Use Words Unwisely", or "37 Ways That Suboptimal Use Of Categories Can Have Negative Side Effects On Your Cognition".
But one of the primary lessons of this gigantic list is that saying "There's no way my choice of X can be 'wrong'" is nearly always an error in practice, whatever the theory. You can always be wrong. Even when it's theoretically impossible to be wrong, you can still be wrong. There is never a Get-Out-Of-Jail-Free card for anything you do. That's life.
Besides, I can define the word "wrong" to mean anything I like - it's not like a word can be wrong.
Personally, I think it quite justified to use the word "wrong" when:
- A word fails to connect to reality in the first place. Is Socrates a framster? Yes or no? (The Parable of the Dagger.)
- Your argument, if it worked, could coerce reality to go a different way by choosing a different word definition. Socrates is a human, and humans, by definition, are mortal. So if you defined humans to not be mortal, would Socrates live forever? (The Parable of Hemlock.)
- You try to establish any sort of empirical proposition as being true "by definition". Socrates is a human, and humans, by definition, are mortal. So is it a logical truth if we empirically predict that Socrates should keel over if he drinks hemlock? It seems like there are logically possible, non-self-contradictory worlds where Socrates doesn't keel over - where he's immune to hemlock by a quirk of biochemistry, say. Logical truths are true in all possible worlds, and so never tell you which possible world you live in - and anything you can establish "by definition" is a logical truth. (The Parable of Hemlock.)
- You unconsciously slap the conventional label on something, without actually using the verbal definition you just gave. You know perfectly well that Bob is "human", even though, on your definition, you can never call Bob "human" without first observing him to be mortal. (The Parable of Hemlock.)
- The act of labeling something with a word disguises a challengeable inductive inference you are making. If the last 11 egg-shaped objects drawn have been blue, and the last 8 cubes drawn have been red, it is a matter of induction to say this rule will hold in the future. But if you call the blue eggs "bleggs" and the red cubes "rubes", you may reach into the barrel, feel an egg shape, and think "Oh, a blegg." (Words as Hidden Inferences.)