Will Newsome has suggested that I repost my tweets to LessWrong. With some trepidation, and after going through my tweets and categorizing them, I picked the ones that seemed the most rationality-oriented. I held some in reserve to keep the post short; those could be posted later in a separate post or in the comments here. I'd be happy to expand on anything here that requires clarity.
Epistemology
- Test your hypothesis on simple cases.
- Forming your own opinion is no more necessary than building your own furniture.
- The map is not the territory.
- Thoughts about useless things are not necessarily useless thoughts.
- One of the successes of the Enlightenment is the distinction between beliefs and preferences.
- One of the failures of the Enlightenment is the failure to distinguish whether this distinction is a belief or a preference.
- Not all entities comply with attempts to reason formally about them. For instance, a human who feels insulted may bite you.
Group Epistemology
- The best people enter fields that accurately measure their quality. Fields that measure quality poorly attract low-quality people.
- It is not unvirtuous to say that a set is nonempty without having any members of the set in mind.
- If one person makes multiple claims, this introduces a positive correlation between the claims.
- We seek a model of reality that is accurate even at the expense of flattery.
- It is no kindness to call someone a rationalist when they are not.
- Aumann-inspired agreement practices may be cargo cult Bayesianism.
- Godwin's Law is not really one of the rules of inference.
- Science before the mid-20th century was too small to look like a target.
- If scholars fail to notice the common sources of their inductive biases, bias will accumulate when they talk to each other.
- Some fields, e.g. behaviorism, address this problem by identifying sources of inductive bias and forbidding their use.
- Some fields avoid the accumulation of bias by uncritically accepting the biases of the founder. Adherents reason from there.
- If thinking about interesting things is addictive, then there's a pressure to ignore the existence of interesting things.
- Growth in a scientific field brings with it insularity, because internal progress measures scale faster than external measures.
Learning
- It's really worthwhile to set up a good study environment. Table, chair, quiet, no computers.
- In emergencies, it may be necessary for others to forcibly accelerate your learning.
- There's a difference between learning a skill and learning a skill while remaining human. You need to decide which you want.
- It is better to hold the sword loosely than tightly. This principle also applies to the mind.
- Skills are packaged into disciplines because of correlated supply and correlated demand.
- Have a high discount rate for learning and a low discount rate for knowing.
- "What would so-and-so do?" means "try using some of so-and-so's heuristics that you don't endorse in general."
- Train hard and improve your skills, or stop training and forget your skills. Training just enough to maintain your level is the worst idea.
- Gaining knowledge is almost always good, but one must be wary of learning skills.
Instrumental Rationality
- As soon as you notice a pattern in your work, automate it. I sped up my book-writing with code I should've written weeks ago.
- Your past and future decisions are part of your environment.
- Optimization by proxy is worse than optimization for your true goal, but usually better than no optimization.
- Some tasks are costly to resume because of mental mode switching. Maximize the cost of exiting these tasks.
- Other tasks are easy to resume. Minimize external costs of resuming these tasks, e.g. by leaving software running.
- First eat the low-hanging fruit. Then eat all of the fruit. Then eat the tree.
- Who are the masters of forgetting? Can we learn to forget quickly and deliberately? Can we just forget our vices?
- What sorts of cultures will endorse causal decision theory?
- Big agents can be more coherent than small agents, because they have more resources to spend on coherence.
I like that you're being terse. Many of these are puzzles - I need to discover a way to interpret them that allows me to like them.
Of these, I'm unsure:
Huh? It's clearly a belief (part of a map). That seems easily grasped once the question is posed. The early Enlightenment wasn't meta enough for your liking, for not posing it?
You're implying that the insularity is a result of researchers gravitating toward areas where they're rewarded by other researchers more so than something that directly helps outsiders? I don't understand "scale faster".
In this metaphor, are learning and knowing investments that will return future cash? Why should there be different discount rates? Do you think the utility of learning something is more uncertain than the utility of knowing something? I don't understand how to act on this advice at all. Are you saying I should try hard not to forget what I already know, instead of learning new things?
What's meant is clear; the reason for it isn't.
Do you mean precommitment (burning the ships), or do you mean just to be aware of the cost of such a decision?
No. We can do neither. I can direct my attention slightly, and then maybe I'll forget.
Endorse? You mean, use it somehow?
By learning, I mean gaining knowledge. Humans can derive enjoyment both from having stuff and from gaining stuff, and knowledge is not an exception.
It's true that a dynamically consistent agent can't have different discount rates for different terminal values, but bounded rationalists might talk about instrumental values using the same sort of math they use for terminal values. In that context it makes sense to use different discount rates for different sorts of good.
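The two-rate idea can be made concrete with a toy calculation. Everything here is hypothetical for illustration: the payoff streams, the specific rates, and the framing of "learning" as a front-loaded stream versus "knowing" as a steady one are my assumptions, not anything from the exchange above.

```python
# Sketch: scoring two instrumental goods under different exponential
# discount rates, as a bounded rationalist might. All numbers are made up.

def discounted_value(payoffs, rate):
    """Present value of a payoff stream under exponential discounting.

    payoffs[t] is the payoff received t periods from now; each period's
    payoff is multiplied by (1 - rate) ** t.
    """
    return sum(p * (1 - rate) ** t for t, p in enumerate(payoffs))

# "Learning" pays off mostly up front, so apply a high discount rate:
learning_payoffs = [10, 5, 2, 1]
v_learning = discounted_value(learning_payoffs, rate=0.5)

# "Knowing" pays off steadily for a long time, so apply a low discount rate:
knowing_payoffs = [3] * 8
v_knowing = discounted_value(knowing_payoffs, rate=0.05)

print(v_learning)
print(v_knowing)
```

The point of the sketch is only that the same present-value machinery can be run twice with different rates, so a stream that looks worse period-by-period ("knowing") can still dominate once each good is discounted at its own rate.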