Wow, this was a lot harder than I expected. I thought about my favorite technique for a solid fifteen minutes before deciding on this, mostly because I realized that I don't have explicit techniques that I usually use. I definitely use certain things like Immunity to Change Mapping, the Ideological Turing Test, and Neutral Hours. However, I think (78%) that the majority of the benefit I've gained from rationality lies in a single hammer: the ability to update from evidence and think about the best option in a scenario. This mostly happens on the 5-second level, although I will quite frequently (multiple times a day) stop to viscerally update my priors.
It's a rather unsettling realization to find that most of my success in making more rational decisions stems from this one place. I'm not sure whether to be distraught or happy. On one hand, this could be an indication that I'm very poorly diversified in my skills, or don't have enough of them on the reflex level yet. On the other hand, it could indicate the value of the kind of approach A Thousand Heads in a Row suggests: optimizing many small decisions so that they chain together to form systematic and repeatable progress.
The list below is there for the sake of completeness and in hopes that it encourages other people to try the exercise.
Technique: Bayesian Expected Value Calculation (a rough worked sketch follows the list below)
1.) Combining it with Murphyjitsu to determine the leverage points in any plans I make
2.) Sifting through a class syllabus and schedule to determine the best use of my time towards the desired grade
3.) Realizing that my urge to read something else and not do the exercise was a policy statement about what I expected myself to do in the future. Doing the exercise makes me more likely to do other exercises in the future and embrace the singularity mindset.
4.) Trying new foods or supplements in response to the value of information
5.) I should get a flu shot
6.) Trying to determine the best way to lead a conversation or extract information from someone
7.) Realizing I might need diversified thinking skills
8.) Realizing I should explicitly model my rationality techniques
9.) Checking the value and cost of my current action vs my ideal action at regular intervals.
10.) Realizing there is probably a good chance this technique can be upgraded by some means
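To make this concrete, here is a minimal sketch of the kind of calculation I mean, using the flu-shot example from item 5. Every number below is invented purely for illustration.

```python
# Minimal sketch of a Bayesian expected value calculation (illustrative numbers only).

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Update a probability on a piece of evidence via Bayes' rule."""
    numerator = prior * p_evidence_given_h
    return numerator / (numerator + (1 - prior) * p_evidence_given_not_h)

# Hypothetical: prior chance I catch the flu this season, updated on
# "several coworkers are already sick" as evidence.
p_flu = posterior(prior=0.10, p_evidence_given_h=0.8, p_evidence_given_not_h=0.4)

# Hypothetical costs, all expressed in hours of lost productive time.
cost_of_flu = 40          # hours lost if I get the flu
cost_of_shot = 1          # hours spent getting the shot
shot_effectiveness = 0.5  # assumed fraction of flu risk the shot removes

ev_no_shot = p_flu * cost_of_flu
ev_shot = cost_of_shot + p_flu * (1 - shot_effectiveness) * cost_of_flu

print(f"P(flu) after update: {p_flu:.2f}")
print(f"Expected cost without shot: {ev_no_shot:.1f} hours")
print(f"Expected cost with shot:    {ev_shot:.1f} hours")
```

With these made-up numbers the shot wins on expected cost, which is all item 5 is gesturing at.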
One of my favorite optimization techniques is currency conversions - valuing a bunch of different things with a single number that can be used to make trade-offs. This technique has pitfalls, but it does a lot better than the naive human approach of just comparing alternatives based on a single variable and ignoring all the others.
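As a rough illustration of what that single number looks like in practice, here is a minimal sketch with invented values; the hourly rate and the "fun points" are assumptions for the example, not a claim about how anyone should price their time.

```python
# Sketch of a currency conversion: score each option in a single unit (dollars here)
# so that time, money, and enjoyment can be traded off directly. Numbers are invented.

HOURLY_VALUE = 30    # assumed dollar value of an hour of my time
FUN_UNIT_VALUE = 10  # assumed dollar value of one arbitrary "fun point"

options = {
    "cook dinner":   {"hours": 1.5, "dollars": 8,  "fun": 1},
    "order takeout": {"hours": 0.2, "dollars": 25, "fun": 2},
}

def total_cost(option):
    """Convert every consideration into dollars and sum; lower is better."""
    return (option["hours"] * HOURLY_VALUE
            + option["dollars"]
            - option["fun"] * FUN_UNIT_VALUE)

for name, option in options.items():
    print(f"{name}: {total_cost(option):.0f} equivalent dollars")
```

The point is only that once everything is in the same unit, the comparison collapses to a single subtraction.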
Here are 12 (!) applications of currency conversions, roughly ordered from most to least obvious:
Following Swerve's example above, I've also decided to try out your exercise and post my results. My favorite instrumental rationality technique is Oliver Habryka's Fermi Modeling. The way I usually explain it (with profuse apologies to Habryka for possibly butchering the technique) is that you quickly generate models of the problem using various frameworks and from various perspectives, then weight the conclusions of those models based on how closely they seem to conform to reality. (@habryka, please correct me if this is not what Fermi Modeling is.)
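As a rough sketch of how I personally operationalize that one-sentence description (not Habryka's actual procedure), the weighting step might look something like this, with all names and numbers invented:

```python
# Generate an estimate of the same quantity from several crude models, then
# weight each model's answer by how much you trust it to track reality.
# Example quantity: how many hours a project will take. All values invented.

models = [
    # (model name, estimate in hours, weight = subjective fit to reality)
    ("inside view: sum of subtasks",          20, 0.2),
    ("outside view: last 3 similar projects", 45, 0.5),
    ("expert guess: what a colleague said",   35, 0.3),
]

weighted_estimate = (sum(est * w for _, est, w in models)
                     / sum(w for _, _, w in models))
print(f"Weighted estimate: {weighted_estimate:.0f} hours")
```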
For your exercise, I'll try to come up with variants/applications of Fermi modeling that are useful in other contexts.
I guess Fermi modeling isn't so much a single hammer as the "hammer" of the nail mindset. So some of the applications or variants I generated above seem to be ways of applying more hammers to a fixed nail, instead of applying the same fixed hammer to different nails.
Awesome! Glad to have made it into your top techniques! The explanation seems as good as a one-sentence explanation of Fermi Modeling gets.
The simple explanation makes sense. However, I'm sure there's a lot more to this than is conveyed in one sentence. I'd really like to get my hands on a more in-depth explanation if possible. A Google search of the term "Fermi Modeling", as well as a search of your LW post history, has not yielded anything. Is there a post somewhere I can read?
I second the interest in #10. Benjamin Franklin famously employed this strategy with philosophers and rhetoricians, by writing essays in the famous person's style and then comparing them with the source material to see how successful he was.
Related to #10, I've found that building up understanding of complex topics (e.g., physics, mathematics, machine learning, etc.) is unusually enhanced by following the history of their development. Especially in mathematical topics, where the drive for elegant proofs leads to presentations that strip away the messy history of all the cognitive efforts that went into solving the problem in the first place.
I suppose this is really just an unconventional application of the general principle of learning from history.
I think a lot of the intuitions and thought processes that let you come up with new discoveries in mathematics and machine learning aren't generally taught in classes or covered in textbooks. People are also quite bad at conveying their intuitions behind topics directly when asked to in Q&As and speeches. I think that, at least in machine learning, hanging out with good ML researchers teaches me a lot about how to think about problems, in a way that I haven't been able to get even after reading their course notes and listening to their presentations. Similarly, I suspect that autobiographies may help convey the experience of solving problems in a way that actually lets you learn the intuitions or thought processes used by the author.
Some of those are definitely stretching. =P
#10 is extremely thought-provoking, I wonder how much lost intuition is buried in "flavor of the month" scientific fields and approaches of history. Do you have examples of special features of Feynman's and Watson's (say) approaches?
Yeah, I agree on the stretching point.
The main distinguishing thing about Feynman, at least from reading Feynman's two autobiographies, seemed to be how irreverent he is. He doesn't do science because it's super important; he does the science he finds fun or interesting. He is constantly going on rants about the default way of looking at things (at least his inner monologue is) and ignoring authority, whether by blowing up at the science textbooks he was asked to read, ignoring how presidential committees traditionally functioned, or disagreeing with doctors. He goes to strip clubs because he likes interacting with pretty girls. It's really quite different from the rather stodgy utilitarian/outside mindset I tend to reference by default, and I think reading his autobiographies gave me a lot more of what Critch calls "Entitlement to believe".
When I adopt this "Feynman mindset" in my head, this feels like letting my inner child out. I feel like I can just go and look at things and form hypotheses and ask questions, irrespective of what other people think. I abandon the feeling that I need to do what is immediately important, and instead go look at what I find interesting and fun.
From Watson's autobiography, I mainly got a sense of how even great scientists are driven a lot by petty desires, such as the fear that someone else will beat them to a discovery, or how annoying their collaborators are. For example, it seemed that a major factor in Watson and Crick's drive to work on DNA was the fear that Linus Pauling would discover the true structure first. A lot of their failure to collaborate better with Rosalind Franklin was due to personal clashes with her. Of course, Watson does also display some irreverence toward authority; he held fast to his belief that their approach to finding the structure of DNA would work, even when multiple more senior scientists disagreed with him. But I think the main thing I got out of the book was a visceral appreciation for how important social situations are for motivating even important science.
When I adopt this "Watson mindset" in my head, I think about the social situation I'm in, and use that to motivate me. I call upon the irritation I feel when people are just acting a little too suboptimal, or that people are doing things for the wrong reasons. I see how absolutely easy many of the problems I'm working on are, and use my irritation at people having thus failed to solve them to push me to work harder. This probably isn't a very healthy mindset to have in the long term, and there are obvious problems with it, but it feels very effective to get me to push past schleps.
"Prefer a few large, systematic decisions to many small ones."
Favorite technique: think for 5 minutes by the clock (I don't always use the clock).
The current CFAR workshop has a few places where participants explicitly Nail (Hamming Questions and Resolve Cycles, off the top of my head), and sometimes has a place where participants are at least exposed to the concept of Hammering (which we call Overlearning).
I was once told by an older graduate student to explicitly keep two lists: a list of problems and a list of techniques. Then, anytime I hear about a new problem, I add it to the problem list and check it against my technique list, and anytime I hear about a new technique, I add it to the technique list and check it against my problem list. I never did it (in mathematics) but it does seem like a sensible idea (in mathematics).
I'd like to do a version of this in rationality, but I find that my bugs lists decay rapidly; after a period of as little as a few days my sense of what my real bugs are shifts and I have to regenerate the bugs list from scratch or else it feels dead. I don't keep a technique list because I find explicitly applying techniques to be mostly a chore but there might be a version of that that doesn't suck for me.
Weird thought, but if your bugs list decays quickly, maybe you've not found the most important bugs? In other fields (e.g. mathematics), we continue working on the same problem for years/decades.
It's not that my bugs change all that much or often per se - I've had many of the same bug symptoms - but that I keep changing my sense of what the right frame to describe the bug is.
Sometimes I think the power of all these "keep a list of" techniques really lives in the generalized ability to keep lists.
This feels very true. One could rephrase it as the ability to ensure that what you learn/figure out/discover gets solidified and built upon. I'm in the process of experimenting with the best way for me to keep in mind the highest-leverage bugs in my life.
Okay, so my three core rationality tools are the intellectual Turing test, making bets, and Fermi estimates / Fermi modelling (and also something like 'get curious', but I've not made my thinking there totally explicit yet).
Let's go with making bets (aka 'the tool that helps you update on evidence'):
Making bets is one of those things that sounds good in principle but that I haven't gotten around to doing (sort of like this "list 10 applications" exercise). I feel like my time horizons are simply too short right now to keep track of multi-year bets, which seem to be all the interesting ones that people want to take. Any tips?
You can play betting games where the bets resolve instantly because you can look up the answers, e.g. trading a contract with a friend worth the base-10 logarithm of the mass of the sun, in dollars. Basically competitive Fermi estimates. You can do this any time you want to look something up, right before looking it up.
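As a worked illustration of how such a contract might settle (the trade mechanics and the quoted price here are just one possible convention, not a claim about how anyone actually runs these games):

```python
import math

# The "contract" pays its holder the base-10 log of the sun's mass, in dollars.
# You trade at an agreed price, then look up the answer to settle immediately.

true_mass_kg = 1.989e30                 # looked up after the trade
true_value = math.log10(true_mass_kg)   # ~30.30 dollars

quoted_price = 28.0  # hypothetical: I quote the contract at $28
# My friend thinks it is worth more, so they buy it from me at that price.
buyer_profit = true_value - quoted_price   # ~ +$2.30
seller_profit = -buyer_profit              # ~ -$2.30

print(f"Contract settles at ${true_value:.2f}")
print(f"Buyer makes ${buyer_profit:.2f}; seller loses the same")
```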
Find some small area of your life where you can keep making bets with a friend. For example, if you go to the shops once a week with a partner, you can bet each week on whether a particular product will be sold out, or whether a certain product will have a reduced price. Two guys who live in my house play Smash Bros regularly, and they make bets about how games will go when e.g. a new friend plays with them.
What do you do regularly with friends that you could make bets about?
I get a lot of mileage out of using Rationalist Taboo, or out of thinking about concepts rather than about words.
All of the following hot-button questions are very easily solved using this technique. As Scott Alexander points out, you can get a reputation as a daring and original thinker just by using this one thing over and over again; it's one of the best Hammers in the rationality community.
Taking examples and simple tools from the healthiest research fields (maths, physics, cs) is really great, and the exercise you gave at the end was excellent and produced some awesome comments. For these reasons, I've curated the post.
When it comes to the idea of Hammers and Nails, it's useful to keep in mind that you will get a hit when you use a hammer that nobody else has tried on that particular nail before.
The more uncommon your hammer happens to be, the more likely it is that there weren't hundreds of other people before you who tried to use it on the nail.
Feynman also explicitly spoke about hammer-mode: "[so] I had got a great reputation for doing integrals, only because my box of tools was different from everybody else's, and they had tried all their tools on it before giving the problem to me". There are also some excerpts here: https://www.farnamstreetblog.com/2016/07/mental-tools-richard-feynman/
Favorite technique: Argue with yourself about your conclusions.
By which I mean: if I have any reasonable doubt about some idea, belief, or plan, I split my mind into two debaters who take opposite sides of the issue, each of whom wants to win, and I use my natural competitiveness to drive insight into the issue.
I think the accustomed use of this would be investigating my deeply held beliefs and trying to get to their real weak points, but it is also useful for:
I think murphyjitsu is my favorite technique.
Going through Hammertime for the second time now. I tried to figure out 10 not-too-usual ways in which to utilize predictions and forecasting. Not perfectly happy with the list of course, but a few of these ideas do seem (and in my experience actually are; 1 and 2 in particular) quite useful.
Ten uses for goal factoring that I personally would not normally consider:
1. When I'm craving a certain food or just hungry in general, I could break it up into flavors, textures, and nutrients I desire and cook up something new that fits exactly what I want
2. For shopping lists
3. For choosing what items to keep or discard when decluttering
4. Create a business plan by having some target users goal factor their problem
5. Goal factor my relationship with someone I'm close to and have them do the same, then share the results with each other.
6. Political cause prioritization.
7. Goal factor X so I can write a poem about X
8. To make my dating website profiles more honest
9. To choose which friends to hang out with more
10. To see whether I truly understand my friend's hard situation, I could put myself in their shoes and imagine them going through the goal factoring process for the hard problem they're dealing with. After getting what I think are their motivations, tell them.
Ok... I'm not sure if it can be counted as an "instrumental technique", but I often think in terms of Kahneman's System 1 and System 2.
The exercise ... was hard, so I also included ideas which are common but can be "rediscovered" this way.
Hammer: when there’s low downside, you’re free to try things. (Yeah, this is a corollary of expected utility maximization that seems obvious, but I still feel like I needed to explicitly and recently learn it.) Ten examples:
My favourite trick is "noticing when I am not actually upset/angry/tired with someone or something". I started doing it before I learned about LW - back then I called it "don't fall down before you're hit" in my head. For example, I come to visit a friend who has a young child, and have to sit outside for half an hour before she picks up her phone - but the weather is fine, and I notice I'm not actually annoyed by having to wait.
If all you have is a hammer, everything looks like a nail.
The most important idea I've blogged about so far is Taking Ideas Seriously, which is itself a generalization of Zvi's More Dakka. This post is an elaboration of how to fully integrate a new idea.
I draw a dichotomy between Hammers and Nails:
A Hammer is someone who picks one strategy and uses it to solve as many problems as possible.
A Nail is someone who picks one problem and tries all the strategies until it gets solved.
Human beings are generally Nails, fixating on one specific problem at a time and throwing their entire toolkit at it. A Nail gets good at solving important problems slowly and laboriously but can fail to recognize the power and generality of his tools.
Sometimes it's better to be a Hammer. Great advice is always a hammer: an organizing principle that works across many domains. To get the most mileage out of a single hammer, don't stop at using it to tackle your current pet problem. Use it everywhere. Ideas don't get worn down from use.
Regardless of which you are at a given moment, be systematic because Choices are Bad.
Only a Few Tricks
I am reminded of a classic speech of the mathematician Gian-Carlo Rota. His fifth point is to be a Hammer (emphasis mine):
The greatest mathematicians of all time created vast swathes of their work by applying a single precious technique to every problem they could find. My favorite book of mathematics is The Probabilistic Method, by Alon and Spencer. It never ceases to amaze me that this same method applies to:
It's amusing to note that in the same speech, Rota expounded the benefits of being a Nail just two points later:
Both mindsets are vital.
To be a Nail is to study a single problem from every angle. It is often the case that each technique sheds light on only one side of the problem, and by circumambulating it via the application of many hammers at once, one corners the problem in a deep way. This remains true well past a problem's resolution - insight can continue to be drawn from it as other methods are applied and more satisfying proofs attained.
Usually even the failure of certain techniques sheds light on the shape of the difficulty. One classic example of an enlightening failure is the consistent overcounting (by exactly a factor of two!) of primes by sieve methods. This failure is so serious and unfixable that it has its own name: the Parity Problem.
Dually, to be a Hammer is to study a single technique from every angle. In the case of the probabilistic method, a breadth of cheap applications were found immediately by simply systematically studying uniform random constructions. However, particularly adept Hammers like Erdős upgraded the basic method into a superweapon by steadfastly applying it to harder and harder problems. Variations of the Probabilistic Method like the Lovász Local Lemma, Shearer's entropy lemma, and the Azuma-Hoeffding inequality are now canon due to the persistence of Hammers.
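For readers who have not seen the method before, its textbook first application (Erdős's 1947 lower bound for diagonal Ramsey numbers) fits in a few lines. This is a standard illustration of the uniform-random-construction style mentioned above, not anything specific to this post:

```latex
\documentclass{article}
\usepackage{amsmath, amssymb, amsthm}
\newtheorem{theorem}{Theorem}

\begin{document}
% Classic first application of the probabilistic method (Erdős, 1947):
% a lower bound for the diagonal Ramsey number R(k,k).
\begin{theorem}
If $\binom{n}{k}\, 2^{\,1-\binom{k}{2}} < 1$, then $R(k,k) > n$.
\end{theorem}
\begin{proof}[Proof sketch]
Color each edge of $K_n$ red or blue independently with probability $\tfrac12$.
For any fixed set $S$ of $k$ vertices, the probability that $S$ spans a
monochromatic $K_k$ is $2^{\,1-\binom{k}{2}}$, so the expected number of
monochromatic copies of $K_k$ is $\binom{n}{k}\, 2^{\,1-\binom{k}{2}} < 1$.
Some coloring therefore has none, and $R(k,k) > n$.
\end{proof}
\end{document}
```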
Be Systematic
The upshot is not that Hammers are better than Nails. Rather, there is a place for both Hammers and Nails, and in particular both mindsets are far superior to the wishy-washy blind meandering that characterizes overwhelmed novices. There may be an endless supply of advice - even great advice - on the internet, and yet any given person should organize their life around systematically applying a few tricks or solving a few problems.
Taking an idea seriously is difficult and expensive. You'll have to tear down competing mental real estate and build a whole new palace for it. You'll have to field test it all over the place without getting superstitious. You'll have to gently titrate for the amount you need until you have enough Dakka.
Therefore, be a Hammer and make that idea pay rent. Hell, you're the president, the emperor, the king. There's no rent control in your head! Get that idea for all it's got.
Exercise for the reader: all things have their accustomed uses. Give me ten unaccustomed uses of your favorite instrumental rationality technique! (Bonus points for demonstrating intent to kill.)