I'd be interested in lukeprog's (or CFAR's) thoughts on how to implement "tight feedback loops" into every day instrumental rationality (as opposed to running a business or project).
I'd be interested in writing this one. I don't think your divide is a real one; it's basically the same skill. But it's still worth talking about in that context.
I just launched the alpha of forget.io, a service for developing habits and recording data in self-experimentation. It texts you on your phone; you text it back. My stereotypical question (and the one I invented it for) is "How happy are you on a scale of 1-10?" Free to minicamp participants; costs a small fee for everyone else (although only enough to pay for the text messages).
Open Request for Writing Assistance
I have several rough drafts of things I'd like to post to Less Wrong. The one I'm currently working on is about Solomonoff Induction. I seem to be best-motivated by active feedback while writing; this thread is mainly to request feedback while writing in the future. If you'd be interested in reading what I'm writing every few paragraphs (either because you'd find it interesting or in order to cause it to be written), I would very much appreciate that.
As long as I'm making that request, I might as well make two more: I would also like to hire someone who can edit writing for flow, and someone who can copy-edit. These could be two people, or maybe you're amazing and can be both. I'm willing to pay around $10/hr to anyone interested. If you're editing for flow, I'd like to see a sample of your writing.
Thanks in advance to any volunteer readers!
Can you point us to the more interesting checklist resources?
Absolutely. I can give better resources if you can be more specific as to what you're looking for.
I recommend The Checklist Manifesto first as an overview, as well as a basic understanding of akrasia, and trying and failing to make and use some checklists yourself.
The resources I spent most of my time with were very specific to what I was working on, and so I wouldn't recommend them. However, just in case someone finds it useful, Human Factors of Flight-Deck Checklists: The Normal Checklist draws attention to some common failure modes of checklists outside the checklist itself.
This is awesome. I might remove the examples, print out the rest of the list, and read it every morning when I get up and every night before going to sleep. OTOH I have a few quibbles with some examples:
Recent example from Anna: Jumping off the Stratosphere Hotel in Las Vegas in a wire-guided fall. I knew it was safe based on 40,000 data points of people doing it without significant injury, but to persuade my brain I had to visualize 2 times the population of my college jumping off and surviving. Also, my brain sometimes seems much more pessimistic, especially about social things, than I am, and is almost always wrong.
For some reason my brain is more comfortable working with numbers than with visualizations. That can be bad for signalling: a few years ago there was a terrorist attack in London which affected IIRC about 300 people; my mother told me "you should call [your friend who's there] and ask him if he's all right", and I answered "there are 10 million people in London, so the probability that he was involved is about 1 in 30,000, which is less than the probability that he would die naturally in..."; my mother called me heartless before I even finished the sentence.
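The odds in that reply are just a quick division. A sketch using the comment's own rough figures (not exact casualty or population data):

```python
# Rough odds that one particular Londoner was among those affected,
# using the comment's figures (~300 affected, ~10 million residents).
affected = 300
population = 10_000_000

odds = population / affected
print(f"roughly 1 in {odds:,.0f}")  # roughly 1 in 33,333
```

which is in the same ballpark as the "about 1 in 30,000" quoted above.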
Recent example from Anna's brother: Trying to decide whether to move to Silicon Valley and look for a higher-paying programming job, he tried a reframe to avoid the status quo bias: If he was living in Silicon Valley already, would he accept a $70K pay cut to move to Santa Barbara with his college friends? (Answer: No.)
There's a huge difference: someone living in Silicon Valley on $70K + x and considering whether to stay there or move to Santa Barbara and earn x would be used to living on $70K + x; whereas someone living in Santa Barbara on x and considering whether to move to Silicon Valley and earn x + $70K or stay there would be used to living on x. This would affect how much each of them would enjoy a given amount of money. Also, the former would already have a social circle in Silicon Valley, and the latter wouldn't.
Recent example from Anna: I noticed that every time I hit 'Send' on an email, I was visualizing all the ways the recipient might respond poorly or something else might go wrong, negatively reinforcing the behavior of sending emails. I've (a) stopped doing that (b) installed a habit of smiling each time I hit 'Send' (which provides my brain a jolt of positive reinforcement). This has resulted in strongly reduced procrastination about emails.
Huh, no. If they are likely to respond badly, I want to believe they are likely to respond badly. If they aren't likely to respond badly, I want to believe they aren't likely to respond badly. What is true is already so; owning up to it doesn't make it worse. The solution to that problem is to think twice, re-read the email, and think about ways to make it less likely to be interpreted in an unintended way before hitting Send.
This is awesome. I might remove the examples, print out the rest of the list, and read it every morning when I get up and every night before going to sleep.
Interesting you should say that. About a week ago I simplified this into a more literal checklist designed to be used as part of a nightly wind-down, to see if it could maintain or instill habits. I designed the checklist based largely on empirical results from NASA's review of the factors that make pilots' pre-flight safety checklists effective, though I chased down a number of other checklist-related resources as well. I'm currently testing its effects on myself and others, both making sure it would actually be used and getting the time down to the minimum possible (it's hovering around two minutes).
P.S. I'm not associated with CFAR but the checklist is an experiment on their request.
If you were to test your suggestion for two weeks, I would be interested to hear the results. My prediction (with 80% certainty) is: Lbh jvyy trg cbfvgvir erfhygf sbe n avtug be gjb. Jvguva gra qnlf, lbh jvyy svaq gur yvfg nirefvir / gbb zhpu jbex naq fgbc ernqvat vg, ortva gb tynapr bire vg jvgubhg cebprffvat nalguvat, be npgviryl fgbc gb svk bar bs gur nobir ceboyrzf. (Gur nezl anzr znxrf zr yrff pregnva guna hfhny--zl fgrerbglcr fnlf lbh znl or oberq naq/be qvfpvcyvarq.)
No, by "via rationality", I mean via rationality. You cannot use the rational part of their brain to convince them that it is good to be rational, because the rational part of them already knows that; it's just not in charge.
Convincing them, through the rational part of themselves, that eating a certain food gives them a stomachache is often easy. But that's a completely different problem, with no real relation to the one I was talking about.
So, let's call the thing I'm talking about "winning". It is EXTREMELY helpful although not logically necessary to think winning is a good idea in order to win. I'm talking about how to convince people of that helpful step, so they can, next, learn how to win, and finally, apply the knowledge and win.
Either you're talking about a rationality that doesn't consist of winning, or I'm hearing: "You cannot use the 'winning' part of their brain to convince them that it is good to win, because the 'winning' part of them already knows that, it's just not in charge." Why on earth should I restrict myself to some arbitrary 'winning' part of their brain, if such a thing existed, to convince them that it's good to win? That sounds silly.
Please let me know if I even make sense.
- Figure out your goals, and then make plans for when you get off work to optimize for those. Working as a cashier doesn't seem optimal for almost any purpose--maybe you could start by figuring out how to make money more efficiently, if that's your goal?
- Learn the major system or memory palace. This would let you store a list of things to think about or do when at work. It's also quite easy to practice while at work, once you get the basics down. I'd recommend this first, if you really won't be allowed to write.
- Solve problems. See what problem-solving methods work and which don't. See what kinds of problems you are worst/best at, and become better at those. Math problems, world-modeling (prediction and underlying event deduction), and introspection are especially easy to do in your head.
- Try to figure out why stuff around you is the way it is. (Why did that person buy that item?) Make predictions; calibrate and get more accurate over time.
- Introspect. Find out why you believe what you believe, and whether you should.
- Don't improve your rationality, do something else with your time.
- Optimize your job as a cashier, as much as is possible. Figure out how to do stuff in the least time. Experiment when interacting with customers to see if you can get tips or interesting conversation. Get a different job (manager?) at the same establishment somehow. A useful problem will motivate you more than a non-useful problem.
- Combine all these.
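The major system mentioned in the second suggestion can be sketched as a lookup table. The digit-to-consonant mapping below is the common convention; vowels carry no value, so you insert them freely to turn the consonants into a memorable word:

```python
# Standard major-system mapping from digits to consonant sounds.
MAJOR = {
    "0": "s/z", "1": "t/d", "2": "n", "3": "m", "4": "r",
    "5": "l", "6": "j/sh/ch", "7": "k/g", "8": "f/v", "9": "p/b",
}

def encode(number: str) -> list[str]:
    """Return the consonant sounds that encode each digit of `number`."""
    return [MAJOR[d] for d in number]

print(encode("32"))  # ['m', 'n'] -> e.g. the word "MooN"
```

Practising the digit-to-sound conversion itself is something you can do silently at a register.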
Why not practise mental arithmetic, like 45 times 23? It's not really rationality, but it can't hurt. It's probably good for your brain somehow.
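For what it's worth, the standard mental trick for an example like that is to split one factor into round parts and add the partial products:

```python
# 45 * 23, decomposed the way you'd do it in your head:
# multiply by 20, multiply by 3, then add.
partial_tens = 45 * 20  # 900
partial_ones = 45 * 3   # 135
print(partial_tens + partial_ones)  # 1035
```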
Or you could try doing fun pointless economics or physics calculations. If you're a cashier at a supermarket, you could calculate how far the chemical potential energy in a can of soup or whatever would propel it into the air, and do the calculation for as many products as you can find. Or figure out what proportion of the money that comes through your register you would have had to steal and invest twenty years ago in order to match your current salary. Or something like that. I dunno.
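The soup-can calculation reduces to E = mgh. A sketch with invented round figures (a 300 kcal can weighing 0.4 kg; these are not any real product's numbers):

```python
g = 9.81                             # gravitational acceleration, m/s^2
calories_kcal = 300                  # assumed energy content on the label
mass_kg = 0.4                        # assumed mass of the full can

energy_j = calories_kcal * 4184      # 1 food kcal = 4184 J
height_m = energy_j / (mass_kg * g)  # solve E = m*g*h for h; ignores air drag
print(f"~{height_m / 1000:.0f} km straight up")  # ~320 km
```

The stolen-and-invested version is just compound interest run backwards: pick an assumed annual return r and divide the target amount by (1 + r)**20.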
(Note: Here in Australia, cashier might have a different meaning. I hope I didn't offend you by implying you were a check out guy in a supermarket.)
Downvoted for "It's probably good for your brain somehow."
Yeah, I don't watch TED anymore. Any other specific suggestions?
I can't give another suggestion unless you tell me what's undesirable about watching TED. There's a transcript on the site, but he uses graphics copiously, so I'm curious how useful it is. Less Wrong says it is too long to post as a comment.
I've often found the examples in some rationality skill discussions difficult to relate to, even though the skill in question seems relevant. The context something is discussed in will make it more or less accessible to different people, even when it's the same skill and beneficial to all concerned.
EXACTLY.