Privileging the Question
Related to: Privileging the Hypothesis
Remember the exercises in critical reading you did in school, where you had to look at a piece of writing and step back and ask whether the author was telling the whole truth? If you really want to be a critical reader, it turns out you have to step back one step further, and ask not just whether the author is telling the truth, but why he's writing about this subject at all.
-- Paul Graham
There's an old saying in the public opinion business: we can't tell people what to think, but we can tell them what to think about.
-- Doug Henwood
Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct: If you give them a question, they try to answer it.
Here are some political questions that commonly get discussed in US media: should gay marriage be legal? Should Congress pass stricter gun control laws? Should immigration policy be tightened or relaxed?
These are all examples of what I'll call privileged questions (if there's an existing term for this, let me know): questions that someone has unjustifiably brought to your attention in the same way that a privileged hypothesis unjustifiably gets brought to your attention. The questions above are probably not the most important questions we could be answering right now, even in politics (I'd guess that the economy is more important). Outside of politics, many LWers probably think "what can we do about existential risks?" is one of the most important questions to answer, or possibly "how do we optimize charity?"
Why has the media privileged these questions? I'd guess that the media is incentivized to ask whatever questions will get them the most views. That's a very different goal from asking the most important questions, and is one reason to stop paying attention to the media.
The problem with privileged questions is that you only have so much attention to spare. Attention paid to a question that has been privileged funges against attention you could be paying to better questions. Even worse, it may not feel from the inside like anything is wrong: you can apply all of the epistemic rationality in the world to answering a question like "should Congress pass stricter gun control laws?" and never once ask yourself where that question came from and whether there are better questions you could be answering instead.
I suspect this is a problem in academia too. Richard Hamming once gave a talk in which he related the following story:
Over on the other side of the dining hall was a chemistry table. I had worked with one of the fellows, Dave McCall; furthermore he was courting our secretary at the time. I went over and said, "Do you mind if I join you?" They can't say no, so I started eating with them for a while. And I started asking, "What are the important problems of your field?" And after a week or so, "What important problems are you working on?" And after some more time I came in one day and said, "If what you are doing is not important, and if you don't think it is going to lead to something important, why are you at Bell Labs working on it?" I wasn't welcomed after that; I had to find somebody else to eat with!
Academics answer questions that have been privileged in various ways: perhaps the questions their advisor was interested in, or the questions they'll most easily be able to publish papers on. Neither of these is necessarily well-correlated with the most important questions.
So far I've found one tool that helps combat the worst privileged questions, which is to ask the following counter-question:
What do I plan on doing with an answer to this question?
With the worst privileged questions I frequently find that the answer is "nothing," sometimes with the follow-up answer "signaling?" That's a bad sign. (Edit: but "nothing" is different from "I'm just curious," say in the context of an interesting mathematical or scientific question that isn't motivated by a practical concern. Intellectual curiosity can be a useful heuristic.)
(I've also found the above counter-question generally useful for dealing with questions. For example, it's one way to notice when a question should be dissolved, and asked of someone else it's one way to help both of you clarify what they actually want to know.)
[LINK] Causal Entropic Forces
This paper seems relevant to various LW interests. It smells like The Second Law of Thermodynamics, and Engines of Cognition, but I haven't wrapped my head enough around either to say more than that. Abstract:
Recent advances in fields ranging from cosmology to computer science have hinted at a possible deep connection between intelligence and entropy maximization, but no formal physical relationship between them has yet been established. Here, we explicitly propose a first step toward such a relationship in the form of a causal generalization of entropic forces that we find can cause two defining behaviors of the human “cognitive niche”—tool use and social cooperation—to spontaneously emerge in simple physical systems. Our results suggest a potentially general thermodynamic model of adaptive behavior as a nonequilibrium process in open systems.
Post Request Thread
This thread is another experiment roughly in the vein of the Boring Advice Repository and the Solved Problems Repository.
There are some topics I'd like to see more LW posts on, but I feel underqualified to post about them relative to my estimate of the most qualified LWer on the topic. I would guess that I am not the only one. I would further guess that there are some LWers who are really knowledgeable about various topics and might like to write about one of them but are unsure which one to choose.
If my guesses are right, these people should be made aware of each other. In this thread, please comment with a request for a LW post (Discussion or Main) on a particular topic. Please upvote such a comment if you would also like to see such a post, and comment on such a comment if you plan on writing such a post. If you leave a writing-plan comment, please edit it once you actually write the post and link to the post so as to avoid duplication of effort in the future.
Let's see what happens!
Edit: it just occurred to me that it might also be reasonable to comment indicating what topics you'd be interested in writing about and then asking people to tell you which ones they'd like you to write about the most. So try that too!
Solved Problems Repository
Follow-up to: Boring Advice Repository
Many practical problems in instrumental rationality appear to be wide open. Two I've been annoyed by recently are "what should I eat?" and "how should I exercise?" However, some appear to be more or less solved. For example, various mnemonic techniques like memory palaces, along with spaced repetition, seem to more or less solve the problem of memorization.
I would like people to use this thread to post other examples of solved problems in instrumental rationality. I'm pretty sure you all collectively know good examples; there's a comment I can't find from a user who said something like "taking a flattering photograph of yourself is a solved problem," and it's likely that there are other useful examples like this that aren't common knowledge. Err on the side of posting solutions which may not be universal but are still likely to be helpful to many people.
(This thread is allowed to not be boring! Go wild!)
Boring Advice Repository
This is an extension of a comment I made that I can't find and also a request for examples. It seems plausible that, when giving advice, many people optimize for deepness or punchiness of the advice rather than for actual practical value. There may be good reasons to do this - e.g. advice that sounds deep or punchy might be more likely to be listened to - but as a corollary, there could be valuable advice that people generally don't give because it doesn't sound deep or punchy. Let's call this boring advice.
An example that's been discussed on LW several times is "make checklists." Checklists are great. We should totally make checklists. But "make checklists" is not a deep or punchy thing to say. Other examples include "google things" and "exercise."
I would like people to use this thread to post other examples of boring advice. If you can, provide evidence and/or a plausible argument that your boring advice actually is useful, but I would prefer that you err on the side of boring but not necessarily useful in the name of more thoroughly searching a plausibly under-searched part of advicespace.
Upvotes on advice posted in this thread should be based on your estimate of the usefulness of the advice; in particular, please do not vote up advice just because it sounds deep or punchy.
Think Like a Supervillain
See also: Everything I Needed To Know About Life, I Learned From Supervillains
Mr. Malfoy would hardly shrink from talk of ordinary murder, but even he was shocked - yes you were Mr. Malfoy, I was watching your face - when Mr. Potter described how to use his classmates' bodies as raw material. There are censors inside your mind which make you flinch away from thoughts like that. Mr. Potter thinks purely of killing the enemy, he will grasp at any means to do so, he does not flinch, his censors are off.
A while back, I claimed the Less Wrong username Quirinus Quirrell, and started hosting a long-running, approximate simulation of him in my brain. I have mostly used the account trivially - to play around with crypto-novelties, say mildly offensive things I wouldn't otherwise, and poke fun at Clippy. Several times I have doubted the wisdom of hosting such a simulation. Quirrell's values are not my own, and the plans that he generates (which I have never followed) are mostly bad when viewed in terms of my values. However, I have chosen to keep this occasional alter-identity, because he sees things that would otherwise be invisible to me.
I was once asked whether I would rather be a superhero or a supervillain, and I probably shouldn't tell you how little time it took for me to answer "supervillain."
Being a superhero sounds awful, at least if you intend to keep being recognized as a superhero. Superheroes are bound by the chains of public opinion. A superhero can only do what people generally agree is good for superheroes to do. If you stray too far off the beaten path in search of how best to use your superpowers to actually save the world, you could easily end up doing things that look, at first glance, anywhere from somewhat to incredibly evil. And if people are going to turn against you once you start actually optimizing, you might as well just be a supervillain to begin with. They look like they're having more fun anyway.
You probably won't get the chance to decide between being a superhero or a supervillain, but you do get the chance to decide what kind of person you think of yourself as, and I think you should think of yourself more as a supervillain than as a superhero. Why?
In the same way that being a superhero limits what you can do, thinking of yourself as a superhero limits what you can think. And if you want to save the world, you can't afford to limit what you can think. Humanity faces many difficult problems, and the space of possible solutions to any one of these problems is large. If you have censors in your mind that are preventing you from looking at parts of this space because some of your moral intuitions don't like them ("that's not the kind of thing a superhero would do!"), you're crippling your ability to search for solutions to problems. For example, your moral intuitions are likely to flinch away from solutions to problems that involve you causing bad things to happen but be okay with solutions to problems that involve you failing to prevent bad things from happening (think of the trolley problem, or Batman's policy of not killing his enemies).
Edit (2/19): But thinking of yourself as a supervillain has the opposite effect. It's easier not to flinch at certain kinds of ideas, which now come more easily to mind and may not have otherwise occurred to you. For example, on Facebook, Eliezer recently mentioned a thread where people were posting examples of things that they valued at a billion dollars or more, such as their cats. With a supervillain module running in the background, I noticed and pointed out that this constituted a thread where people publicly described how they could be ransomed. I can't exactly test this, but I don't think this kind of idea would have occurred to me before I installed the supervillain module. (This is a tame example. I won't give less tame examples for obvious reasons.)
There are many things you can't say, but you don't have to say everything you think. Until someone discovers a technique for reliably reading human minds, think whatever thoughts best help you accomplish your goals without worrying about any moral labels they may or may not, upon reflection, ultimately warrant. Moral labels are for a later step in the decision process than the part where you generate ideas.
Rationalist Lent
As I understand it, Lent is a holiday where we celebrate the scientific method by changing exactly one variable in our lives for 40 days. This seems like a convenient Schelling point for rationalists to adopt, so:
What variable are you going to change for the next 40 days?
(I am really annoyed I didn't think of this yesterday.)
Thoughts on the January CFAR workshop
So, the Center for Applied Rationality just ran another workshop, which Anna kindly invited me to. Below I've written down some thoughts on it, both to organize those thoughts and because it seems other LWers might want to read them. I'll also invite other participants to write down their thoughts in the comments. Apologies if what follows isn't particularly well-organized.
Feelings and other squishy things
The workshop was totally awesome. This is admittedly not strong evidence that it accomplished its goals (cf. Yvain's comment here), but being around people motivated to improve themselves and the world was totally awesome, and learning with and from them was also totally awesome, and that seems like a good thing.
Also, the venue was fantastic. CFAR instructors reported that this workshop was more awesome than most, and while I don't want to discount improvements in CFAR's curriculum and its selection process for participants, I think the venue counted for a lot. It was uniformly beautiful and there were a lot of soft things to sit down or take naps on, and I think that helped everybody be more comfortable with and relaxed around each other.
Main takeaways
Here are some general insights I took away from the workshop. Some of them I had already been aware of on some abstract intellectual level but hadn't fully processed and/or gotten drilled into my head and/or seen the implications of.
- Epistemic rationality doesn't have to be about big things like scientific facts or the existence of God, but can be about much smaller things like the details of how your particular mind works. For example, it's quite valuable to understand what your actual motivations for doing things are.
- Introspection is unreliable. Consequently, you don't have direct access to information like your actual motivations for doing things. However, it's possible to access this information through less direct means. For example, if you believe that your primary motivation for doing X is that it brings about Y, you can perform a thought experiment: imagine a world in which Y has already been brought about. In that world, would you still feel motivated to do X? If so, then there may be reasons other than Y that you do X.
- The mind is embodied. If you consistently model your mind as separate from your body (I have in retrospect been doing this for a long time without explicitly realizing it), you're probably underestimating the powerful influence of your mind on your body and vice versa. For example, dominance of the sympathetic nervous system (which governs the fight-or-flight response) over the parasympathetic nervous system is unpleasant, unhealthy, and can prevent you from explicitly modeling other people. If you can notice and control it, you'll probably be happier, and if you get really good, you can develop aikido-related superpowers.
- You are a social animal. Just as your mind should be modeled as a part of your body, you should be modeled as a part of human society. For example, if you don't think you care about social approval, you are probably wrong, and thinking that will cause you to have incorrect beliefs about things like your actual motivations for doing things.
- Emotions are data. Your emotional responses to stimuli give you information about what's going on in your mind that you can use. For example, if you learn that a certain stimulus reliably makes you angry and you don't want to be angry, you can remove that stimulus from your environment. (This point should be understood in combination with point 2 so that it doesn't sound trivial: you don't have direct access to information like what stimuli make you angry.)
- Emotions are tools. You can trick your mind into having specific emotions, and you can trick your mind into having specific emotions in response to specific stimuli. This can be very useful; for example, tricking your mind into being more curious is a great way to motivate yourself to find stuff out, and tricking your mind into being happy in response to doing certain things is a great way to condition yourself to do certain things. Reward your inner pigeon.
Here are some specific actions I am going to take / have already taken because of what I learned at the workshop.
- Write a lot more stuff down. What I can think about in my head is limited by the size of my working memory, but a piece of paper or a WorkFlowy document doesn't have this limitation.
- Start using a better GTD system. I was previously using RTM, but badly: I used it exclusively from my iPhone, where the due date of a new item defaults to "today." When adding an item from a browser, the due date instead defaults to "never," but since I had never used the browser interface, I didn't realize "never" was an option. As a result, items that didn't actually have due dates got due dates attached anyway, and I became reluctant to add items without natural due dates (e.g. "look at this interesting thing sometime"). That was bad on two counts: RTM wasn't collecting a lot of things, and I stopped trusting my own due dates.
- Start using Boomerang to send timed email reminders to future versions of myself. I think this might work better than using, say, calendar alerts because it should help me conceptualize past versions of myself as people I don't want to break commitments to.
I'm also planning to take various actions that I'm not writing above but instead putting into my GTD system, such as practicing specific rationality techniques (the workshop included many useful worksheets for doing this) and investigating specific topics like speed-reading and meditation.
The arc word (TVTropes warning) of this workshop was "agentiness." ("Agentiness" is more funtacular than "agency.") The CFAR curriculum as a whole could be summarized as teaching a collection of techniques to be more agenty.
Miscellaneous
A distinguishing feature the people I met at the workshop seemed to have in common was the ability to go meta. This is not a skill which was explicitly mentioned or taught (although it was frequently implicit in the kind of jokes people told), but it strikes me as an important foundation for rationality: it seems hard to progress with rationality unless the thought of using your brain to improve how you use your brain, and also to improve how you improve how you use your brain, is both understandable and appealing to you. This probably eliminates most people as candidates for rationality training unless it's paired with or maybe preceded by meta training, whatever that looks like.
One problem with the workshop was lack of sleep, which seemed to wear out both participants and instructors by the last day (classes started early in the day and conversations often continued late into the night because they were unusually fun / high-value). Offering everyone modafinil or something at the beginning of future workshops might help with this.
Overall
Overall, while it's too soon to tell how big an impact the workshop will have on my life, I anticipate a big impact, and I strongly recommend that aspiring rationalists attend future workshops.