Privileging the Question
Related to: Privileging the Hypothesis
Remember the exercises in critical reading you did in school, where you had to look at a piece of writing and step back and ask whether the author was telling the whole truth? If you really want to be a critical reader, it turns out you have to step back one step further, and ask not just whether the author is telling the truth, but why he's writing about this subject at all.
-- Paul Graham
There's an old saying in the public opinion business: we can't tell people what to think, but we can tell them what to think about.
-- Doug Henwood
Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct: If you give them a question, they try to answer it.
Here are some political questions that seem to commonly get discussed in US media: should gay marriage be legal? Should Congress pass stricter gun control laws? Should immigration policy be tightened or relaxed?
These are all examples of what I'll call privileged questions (if there's an existing term for this, let me know): questions that someone has unjustifiably brought to your attention in the same way that a privileged hypothesis unjustifiably gets brought to your attention. The questions above are probably not the most important questions we could be answering right now, even in politics (I'd guess that the economy is more important). Outside of politics, many LWers probably think "what can we do about existential risks?" is one of the most important questions to answer, or possibly "how do we optimize charity?"
Why has the media privileged these questions? I'd guess that the media is incentivized to ask whatever questions will get them the most views. That's a very different goal from asking the most important questions, and is one reason to stop paying attention to the media.
The problem with privileged questions is that you only have so much attention to spare. Attention paid to a question that has been privileged funges against attention you could be paying to better questions. Even worse, it may not feel from the inside like anything is wrong: you can apply all of the epistemic rationality in the world to answering a question like "should Congress pass stricter gun control laws?" and never once ask yourself where that question came from and whether there are better questions you could be answering instead.
I suspect this is a problem in academia too. Richard Hamming once gave a talk in which he related the following story:
Over on the other side of the dining hall was a chemistry table. I had worked with one of the fellows, Dave McCall; furthermore he was courting our secretary at the time. I went over and said, "Do you mind if I join you?" They can't say no, so I started eating with them for a while. And I started asking, "What are the important problems of your field?" And after a week or so, "What important problems are you working on?" And after some more time I came in one day and said, "If what you are doing is not important, and if you don't think it is going to lead to something important, why are you at Bell Labs working on it?" I wasn't welcomed after that; I had to find somebody else to eat with!
Academics answer questions that have been privileged in various ways: perhaps the questions their advisor was interested in, or the questions they'll most easily be able to publish papers on. Neither of these are necessarily well-correlated with the most important questions.
So far I've found one tool that helps combat the worst privileged questions, which is to ask the following counter-question:
What do I plan on doing with an answer to this question?
With the worst privileged questions I frequently find that the answer is "nothing," sometimes with the follow-up answer "signaling?" That's a bad sign. (Edit: but "nothing" is different from "I'm just curious," say in the context of an interesting mathematical or scientific question that isn't motivated by a practical concern. Intellectual curiosity can be a useful heuristic.)
(I've also found the above counter-question generally useful for dealing with questions. For example, it's one way to notice when a question should be dissolved, and asked of someone else it's one way to help both of you clarify what they actually want to know.)
Memetic Tribalism
Related: Politics is the Mind-Killer, Other-Optimizing
When someone says something stupid, I get an urge to correct them. Based on the stories I hear from others, I'm not the only one.
For example, some of my friends are into this rationality thing, and they've learned about all these biases and correct ways to get things done. Naturally, they get irritated with people who haven't learned this stuff. They complain about how their family members or coworkers aren't rational, and they ask what is the best way to correct them.
I could get into the details of the optimal set of arguments to turn someone into a rationalist, or I could go a bit meta and ask: "Why would you want to do that?"
Why should you spend your time correcting someone else's reasoning?
One reason that comes up is that it's valuable, for whatever reason, to change their reasoning. OK, so when is that actually possible?
- You actually know better than they do.
- You know how to patch their reasoning.
- They will be receptive to said patching.
- They will actually change their behavior if they accept the patch.
It seems like it should be rather rare for those conditions to all be true, or even to be likely enough for the expected gain to be worth the cost, and yet I feel the urge quite often. And I'm not thinking it through and deciding, I'm just feeling an urge; humans are adaptation executors, and this one seems like an adaptation. For some reason "correcting" people's reasoning was important enough in the ancestral environment to be special-cased in motivation hardware.
I could try to spin an ev-psych just-so story about tribal status, intellectual dominance hierarchies, ingroup-outgroup signaling, and whatnot, but I'm not an evolutionary psychologist, so I wouldn't actually know what I was doing, and the details don't matter anyway. What matters is that this urge seems to be hardware, and it probably has nothing to do with actual truth or your strategic concerns.
It seems to happen to everyone who has ideas. Social justice types get frustrated with people who seem unable to acknowledge their own privilege. The epistemological flamewar between atheists and theists rages continually across the internet. Tech-savvy folk get frustrated with others' total inability to explore and use Google. Some aspiring rationalists get annoyed with people who refuse to decompartmentalize or claim that something belongs to a separate magisterium.
Some of those border on being just classic blue vs green thinking, but from the outside, the rationality example isn't all that different. They all seem to be motivated mostly by "This person fails to display the complex habits of thought that I think are fashionable; I should {make fun | correct them | call them out}."
I'm now quite skeptical that my urge to correct reflects an actual opportunity to win by improving someone's thinking, given that I'd feel it whether or not I could actually help, and that it seems to be caused by something else.
The value of attempting a rationality-intervention has gone back down towards baseline, but it's not obvious that the baseline value of rationality interventions is all that low. Maybe it's a good idea, even if there is a possible bias supporting it. We can't win just by reversing our biases; reversed stupidity is not intelligence.
The best reason I can think of to correct flawed thinking is if your ability to accomplish your goals directly depends on their rationality. Maybe they are your business partner, or your spouse. Someone specific and close who you can cooperate with a lot. If this is the case, it's near the same level of urgency as correcting your own.
Another good reason (to discuss the subject at least) is that discussing your ideas with smart people is a good way to make your ideas better. I often get my dad to poke holes in my current craziness, because he is smarter and wiser than me. If this is your angle, keep in mind that if you expect someone else to correct you, it's probably not best to go in making bold claims and implicitly claiming intellectual dominance.
An OK reason is that creating more rationalists is valuable in general. This one is less good than it first appears. Do you really think your comparative advantage right now is in converting this person to your way of thinking? Is that really worth the risk of social friction and expenditure of time and mental energy? Is this the best method you can think of for creating more rationalists?
I think it is valuable to raise the sanity waterline when you can, but using methods of mass instruction like writing blog posts, administering a meetup, or launching a whole rationality movement is a lot more effective than arguing with your mom. Those options aren't for everybody of course, but if you're into waterline-manipulation, you should at least be considering strategies like them. At least consider picking a better time.
Another reason that gets brought up is that turning people around you into rationalists is instrumental in a selfish way, because it makes life easier for you. This one is suspect to me, even without the incentive to rationalize. Did you also seriously consider sabotaging people's rationality to take advantage of them? Surely that's nearly as plausible a priori. For what specific reason did your search process rank cooperation over predation?
I'm sure there are plenty of good reasons to prefer cooperation, but of course no search process was ever run. All of these reasons that come to mind when I think of why I might want to fix someone's reasoning are just post-hoc rationalizations of an automatic behavior. The true chain of cause-and-effect is observe->feel->act; no planning or thinking involved, except where it is necessary for the act. And that feeling isn't specific to rationality, it affects all mental habits, even stupid ones.
Rationality isn't just a new memetic orthodoxy for the cool kids, it's about actually winning. Every improvement requires a change. Rationalizing strategic reasons for instinctual behavior isn't change, it's spending your resources answering questions with zero value of information. Rationality isn't about what other people are doing wrong; it's about what you are doing wrong.
I used to call this practice of modeling other people's thoughts to enforce orthodoxy on them "incorrect use of empathy", but in terms of ev-psych, it may be exactly the correct use of empathy. We can call it Memetic Tribalism instead.
(I've ignored the other reason to correct people's reasoning, which is that it's fun and status-increasing. When I reflect on my reasons for writing posts like this, it turns out I do it largely for the fun and internet status points, but I try to at least be aware of that.)
Entropy, and Short Codes
Followup to: Where to Draw the Boundary?
Suppose you have a system X that's equally likely to be in any of 8 possible states:
{X1, X2, X3, X4, X5, X6, X7, X8}.
There's an extraordinarily ubiquitous quantity—in physics, mathematics, and even biology—called entropy; and the entropy of X is 3 bits. This means that, on average, we'll have to ask 3 yes-or-no questions to find out X's value. For example, someone could tell us X's value using this code:
X1: 001    X2: 010    X3: 011    X4: 100
X5: 101    X6: 110    X7: 111    X8: 000
So if I asked "Is the first symbol 1?" and heard "yes", then asked "Is the second symbol 1?" and heard "no", then asked "Is the third symbol 1?" and heard "no", I would know that X was in state 4.
Now suppose that the system Y has four possible states with the following probabilities:
Y1: 1/2 (50%)
Y2: 1/4 (25%)
Y3: 1/8 (12.5%)
Y4: 1/8 (12.5%)
Then the entropy of Y would be 1.75 bits, meaning that, on average, we can find out its value by asking 1.75 yes-or-no questions.
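Both figures are easy to check. Here is a quick sketch in Python (not part of the original post) that computes the entropy of X and Y, and confirms that a matching prefix code for Y (Y1 → "1", Y2 → "01", Y3 → "001", Y4 → "000") achieves 1.75 questions on average:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# System X: 8 equally likely states -> 3 bits.
h_x = entropy([1/8] * 8)

# System Y: the skewed distribution above -> 1.75 bits.
h_y = entropy([1/2, 1/4, 1/8, 1/8])

# Average length of the prefix code Y1->"1", Y2->"01", Y3->"001", Y4->"000":
# each state's code length weighted by its probability.
avg_len = (1/2)*1 + (1/4)*2 + (1/8)*3 + (1/8)*3

print(h_x)      # 3.0
print(h_y)      # 1.75
print(avg_len)  # 1.75
```

The average code length exactly matches the entropy here because every probability is a power of 1/2; in general, entropy is only a lower bound on the average number of questions.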
Attention control is critical for changing/increasing/altering motivation
I’ve just been reading Luke’s “Crash Course in the Neuroscience of Human Motivation.” It is a useful text, although there are a few technical errors and a few bits of outdated information (see [1], updated information about one particular quibble in [2] and [3]).
There is one significant missing piece, however, which is of critical importance for our subject matter here on LW: the effect of attention on plasticity, including the plasticity of motivation. Since I don’t see any other texts addressing it directly (certainly not from a neuroscientific perspective), let’s cover the main idea here.
Summary for impatient readers: focus of attention physically determines which synapses in your brain get stronger, and which areas of your cortex physically grow in size. The implications of this provide direct guidance for alteration of behaviors and motivational patterns. This is used for this purpose extensively: for instance, many benefits of the Cognitive-Behavioral Therapy approach rely on this mechanism.
Feed the spinoff heuristic!
Follow-up to:
Parapsychology: the control group for science
Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields
Recent renewed discussions of the parapsychology literature and Daryl Bem's recent precognition article brought to mind the "market test" of claims of precognition. Bem tells us that random undergraduate students were able to predict with 53% accuracy where an erotic image would appear in the future. If this effect were actually real, I would rerun the experiment before corporate earnings announcements, central bank interest rate changes, etc., and change the images based on the reaction of stocks and bonds to the announcements. In other words, I could easily convert "porn precognition" into "hedge fund trillionaire precognition."
If I were initially lacking in the capital to do trades, I could publish my predictions online using public key cryptography and amass an impressive track record before recruiting investors. If anti-psi prejudice was a problem, no one need know how I was making my predictions. Similar setups could exploit other effects claimed in the parapsychology literature (e.g. the remote viewing of the Scientologist-founded Stargate Project of the U.S. federal government). Those who assign a lot of credence to psi may want to actually try this, but for me this is an invitation to use parapsychology as a control group for science, and to ponder a general heuristic for crudely estimating the soundness of academic fields for outsiders.
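The verifiable-track-record idea can be made concrete. The post mentions public-key cryptography; a simple hash commitment (sketched below in Python, with a made-up prediction string) gives the same property without keys: publish the digest before the event, then reveal the prediction and nonce afterward so anyone can check it.

```python
import hashlib
import secrets

def commit(prediction: str) -> tuple[str, str]:
    """Commit to a prediction without revealing it: publish the digest now,
    reveal the prediction and nonce later so anyone can verify."""
    nonce = secrets.token_hex(16)  # random salt; prevents brute-forcing short messages
    digest = hashlib.sha256(f"{nonce}:{prediction}".encode()).hexdigest()
    return digest, nonce

def verify(digest: str, nonce: str, prediction: str) -> bool:
    """Check that a revealed prediction matches the previously published digest."""
    return hashlib.sha256(f"{nonce}:{prediction}".encode()).hexdigest() == digest

# Hypothetical usage: publish `digest` before the announcement,
# reveal `nonce` and the prediction text afterward.
digest, nonce = commit("stock Z falls after the earnings announcement")
assert verify(digest, nonce, "stock Z falls after the earnings announcement")
assert not verify(digest, nonce, "stock Z rises after the earnings announcement")
```

Timestamping the digest somewhere public (a blog post, a mailing list archive) is what makes the track record auditable later.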
One reason we trust that physicists and chemists have some understanding of their subjects is that they produce valuable technological spinoffs with concrete and measurable economic benefit. In practice, I often make use of the spinoff heuristic: If an unfamiliar field has the sort of knowledge it claims, what commercial spinoffs and concrete results ought it to be producing? Do such spinoffs exist? What are the explanations for their absence?
For psychology, I might cite systematic desensitization of specific phobias such as fear of spiders, cognitive-behavioral therapy, and military use of IQ tests (with large measurable changes in accident rates, training costs, etc). In financial economics, I would raise the hundreds of billions of dollars invested in index funds, founded in response to academic research, and their outperformance relative to managed funds. Auction theory powers tens of billions of dollars of wireless spectrum auctions, not to mention evil dollar-auction sites.
This seems like a great task for crowdsourcing: the cloud of LessWrongers has broad knowledge, and sorting real science from cargo cult science is core to being Less Wrong. So I ask you, Less Wrongers, for your examples of practical spinoffs (or suspicious absences thereof) of sometimes-denigrated fields in the comments. Macroeconomics, personality psychology, physical anthropology, education research, gene-association studies, nutrition research, wherever you have knowledge to share.
ETA: This academic claims to be trying to use the Bem methods to predict roulette wheels, and to have passed statistical significance tests on his first runs. Such claims have been made for casinos in the past, but always trailed away in failures to replicate, repeat, or make actual money. I expect the same to happen here.
Background material for reading Judgment Under Uncertainty?
After seeing it constantly referenced in the Sequences and elsewhere, I've picked up Kahneman and Tversky's book/collection of papers Judgment Under Uncertainty: Heuristics and Biases. I was wondering if anyone here who's read it or knows the subject would recommend any prefatory material so that it makes more sense/is more meaningful.
Personal background: [Personal information deleted] From that and from this site, I'm passingly familiar with e.g. the representativeness heuristic and Bayesian probability, but I've never had to use it much in any academic setting.
Any advice before I delve into it?
Exclude the supernatural? My worldview is up for grabs.
Background
I was raised in the Churches of Christ and my family is all very serious about Christianity. About 3 years ago, I started to ask some hard questions, and the answers from other Christians were very unsatisfying. I used to believe that the Bible was, you know, inspired by a loving God, but its endorsement of genocide, the abuse of slaves, and the mistreatment of women and children really started to bother me.
I set out to study these issues as much as I could. I stayed up past midnight for weeks reading what Christians have to say, and this process triggered a real crisis of faith. What started out as a search for answers on Biblical genocide led me to places like commonsenseatheism.com. I learned that the Bible has serious credibility problems on lots of issues that no one ever told me about. Wow.
My Question
Now I'm pretty sure that the God of the Bible is man-made and Jesus of Nazareth was probably a failed prophet, but I don't have good reasons to reject the supernatural all together. I'm working through the sequences, but this process is slow. I will probably struggle with this question for months, maybe longer.
Excluding the Supernatural was interesting, but it left me wanting a more thorough explanation. Where do you think I should go from here? Should I just continue reading the sequences, and re-read them until the ideas gel? I'm coming from 30 years of Sunday School level thinking. It's not like I grew up with words like "epistemology" and "epiphenomenalism". If there is no supernatural, and I can be confident about that, I will need to re-evaluate a lot of things. My worldview is up for grabs.
Reductionism reading list
- Yudkowsky, Reductionism sequence (2008)
- Ladyman et al., Every Thing Must Go: Metaphysics Naturalized (2007)
- Bickle, Philosophy and Neuroscience: A Ruthlessly Reductive Account (2003)
- Bickle, 'Real Reduction in Real Neuroscience' (2008)
- Glimcher, Foundations of Neuroeconomic Analysis (2010)
- Ney, 'Reductionism' [for Internet Encyclopedia of Philosophy] (2008)
- Hohwy & Kallestrup (eds.), Being Reduced (2008)
- Drescher, Good and Real (2006)
I can't endorse everything in all these works, but they each provide insights into understanding reduction.
What else do y'all recommend?
Metaphilosophical Mysteries
Creating Friendly AI seems to require us humans to either solve most of the outstanding problems in philosophy, or to solve meta-philosophy (i.e., what is the nature of philosophy, how do we practice it, and how should we program an AI to do it?), and to do that in an amount of time measured in decades. I'm not optimistic about our chances of success, but out of these two approaches, the latter seems slightly easier, or at least less effort has already been spent on it. This post tries to take a small step in that direction, by asking a few questions that I think are worth investigating or keeping in the back of our minds, and generally raising awareness and interest in the topic.
Model Uncertainty, Pascalian Reasoning and Utilitarianism
Related to: Confidence levels inside and outside an argument, Making your explicit reasoning trustworthy
A mode of reasoning that sometimes comes up in discussion of existential risk is the following.
Person 1: According to model A (e.g. some Fermi calculation with probabilities coming from certain reference classes), pursuing course of action X will reduce existential risk by 10^-5; existential risk has an opportunity cost of 10^25 DALYs (*), therefore model A says the expected value of pursuing course of action X is 10^20 DALYs. Since course of action X requires 10^9 dollars, the number of DALYs saved per dollar invested in course of action X is 10^11. Hence course of action X is 10^10 times as cost-effective as the most cost-effective health interventions in the developing world.
Person 2: I reject model A; I think that appropriate probabilities involved in the Fermi calculation may be much smaller than model A claims; I think that model A fails to incorporate many relevant hypotheticals which would drag the probability down still further.
Person 1: Sure, it may be that model A is totally wrong, but there's nothing obviously very wrong with it. Surely you'd assign at least a 10^-5 chance that it's on the mark? More confidence than this would seem to indicate overconfidence bias; after all, plenty of smart people believe in model A and it can't be that likely that they're all wrong. So unless you think that the side-effects of pursuing course of action X are systematically negative, even your own implicit model gives a figure of at least 10^6 DALYs saved per dollar, and that's a far better investment than any other philanthropic effort that you know of, so you should fund course of action X even if you think that model A is probably wrong.
(*) As Jonathan Graehl mentions, DALY stands for Disability-adjusted life year.
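Person 1's arithmetic can be spelled out step by step. The sketch below just reproduces the illustrative numbers from the dialogue (it is not an endorsement of model A):

```python
# Person 1's Fermi chain, using the illustrative numbers from the dialogue.
risk_reduction  = 1e-5   # reduction in existential risk from course of action X
risk_cost_dalys = 1e25   # opportunity cost of existential catastrophe, in DALYs
cost_dollars    = 1e9    # cost of course of action X

expected_dalys   = risk_reduction * risk_cost_dalys   # ~1e20 DALYs
dalys_per_dollar = expected_dalys / cost_dollars      # ~1e11 DALYs per dollar

# Person 1's second move: even a 1e-5 credence that model A is right
# still leaves an enormous implied figure.
credence_in_model_a = 1e-5
discounted = credence_in_model_a * dalys_per_dollar   # ~1e6 DALYs per dollar
```

Person 2's objection targets the first line: if the probabilities feeding `risk_reduction` are orders of magnitude smaller, the whole chain deflates with them.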