Comment author: ChrisHallquist 18 October 2013 08:33:47AM 4 points [-]

Luke asked me to look into this literature for a few hours. Here's what I found.

The original paper (Tversky and Koehler 1994) is about disjunctions, and how unpacking them raises people’s estimate of the probability. So for example, asking people to estimate the probability that someone died of “heart disease, cancer, or other natural causes” yields a higher probability estimate than if you just ask about “natural causes.”

They consider the hypothesis that this might be because people take the researcher’s apparent emphasis as evidence that it’s more likely, but they tested and disconfirmed this hypothesis by telling people to take the last digit of their phone number and estimate the percentage of couples that have that many children. The percentages sum to well over 100%.

Finally, they check whether experts are vulnerable to this bias by doing an experiment similar to the first experiment, but using physicians at Stanford University as the subjects and asking them about a hypothetical case of a woman admitted to an emergency room. They confirmed that yes, experts are vulnerable to this mistake too.

This phenomenon is known as “subadditivity.” A subsequent study (Rottenstreich and Tversky 1997) found that subadditivity can occur even with explicit disjunctions. Macchi et al. (1999) found evidence of superadditivity: ask some people how probable it is that the freezing point of alcohol is below that of gasoline, ask others how probable it is that the freezing point of gasoline is below that of alcohol, and the average answers sum to less than 1.
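To make the pattern concrete, here is a rough sketch of the support-theory formalism from the 1994 paper (the notation is mine and only approximate):

\[ P(A, B) = \frac{s(A)}{s(A) + s(B)} \]

\[ s(A) \le s(A_1 \lor A_2) \quad \text{when the implicit hypothesis } A \text{ is unpacked into explicit components } A_1 \lor A_2 \]

Here \(s(\cdot)\) is the “support” (perceived evidential strength) for a hypothesis, and \(P(A, B)\) is the judged probability of \(A\) rather than \(B\). Unpacking “natural causes” into “heart disease, cancer, or other natural causes” raises the support in the numerator, which is why the unpacked description gets the higher estimate; subadditivity and superadditivity are then the two directions in which the judged probabilities of the pieces can fail to add up to the judged probability of the whole.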

Other studies try to refine the mathematical model of how people make judgements in these kinds of cases, but the experiments I’ve described are the most striking empirical results, I think. One experiment that talks about unpacking conjunctions (rather than disjunctions, like the experiments I’ve described so far) is Van Boven and Epley (2003), particularly their first experiment, where they ask people how much an oil refinery should be punished for pollution. The pollution is described either as leading to an increase in “asthma, lung cancer, throat cancer, or all varieties of respiratory diseases,” or just as leading to an increase in “all varieties of respiratory diseases.” In the first condition, people want to punish the refinery more. But, in spite of being notably different from the previous unpacking experiments, it’s still not what Eliezer was talking about.

Below are some other messy notes I took:

http://commonsenseatheism.com/wp-content/uploads/2013/10/Fox-Tversky-A-belief-based-account-of-decision-under-uncertainty.pdf Uses support theory to develop account of decision under uncertainty.

http://commonsenseatheism.com/wp-content/uploads/2013/10/Brenner-Koehler-Subjective-probability-of-disjunctive-hypotheses-local-weight-models-for-decomposition-and-evidential-support.pdf Something about local weights; didn't look at this one much.

http://commonsenseatheism.com/wp-content/uploads/2013/10/Chen-et-al-The-relation-between-probability-and-evidence-judgment-an-extension-of-support-theory.pdf Tweaking math behind support theory to allow for superadditivity.

http://commonsenseatheism.com/wp-content/uploads/2013/10/Brenner-et-al-Modeling-patterns-of-probability-calibration-with-random-support-theory.pdf Introduces notion of random support theory.

http://bear.warrington.ufl.edu/brenner/papers/bilgin-brenner-jesp08.pdf Unpacking effects weaker when dealing with near future as opposed to far future.

Other articles debating how to explain basic support theory results: http://bcs.siu.edu/facultypages/young/JDMStuff/Sloman%20(2004)%20unpacking.pdf http://aris.ss.uci.edu/~lnarens/Submitted/problattice11.pdf http://eclectic.ss.uci.edu/~drwhite/pw/NarensNewfound.pdf

Comment author: Nick_Beckstead 29 October 2014 04:41:58PM 0 points [-]

What this shows is that people are inconsistent in a certain way. If you ask them the same question in two different ways (packed vs. unpacked) you get different answers. Is there any indication of which is the better way to ask the question, or whether asking it some other way is better still? Without an answer to this question, it's unclear to me whether we should talk about an "unpacking fallacy" or a "failure to unpack fallacy".

Comment author: Nick_Beckstead 16 October 2014 11:24:42PM 1 point [-]

I have audiobook recommendations here.

Comment author: lukeprog 24 February 2014 04:32:02PM 1 point [-]

When I was just starting out in September 2013, I realized that vanishingly few of the books I wanted to read were available as audiobooks, so it didn't make sense for me to search Audible for titles I wanted to read: the answer was basically always "no." So instead I browsed through the top 2000 best-selling unabridged non-fiction audiobooks on Audible, added a bunch of stuff to my wishlist, and then scrolled through the wishlist later and purchased the ones I most wanted to listen to.

These days, I have a better sense of what kind of books have a good chance of being recorded as audiobooks, so I sometimes do search for specific titles on Audible.

Some books that I really wanted to listen to are available as ebooks but not audiobooks, so I used this process to turn them into audiobooks. That only barely works, and only sometimes. I have to play text-to-speech audiobooks at a lower speed to understand them, and it's harder for my brain to stay engaged as I'm listening, especially when I'm tired. I might give up on that process; I'm not sure.

Most but not all of the books are selected because I expect them to have lots of case studies in "how the world works," specifically with regard to policy-making, power relations, scientific research, and technological development. This is definitely true for e.g. Command and Control, The Quest, Wired for War, Life at the Speed of Light, Enemies, The Making of the Atomic Bomb, Chaos, Legacy of Ashes, Coal, The Secret Sentry, Dirty Wars, The Way of the Knife, The Big Short, Worst-Case Scenarios, The Information, and The Idea Factory.

Comment author: Nick_Beckstead 25 February 2014 09:47:38AM 0 points [-]

Thanks!

Comment author: lukeprog 31 October 2013 11:10:57PM *  8 points [-]

Okay. In this comment I'll keep an updated list of audiobooks I've heard since Sept. 2013, for those who are interested. All audiobooks are available via iTunes/Audible unless otherwise noted.

Outstanding:
* Tetlock, Expert Political Judgment
* Pinker, The Better Angels of Our Nature (my clips)
* Schlosser, Command and Control (my clips)
* Yergin, The Quest (my clips)
* Osnos, Age of Ambition (my clips)

Worthwhile if you care about the subject matter:
* Singer, Wired for War (my clips)
* Feinstein, The Shadow World (my clips)
* Venter, Life at the Speed of Light (my clips)
* Rhodes, Arsenals of Folly (my clips)
* Weiner, Enemies: A History of the FBI (my clips)
* Rhodes, The Making of the Atomic Bomb (available here) (my clips)
* Gleick, Chaos (my clips)
* Wiener, Legacy of Ashes: The History of the CIA (my clips)
* Freese, Coal: A Human History (my clips)
* Aid, The Secret Sentry (my clips)
* Scahill, Dirty Wars (my clips)
* Patterson, Dark Pools (my clips)
* Lieberman, The Story of the Human Body
* Pentland, Social Physics (my clips)
* Okasha, Philosophy of Science: VSI
* Mazzetti, The Way of the Knife (my clips)
* Ferguson, The Ascent of Money (my clips)
* Lewis, The Big Short (my clips)
* de Mesquita & Smith, The Dictator's Handbook (my clips)
* Sunstein, Worst-Case Scenarios (available here) (my clips)
* Johnson, Where Good Ideas Come From (my clips)
* Harford, The Undercover Economist Strikes Back (my clips)
* Caplan, The Myth of the Rational Voter (my clips)
* Hawkins & Blakeslee, On Intelligence
* Gleick, The Information (my clips)
* Gleick, Isaac Newton
* Greene, Moral Tribes
* Feynman, Surely You're Joking, Mr. Feynman! (my clips)
* Sabin, The Bet (my clips)
* Watts, Everything Is Obvious: Once You Know the Answer (my clips)
* Greenblatt, The Swerve: How the World Became Modern (my clips)
* Cain, Quiet: The Power of Introverts in a World That Can't Stop Talking
* Dennett, Freedom Evolves
* Kaufman, The First 20 Hours
* Gertner, The Idea Factory (my clips)
* Olen, Pound Foolish
* McArdle, The Up Side of Down
* Rhodes, Twilight of the Bombs (my clips)
* Isaacson, Steve Jobs (my clips)
* Priest & Arkin, Top Secret America (my clips)
* Ayres, Super Crunchers (my clips)
* Lewis, Flash Boys (my clips)
* Dartnell, The Knowledge (my clips)
* Cowen, The Great Stagnation
* Lewis, The New New Thing (my clips)
* McCray, The Visioneers (my clips)
* Jackall, Moral Mazes (my clips)
* Langewiesche, The Atomic Bazaar
* Ariely, The Honest Truth about Dishonesty (my clips)

Comment author: Nick_Beckstead 24 February 2014 02:10:07PM 0 points [-]

Could you say a bit about your audiobook selection process?

Comment author: lukeprog 06 January 2014 09:37:35PM 0 points [-]

Some empirical discussion of this issue can be found in Hochschild (2012) and the book it discusses, Zaller (1992).

Comment author: Nick_Beckstead 06 January 2014 11:27:54PM 0 points [-]

I'd say Hochschild's stuff isn't that empirical. As far as I can tell, she just gives examples of cases where (she thinks) people do follow elite opinion and should, don't follow it but should, do follow it but shouldn't, and don't follow it and shouldn't. There's nothing systematic about it.

Hochschild's own answer to my question is:

When should citizens reject elite opinion leadership? In principle, the answer is easy: the mass public should join the elite consensus when leaders’ assertions are empirically supported and morally justified. Conversely, the public should not fall in line when leaders’ assertions are either empirically unsupported, or morally unjustified, or both. (p. 536)

This view seems to be the intellectual cousin of the view that we should just believe what is supported by good epistemic standards, regardless of what others think. (These days, philosophers are calling this a "steadfast" (as contrasted with "conciliatory") view of disagreement.) I didn't talk about this kind of view, largely because I find it very unhelpful.

I haven't looked at Zaller yet but it appears to mostly be about when people do (rather than should) follow elite opinion. It sounds pretty interesting though.

Comment author: MichaelVassar 10 December 2013 05:30:32PM 10 points [-]

I spent many hours explaining a subset of these criticisms to you in Dolores Park soon after we first met, but it strongly seemed to me that that time was wasted. I appreciate that you want to be lawful in your approach to reason, and thus to engage with disagreement, but my impression was that you do not actually engage with disagreement; you merely want to engage with disagreement. Basically, I felt that you believe in your belief in rational inquiry, but that you don't actually believe in rational inquiry.

I may, of course, be wrong, and I'm not sure how people should respond in such a situation. It strongly seems to me that (a) leftist movements tend to collapse in schism, and (b) rightist movements tend to converge on generic xenophobic authoritarianism regardless of their associated theory. I'd rather we avoid both of those situations, but the first seems like an inevitable result of not accommodating belief in belief, while the second seems like an inevitable result of accommodating it. My instinct is that the best option is to not accommodate belief in belief and to keep a movement small enough that schism can be avoided. The worst thing for an epistemic standard is not the person who ignores or denies it, but the person who tries to mostly follow it when doing so feels right or is convenient while not acknowledging that they aren't following it when it feels weird or inconvenient, as that leads to a community of people with such standards engaging in double-think WRT whether their standards call for weird or inconvenient behavior. OTOH, my best guess is that about 50 people is as far as you can get with my proposed approach.

Comment author: Nick_Beckstead 12 December 2013 11:42:49AM *  6 points [-]

What I mostly remember from that conversation was disagreeing about the likely consequences of "actually trying". You thought elite people in the EA cluster who actually tried had high probability of much more extreme achievements than I did. I see how that fits into this post, but I didn't know you had loads of other criticism about EA, and I probably would have had a pretty different conversation with you if I did.

Fair enough regarding how you want to spend your time. I think you're mistaken about how open I am to changing my mind about things in the face of arguments, and I hope that you reconsider. I believe that if you consulted with people you trust who know me much better than you, you'd find they have different opinions about me than you do. There are multiple cases where detailed engagement with criticism has substantially changed my operations.

Comment author: MichaelVassar 10 December 2013 05:10:24PM 2 points [-]

I think that people following the standards that seem credible to them upon reflection is the best you can hope for. Ideally, upon reflection, bets and experiments will be part of those standards for at least some people. Hopefully, some such groups will congeal into effective trade networks. If one usually reliable algorithm disagrees strongly with others, yes, in the short term you should probably effectively ignore it, but that can be done via squaring assigned probabilities, taking harmonic or geometric means, etc., not by dropping it, and more importantly, such deviations should be investigated with some urgency.
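To illustrate what I mean by combining rather than dropping (just a sketch; the numbers and weights are made up, and geometric-mean pooling is only one of several reasonable choices):

```python
import numpy as np

def pool_probabilities(probs, weights=None):
    """Weighted geometric-mean (logarithmic) pooling of probability estimates.
    Down-weighting a dissenting estimator, rather than dropping it, just means
    giving it a smaller weight."""
    probs = np.asarray(probs, dtype=float)
    weights = np.ones_like(probs) if weights is None else np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    yes = np.prod(probs ** weights)          # pooled support for the event
    no = np.prod((1.0 - probs) ** weights)   # pooled support against it
    return yes / (yes + no)                  # renormalize to a probability

# Three estimators roughly agree; one usually-reliable estimator strongly disagrees.
estimates = [0.85, 0.80, 0.90, 0.10]
print(pool_probabilities(estimates))                          # dissenter at full weight
print(pool_probabilities(estimates, weights=[1, 1, 1, 0.5]))  # dissenter down-weighted, not dropped
```

Even in the down-weighted version the dissenting estimate still pulls the pooled answer noticeably, which is the point: the disagreement stays visible and flags itself for investigation.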

Comment author: Nick_Beckstead 12 December 2013 11:36:07AM 1 point [-]

If one usually reliable algorithm disagrees strongly with others, yes, in the short term you should probably effectively ignore it, but that can be done via squaring assigned probabilities, taking harmonic or geometric means, etc., not by dropping it, and more importantly, such deviations should be investigated with some urgency.

I think we agree about this much more than we disagree. After writing this post, I had a conversation with Anna Salamon in which she suggested that--as you suggest--exploring such disagreements with some urgency was probably more important than getting the short-term decision right. I agree with this and I'm thinking about how to live up to that agreement more.

Regarding the rest of it, I did say "or give less weight to them".

I think that people following the standards that seem credible to them upon reflection is the best you can hope for. Ideally, upon reflection, bets and experiments will be part of those standards for at least some people.

Thanks for answering the main question.

I and at least one other person I highly trust have gotten a lot of mileage out of paying a lot of attention to cues like "Person X wouldn't go for this" and "That cluster of people that seems good really wouldn't go for this", and trying to think through why, and putting weight on those other approaches to the problem. I think other people do this too. If that counts as "following the standards that seem credible to me upon reflection", maybe we don't disagree too much. If it doesn't, I'd say it's a substantial disagreement.

Comment author: benkuhn 02 December 2013 06:39:18PM 5 points [-]

The main thing that I personally think we don't need as much of is donations to object-level charities (e.g. GiveWell's top picks). It's unclear to me how much this can be funged into more self-reflection for the general person, but for instance I am sacrificing potential donations right now in order to write this post and respond to criticism...

I think in general, a case that "X is bad so we need more of fixing X" without specific recommendations can also be useful in that it leaves the resource allocation up to individual people. For instance, you decided that your current plans are better than spending more time on social-movement introspection, but (hopefully) not everyone who reads this post will come to the same conclusion.

I think "writing blogposts criticizing mistakes that people in the EA community commonly make" is a moderate strawman of what I'd actually like to see, in that it gets us closer to being a successful movement but clearly won't be sufficient on its own.

Why do you think basic fact-finding would be particularly helpful? Seems to me that if we can't come to nontrivial conclusions already, the kind of facts we're likely to find won't help very much.

Comment author: Nick_Beckstead 02 December 2013 08:36:40PM 1 point [-]

The main thing that I personally think we don't need as much of is donations to object-level charities (e.g. GiveWell's top picks). It's unclear to me how much this can be funged into more self-reflection for the general person, but for instance I am sacrificing potential donations right now in order to write this post and respond to criticism...

I am substantially less enthusiastic about donations to object-level charities (for their own sake) than I am for opportunities for us to learn and expand our influence. So I'm pretty on board here.

I think "writing blogposts criticizing mistakes that people in the EA community commonly make" is a moderate strawman of what I'd actually like to see, in that it gets us closer to being a successful movement but clearly won't be sufficient on its own.

That was my first pass at how I'd try to start to try to increase the "self-awareness" of the movement. I would be interested in hearing more specifics about what you'd like to see happen.

Why do you think basic fact-finding would be particularly helpful? Seems to me that if we can't come to nontrivial conclusions already, the kind of facts we're likely to find won't help very much.

A few reasons. One is that the model for research having an impact is: you do research --> you find valuable information --> people recognize your valuable information --> people act differently. I have become increasingly pessimistic about people's ability to recognize good research on issues like population ethics. But I believe people can recognize good research on stuff like shallow cause overviews.

Another consideration is our learning and development. I think the above consideration applies to us, not just to other people. If it's easier for us to tell if we're making progress, we'll learn how to learn about these issues more quickly.

I believe that a lot of the more theoretical stuff needs to happen at some point. There can be a reasonable division of labor, but I think many of us would be better off loading up on the theoretical side after we had a stronger command of the basics. By "the basics" I mean stuff like "who is working on synthetic biology?" in contrast with stuff like "what's the right theory of population ethics?".

You might have a look at this conversation I had with Holden Karnofsky, Paul Christiano, Rob Wiblin, and Carl Shulman. I agree with a lot of what Holden says.

Comment author: Nick_Beckstead 02 December 2013 06:02:19PM *  13 points [-]

I'd like to see more critical discussion of effective altruism of the type in this post. I particularly enjoyed the idea of "pretending to actually try." People doing sloppy thinking and then making up EA-sounding justifications for their actions is a big issue.

As Will MacAskill said in a Facebook comment, I do think that a lot of smart people in the EA movement are aware of the issues you're bringing up and have chosen to focus on other things. Big picture, I find claims like "your thing has problem X so you need to spend more resources on fixing X" more compelling when you point to things we've been spending time on and say that we should have done less of those things and more of the thing you think we should have been doing. E.g., I currently spend a lot of my time on research, advocacy, and trying to help improve 80,000 Hours, and I'd be pretty hesitant to switch to writing blogposts criticizing mistakes that people in the EA community commonly make, though I've considered doing so and agree this would help address some of the issues you've identified. But I would welcome more of that kind of thing.

I disagree with your perspective that the effective altruism movement has underinvested in research into population ethics. I wrote a PhD thesis which heavily featured population ethics and aimed at drawing out big-picture takeaways for issues like existential risk. I wouldn't say I settled all the issues, but I think we'd make more progress as a movement if we did less philosophy and more basic fact-finding of the kind that goes into GiveWell shallow cause overviews.

Disclosure: I am a Trustee for the Centre for Effective Altruism and I formerly worked at GiveWell as a summer research analyst.

Comment author: MichaelVassar 02 December 2013 04:39:29PM 12 points [-]

This is MUCH better than I expected from the title. I strongly agree with essentially the entire post, and many of my qualms about EA are the result of my bringing these points up with, e.g., Nick Beckstead and not seeing them addressed or even acknowledged.

Comment author: Nick_Beckstead 02 December 2013 04:49:24PM *  12 points [-]

I would love to hear about your qualms with the EA movement if you ever want to have a conversation about the issue.

Edited: When I first read this, I thought you were saying you hadn't brought these problems up with me, but re-reading it, it sounds like you tried to raise these criticisms with me. This post has a Vassar-y feel to it, but it is mostly criticism I wouldn't say I'd heard from you, and I would have guessed your criticisms would be different. In any case, I would still be interested in hearing more from you about your criticisms of EA.
