Nick_Beckstead

What this shows is that people are inconsistent in a certain way. If you ask them the same question in two different ways (packed vs. unpacked), you get different answers. Is there any indication of which is the better way to ask the question, or whether asking it some other way is better still? Without an answer to this question, it's unclear to me whether we should talk about an "unpacking fallacy" or a "failure to unpack fallacy".

I have audiobook recommendations here.

Could you say a bit about your audiobook selection process?

I'd say Hochschild's stuff isn't that empirical. As far as I can tell, she just gives examples of cases where (she thinks) people do follow elite opinion and should, don't follow it but should, do follow it but shouldn't, and don't follow it and shouldn't. There's nothing systematic about it.

Hochschild's own answer to my question is:

When should citizens reject elite opinion leadership? In principle, the answer is easy: the mass public should join the elite consensus when leaders’ assertions are empirically supported and morally justified. Conversely, the public should not fall in line when leaders’ assertions are either empirically unsupported, or morally unjustified, or both. (p. 536)

This view seems to be the intellectual cousin of the view that we should just believe what is supported by good epistemic standards, regardless of what others think. (These days, philosophers are calling this a "steadfast" (as contrasted with "conciliatory") view of disagreement.) I didn't talk about this kind of view, largely because I find it very unhelpful.

I haven't looked at Zaller yet, but it appears to be mostly about when people do (rather than should) follow elite opinion. It sounds pretty interesting, though.

What I mostly remember from that conversation was disagreeing about the likely consequences of "actually trying". You thought that elite people in the EA cluster who actually tried had a much higher probability of extreme achievements than I thought they did. I see how that fits into this post, but I didn't know you had loads of other criticisms of EA, and I probably would have had a pretty different conversation with you if I had.

Fair enough regarding how you want to spend your time. I think you're mistaken about how open I am to changing my mind in the face of arguments, and I hope that you reconsider. I believe that if you consulted people you trust who know me much better than you do, you'd find they have different opinions of me than you do. There are multiple cases where detailed engagement with criticism has substantially changed how I operate.

If one usually reliable algorithm disagrees strongly with others, yes, short term you should probably effectively ignore it, but that can be done via squaring assigned probabilities, taking harmonic or geometric means, etc, not by dropping it, and more importantly, such deviations should be investigated with some urgency.

I think we agree about this much more than we disagree. After writing this post, I had a conversation with Anna Salamon in which she suggested, as you do, that exploring such disagreements with some urgency was probably more important than getting the short-term decision right. I agree with this, and I'm thinking about how to live up to that agreement more.

Regarding the rest of it, I did say "or give less weight to them".
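(For concreteness, here is a minimal sketch of one of the pooling rules the quote above mentions: a normalized geometric mean of the probability estimates, which works out to averaging their log-odds. The function name and the numbers are purely illustrative, not anything proposed in this thread; it's just meant to show how a strongly disagreeing estimate can be damped without being dropped.)

```python
import math

def geometric_pool(probs):
    """Pool probability estimates via a normalized geometric mean.

    Equivalent to averaging the estimates' log-odds: a strongly
    disagreeing estimate is damped, but never dropped outright.
    """
    n = len(probs)
    g_yes = math.prod(probs) ** (1 / n)                 # geometric mean of "yes" probabilities
    g_no = math.prod(1 - p for p in probs) ** (1 / n)   # geometric mean of "no" probabilities
    return g_yes / (g_yes + g_no)                       # renormalize to a valid probability

# Three usually reliable estimators agree; one disagrees strongly.
estimates = [0.90, 0.88, 0.92, 0.10]
print(geometric_pool(estimates))         # ~0.75: outlier damped, not ignored
print(sum(estimates) / len(estimates))   # 0.70: arithmetic mean, for comparison
```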

I think that people following the standards that seem credible to them upon reflection is the best you can hope for. Ideally, upon reflection, bets and experiments will be part of those standards for at least some people.

Thanks for answering the main question.

I and at least one other person I highly trust have gotten a lot of mileage out of paying a lot of attention to cues like "Person X wouldn't go for this" and "That cluster of people that seems good really wouldn't go for this", trying to think through why, and putting weight on those other approaches to the problem. I think other people do this too. If that counts as "following the standards that seem credible to me upon reflection", maybe we don't disagree too much. If it doesn't, I'd say it's a substantial disagreement.

The main thing that I personally think we don't need as much of is donations to object-level charities (e.g. GiveWell's top picks). It's unclear to me how much this can be funged into more self-reflection for the general person, but for instance I am sacrificing potential donations right now in order to write this post and respond to criticism...

I am substantially less enthusiastic about donations to object-level charities (for their own sake) than I am about opportunities for us to learn and expand our influence. So I'm pretty on board here.

I think "writing blogposts criticizing mistakes that people in the EA community commonly make" is a moderate strawman of what I'd actually like to see, in that it gets us closer to being a successful movement but clearly won't be sufficient on its own.

That was my first pass at how I'd start trying to increase the "self-awareness" of the movement. I would be interested in hearing more specifics about what you'd like to see happen.

Why do you think basic fact-finding would be particularly helpful? Seems to me that if we can't come to nontrivial conclusions already, the kind of facts we're likely to find won't help very much.

A few reasons. One is that the model for research having an impact is: you do research --> you find valuable information --> people recognize your valuable information --> people act differently. I have become increasingly pessimistic about people's ability to recognize good research on issues like population ethics. But I believe people can recognize good research on stuff like shallow cause overviews.

Another consideration is our learning and development. I think the above consideration applies to us, not just to other people. If it's easier for us to tell if we're making progress, we'll learn how to learn about these issues more quickly.

I believe that a lot of the more theoretical stuff needs to happen at some point. There can be a reasonable division of labor, but I think many of us would be better off loading up on the theoretical side after we had a stronger command of the basics. By "the basics" I mean stuff like "who is working on synthetic biology?" in contrast with stuff like "what's the right theory of population ethics?".

You might have a look at this conversation I had with Holden Karnofsky, Paul Christiano, Rob Wiblin, and Carl Shulman. I agree with a lot of what Holden says.

I'd like to see more critical discussion of effective altruism of the type in this post. I particularly enjoyed the idea of "pretending to actually try." People doing sloppy thinking and then making up EA-sounding justifications for their actions is a big issue.

As Will MacAskill said in a Facebook comment, I do think that a lot of smart people in the EA movement are aware of the issues you're bringing up and have chosen to focus on other things. Big picture, I find claims like "your thing has problem X, so you need to spend more resources on fixing X" more compelling when you point to things we've been spending time on and say that we should have done less of those things and more of the thing you think we should have been doing. E.g., I currently spend a lot of my time on research, advocacy, and trying to help improve 80,000 Hours, and I'd be pretty hesitant to switch to writing blogposts criticizing mistakes that people in the EA community commonly make, though I've considered doing so and agree it would help address some of the issues you've identified. But I would welcome more of that kind of thing.

I disagree with your perspective that the effective altruism movement has underinvested in research into population ethics. I wrote a PhD thesis which heavily featured population ethics and aimed at drawing out big-picture takeaways for issues like existential risk. I wouldn't say I settled all the issues, but I think we'd make more progress as a movement if we did less philosophy and more basic fact-finding of the kind that goes into GiveWell's shallow cause overviews.

Disclosure: I am a Trustee for the Centre for Effective Altruism and I formerly worked at GiveWell as a summer research analyst.

I would love to hear about your qualms with the EA movement if you ever want to have a conversation about the issue.

Edited: When I first read this, I thought you were saying you hadn't brought these problems up with me, but re-reading it, it sounds like you did try to raise these criticisms with me. This post has a Vassar-y feel to it, but most of this criticism isn't something I'd say I'd heard from you, and I would have guessed your criticisms would be different. In any case, I would still be interested in hearing more from you about your criticisms of EA.
