Summary: I found this post persuasive, and only noticed after the fact that I wasn't clear on exactly what it had persuaded me of. I think it may do some of the very things it is arguing against, although my epistemic status is that I have not put in enough time to analyze it for a truly charitable reading. 

Disclaimer: I don't have the time (or energy) to put in the amount of thought I'd want to put in before writing this comment. Nevertheless, my model of Duncan wants me to write it, so I'm posting it anyway. Feel free to ignore it if it's wrong, useless, or confusing, and I'm sorry if it's offensive or poorly thought out!

Object-level: I quite liked your two most recent posts, on Concentration of Force and Stag Hunts. I liked them enough that I almost sent them to someone else saying "here's a good thing you should read!" It wasn't until I read the comment below by SupposedlyFun that I realized something slightly odd was going on that I hadn't noticed. I really should have felt some twinge in the back of my mind on my own, but it took someone else pointing it out for me to catch it. I think you might be guilty of the very thing you're complaining about in this second essay. I'm not entirely sure. But if I'm right about what this second post is about, you'd want me to write this comment.

Of course, this is tricky because I'm not sure I can accurately summarize what this second post is about. The first post was very clear: I can give a one-sentence summary that I'd put >80% odds you'd endorse as accurate (you can win local battles by strategically outnumbering someone locally without outnumbering them in the war more generally, and since such victories can snowball in social realms, we should be careful to notice and leverage this where possible, as it's a more effective use of force). Whereas I'd put <30% odds that you'd endorse any summary I could attempt of this second post. In your defense, I didn't click through to all of the comments in the other three posts that you give as examples of things going wrong, and I didn't read through the entire post a second time; both would be needed for a truly charitable reading. On the other hand, I committed a decent amount of time to reading both essays all the way through, and anything beyond that seems like a slightly unreasonable standard of effort for understanding your core claim.

I have something like the vague understanding that you think LW is doing something bad, that you want less of it, and that you want more of something better. Maybe you merely want more Rationality and I'm not missing anything, but I think you're trying to make a narrower point, and I'm legitimately not sure what it is. I get that you think the recent Leverage drama is not a good example of Rationality, but without following a number of the linked comments, I can't say exactly what you think went wrong. I have my own views on this from having followed the Leverage drama, but I don't think following it should be a prerequisite to understanding the claims in your post.

Your comment below provides some additional nuance by giving this example: "I have direct knowledge of at least three people who would like to say positive things about their experience at Leverage Research, but feel they cannot." Maybe the issue is that you merely need to provide more examples? But that feels like a surface-level fix to a deeper problem, even though I'm not sure what the deeper problem is. All I can say is that I left the post with an emotion (of agreement), not a series of claims I feel I can evaluate, whereas your other posts read more like a series of claims I can analyze and agree or disagree with. What's particularly interesting is that I read the essay through, thought "Yeah Duncan woo this is great, +1", and didn't even notice that I didn't know precisely the narrow thing you're arguing for until I read SupposedlyFun's comment saying the same. This suggests you might be doing the very thing (I think) you're arguing against: using rhetoric and well-written prose to convince me of something without my even knowing exactly what you've convinced me of. That the outgroup is bad (boo!), that the warriors for rationality are getting outnumbered (yikes!), and that we should rally to fix it (huzzah!).

I'm not entirely sure. My thinking around this post isn't clear enough to know precisely what I'm objecting to, but I'm noticing a vague sense of confusion, and I'm hoping that pointing it out is helpful. I do think that putting public thinking on this topic out there is good in general, and meta-discussion about what went wrong with the Leverage conversation seems sorely needed, so I'm glad you're starting that conversation (despite my comments above).