I'm Screwtape, also known as Skyler. I'm an aspiring rationalist originally introduced to the community through HPMoR, and I stayed around because the writers here kept improving how I thought. I'm fond of the Rationality As A Martial Art metaphor, new mental tools to make my life better, and meeting people who are strange in ways I find familiar and comfortable. If you're ever in the Boston area, feel free to say hi.
Starting early in 2023, I'm the ACX Meetups Czar. You might also know me from the New York City Rationalist Megameetup, editing the Animorphs: The Reckoning podfic, or being that guy at meetups with a bright bandanna who gets really excited when people bring up indie tabletop roleplaying games.
I recognize that last description might fit more than one person.
Feels weird to correct-vote and also give a "missed the point" react.
I'm aware of Tsiolkovsky's harsh equation, and I totally am treating my delta-v here as magical momentum applied from some kind of platonic spherical-cow engine. I stand by my metaphor anyway. If you handwave the mass of the fuel, I think my numbers are roughly right. If you don't handwave the fuel, then all my numbers are wrong, yes, in a way that makes Pluto take more effort, and handling the implications of how much more effort becomes closer to an engineering problem (e.g. "do multiple launches to bring fuel up before leaving LEO").
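For the curious, the equation in question is $$\Delta v = v_e \ln\frac{m_0}{m_f}$$ where $v_e$ is the exhaust velocity, $m_0$ is the ship's mass with fuel, and $m_f$ is its mass without. Because the fuel sits inside that logarithm, the fuel required grows exponentially with the delta-v you want, which is exactly the part I'm handwaving.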
Once we're treating it as an engineering problem, if you've already gone to the moon I think you've already solved the hardest engineering problems involved in going to Pluto.
I'm worried I'm unintentionally creating a motte-and-bailey in the comments, so I'm going to try and split a few things out. Here are four things, all of which I believe are true, but which people could reasonably agree or disagree with separately.
I think 4 is true, but I'm on board with LessWrong in particular being a place where we don't give up units of honesty in order to get units of grace, no matter the exchange rate. I'm less cheerful about existing in spaces where we give up 1.
Note: I'm not trying to say every single sentence is close to free to say in the other language, or to say gracefully. There are specific German words like Schadenfreude that are harder to communicate in English, and it takes effort to gracefully fire someone for non-performance.
Lightly agree, with a little caveat. I don't want to make too strong a claim in the other direction, but there's a healthy version of this that's surprisingly close to Cognitive Behavioral Therapy. It's possible to be poorly calibrated in the opposite direction from the one I'm talking about in this essay, where you don't say perfectly fine things because you incorrectly assume people will be mad.
--"If I say this, everyone will hate me. If I say that, everyone will hate me. If I don't say anything, everyone will think I'm weird and hate me."
"Can you tell me what that would look like?"
"They'll ignore me, or give short answers, or move away."
"Are there any other possible explanations for that behavior?"
"I mean, I guess maybe they saw a friend."
"At the last gathering you were at, when you talked, did they do those behaviors?"
"No. I guess some people smiled. But maybe they were just faking?"
"At the next gathering, if you say those things, do you predict people will smile or move away from you? Would you like to put odds on those options?"
I think the difference is that Heroic Responsibility doesn't mean taking every problem onto your shoulders; it means taking responsibility for every part of the problems you have already taken onto your shoulders.
A business manager takes Heroic Responsibility for their business, but not the whole world. You can decide to take Heroic Responsibility for the whole world, and that looks a lot like the EA playbook in many ways. At least in my interpretation, Heroic Responsibility often involves crossing departments to make sure your problem gets solved, but it doesn't automatically make everyone else's problems your problems.
I'm not saying the process of becoming skilled is free, and if I've accidentally communicated that, it's a mistake I'd like to fix. It would take me a lot of time and effort to learn to speak Chinese fluently.
I am trying to say that once I've paid the cost of becoming skilled at social grace, using this particular skill costs much less than people seem to think. If I were already fluent in Chinese (not just barely fluent, but really good at it and comfortable) then saying a sentence in Chinese is free - or rather, it doesn't cost meaningfully more brain power or breath than saying that sentence in English.
This analogy will only work if you're at least somewhat a programmer. I'm going to go with the odds around here and try it anyway; let me know if it misses and I should try something else.
If you see someone about to check in code that's doing a bubble sort, they'd have to think through how to implement something different, such as block sort, to get the same amount of sorting done in less time.
Except, if you wind up sorting a lot of things, you could practice implementing block sort. Once you have that up-front practice, properly implementing bubble sort isn't meaningfully easier than properly implementing block sort.
There are cases in programming where there really is a tradeoff! There's a real reason to do merge sort instead of heap sort sometimes, usually because stability is important. The sentence you quote isn't "Every time it seems to me like there's free grace for the same amount of honesty." I am very confident I've seen people do the social equivalent of bogo sort. Heck, I've done it myself. And then I got better at the skill.
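To make the analogy concrete, here's a toy sketch in Python (my own made-up example; the built-in sort stands in for the practiced skill):

```python
# The naive approach is easy to reach for the first time you need a sort:
def bubble_sort(items):
    """O(n^2) comparison sort: repeatedly swap adjacent out-of-order pairs."""
    items = list(items)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

# Once you've paid the up-front cost of learning the standard library,
# the better sort costs no more effort to use than the worse one:
def practiced_sort(items):
    """O(n log n), via Python's built-in Timsort."""
    return sorted(items)

assert bubble_sort([3, 1, 2]) == practiced_sort([3, 1, 2]) == [1, 2, 3]
```

The point isn't the sorting; it's that once the good option is in your fingers, it doesn't cost more keystrokes than the bad one.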
I think it would be fair if someone bundled "learning to phrase things in a socially graceful way" together with "learning to spend the social capital when it's correct to do so" when accounting for the cost of learning. But I think this actually points at a stronger case for my argument, which is that the exchange rates can get really, really lopsided. If you don't know how much you're spending, your budget can get pretty bad and you can make poor choices.
If I imagine an architect who has no idea what steel or drywall costs, I expect them to make unsustainably and needlessly expensive buildings. They might buy one material for a thousand dollars a square foot when another that costs ten dollars per square foot would do almost as well. And that's in the case where they're actually at the Pareto frontier, which I'm not convinced most people are. Often it seems to me like there's free grace for the same amount of honesty.
I have a lot of interest in the data collection puzzle.
Object Level Questions
My latest and best writeup of the problem is in the Unofficial 2024 LessWrong Community Census, in one of its fishing expeditions. My strategy has been to ask about things that might make people more rational (e.g. going to CFAR workshops, reading The Sequences, etc.) and ask questions to test people's rationality (e.g. conjunction fallacy, units of exchange, etc.) and then check if there are any patterns.
There's always the good ol' self-report on comfort with techniques, but I've been trying to collect questions that are objective evaluations. A partial collection of my best:
Still, self-reports aren't worthless.
Meta: how do we find good questions?
I'm tempted to ask people their goals, ask who's succeeding at their goals or at common goals, and then operate as though that's a useful proxy. There are a fair number of people who say they want a well-paying job and a happy relationship, and other people who have those things. Selection effects are sneaky, though, and I don't trust my ability to sort out people who are doing well financially because of CFAR's good teachings from the people who were able to attend CFAR because they were already doing well financially.
On a meta level, I feel pretty excited about different groups that are trying to increase rationality asking each other's questions. That is, if ESPR had a question, CFAR had another question, and the Guild of the Rose had a third question, I think it'd be great if each of them asked their attendees all three questions. Even better, in my view, to add a few organizations that are adjacent but not really aiming at that goal; ACX Everywhere or Manifold, for instance. Those would be control groups. The different organizations are doing different things, and if ESPR starts doing better on the evaluation questions than the Guild of the Rose, then maybe the Guild starts borrowing more from ESPR's approach. If ACX Everywhere attendees have better calibration than Metaculus, then we notice we're confused. I've been doing this for the ULWC Census already, and I'd be interested in adding it to after-event surveys.
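To illustrate the kind of comparison I have in mind, here's a rough sketch with made-up group names and toy numbers; a Brier score is one simple way to compare calibration across groups (lower is better):

```python
# Hypothetical sketch: score each group's calibration questions with a
# Brier score. Assumes each survey yields (stated probability, was it true)
# pairs; all the data below is invented for illustration.
def brier_score(predictions):
    """Mean squared error between stated probabilities and 0/1 outcomes."""
    return sum((p - (1.0 if outcome else 0.0)) ** 2
               for p, outcome in predictions) / len(predictions)

acx_answers = [(0.9, True), (0.6, False), (0.2, False)]        # made up
metaculus_answers = [(0.8, True), (0.4, False), (0.1, False)]  # made up

print("ACX Everywhere:", round(brier_score(acx_answers), 3))
print("Metaculus:", round(brier_score(metaculus_answers), 3))
```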
Are there one or two questions CFAR wants to ask, or has historically asked, that you'd like to add to that collection? Put another way, what are the couple of evaluation questions you think CFAR alumni will do better on relative to, say, ACX Everywhere attendees?
Take a Screwtape Point for putting your numbers down in text. You're talking a decent-sized game here, but I do think I agree with your point that Politics Is The Mindkiller ideally would have been more an invitation to get good than a prohibition on trying. I disagree that the content audit makes sense - most front-page posts on LessWrong don't contain scored object-level political forecasting because this crowd isn't that interested in politics. I think AI posts account for >60% of the front page all on their own; it'd be kind of weird to me if this crowd was more interested in politics than, like, fun math puzzles or how to dress better. Maybe I'm typical-minding?
(I did, and do, take it as a prohibition on trying when you're bad at it. Arguing about contemporary politics is like doing a backflip - totally doable, but if you just read a blog post on it, maybe watch someone else do it a couple of times, and then try it yourself, you're going to hurt yourself. Work up to it a bit.)
You stuck your neck out, so I'll stick mine out a little. Have a Fatebook tournament. I'm presently at .3/.15/.6/.3/.4 on your predictions, respectively, though since I haven't put a ton of thought into any of them but the manufacturing question, I'm probably anchoring on you.
Boston
When: December 27th, 6:30 pm
Where: Connexion, 149 Broadway, Somerville, MA
No tickets needed, RSVPs appreciated either at https://www.facebook.com/events/1188042049854248 or https://www.lesswrong.com/events/asnjsSXqKNmfKMcB9/boston-secular-solstice-2025