I'm Screwtape, also known as Skyler. I'm an aspiring rationalist originally introduced to the community through HPMoR, and I stayed around because the writers here kept improving how I thought. I'm fond of the Rationality As A Martial Art metaphor, new mental tools to make my life better, and meeting people who are strange in ways I find familiar and comfortable. If you're ever in the Boston area, feel free to say hi.
Starting early in 2023, I'm the ACX Meetups Czar. You might also know me from the New York City Rationalist Megameetup, editing the Animorphs: The Reckoning podfic, or being that guy at meetups with a bright bandanna who gets really excited when people bring up indie tabletop roleplaying games.
I recognize that last description might fit more than one person.
A question I have been asking since before starting work with Raemon on this, which is only more relevant now:
What rationality skill(s) do you think is most important? Put another way, if we could snap our fingers and teach some rationality skill as broadly as literacy currently gets taught, what skill should we pick?
(I don't think this is Raemon's angle on the project but it is kind of mine.)
I would lightly argue scope insensitivity (and calibration!) are both traditional rationalist topics. Incomplete, yes, but I think they're both well served by quantified intuitions and they'd be part of my ideal rationalist training program.
I think the Subscribe to Group button will subscribe to the LessWrong community group. In theory I cross post all the things to all the appropriate places. In practice I'd suggest this mailing list or the Discord, because those are the ones I reliably remember.
The time and place is a mistake, I just fixed it. (This is why I usually do the What Where When, as a kind of checksum when I'm cross posting something to a lot of places.) Thanks for pointing that out!
Planned drills, list might get tweaked a bit between now and the meetup date:
This is an experiment in format. Usually I'd make one of these the focus of a meetup and take our time with each example. This time around I'm going to hit all five repeatedly and at speed.
I'm one of the +4s. I would believe the liars in this post are being filtered out before they get to you, Ruby, but they aren't all getting filtered out before they get to me, and they aren't getting filtered out of the general population fast enough that the man on the street won't run into them.
I actually squint a bit at "highly incompetent" in a weird way; the thing they're doing works surprisingly well in my observation. Emphasis on "surprising"! It stinks, I hate it, and also I kinda do think it works sufficiently well in some social environments that it's positive expected value for them.
Some cruxes for me:
I am, if not alarmed, then at least treating this as a problem, but I haven't felt confusion here for at least a year. I have a pretty good model of how it happens. Someone's doing some searching on the internet, and gets recommended a LessWrong article on Boston rents, or an AI paper, or hikers going missing. Maybe a friend recommended them a fun essay on miracles or a goofy Harry Potter fanfic. They hang around, read a few more things, comment a bit. Then they see a meetup announcement, show up, and enjoy the conversation. (Very very roughly a third of LessWrong/ACX meetups are socials, with no or minimal readings or workshops.) They go to more meetups, they make more comments on the internet, maybe they make some posts of their own and their posts get upvoted. Maybe they step up and run the meetup when the previous organizer is sick or busy or moves.
At no point did someone give them a math test. I'm basically describing my arc above, and nobody asked me to solve a mammogram problem in that process.
That's how we end up in this world.
As for what the missing thing is: my theory is that to change this state of affairs, we'd need two things. We'd need to start regularly asking folks questions where they'd actually need to use it, and we'd need an explanation fast and simple enough that it can survive being taught by non-specialists who are also juggling putting snacks out and getting the door for people. I love this not for its intuitiveness, but for rearranging the numbers into a shape people can work with more easily.
I'd give much higher odds on members of the community being able to gesture at the key ideas of base rates and priors in English sentences! (Not as high as I'd like, but higher, anyway.) But that's not the same as being able to do the calculations. There's something slippery about describing a piece of math in intuitive sentences and then trying to use it as a heuristic without quite being able to run the numbers; that's what I'd like to change.
Again, I suck too! I'm running around doing a dozen things in my day-to-day life, none of which is remedial math practice. This kind of thing happens a lot, actually. Once upon a time I did some basic interviews for software developer candidates, and watched comp sci grads fail to fizzbuzz correctly.
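For anyone who hasn't run into it, the FizzBuzz interview exercise is usually stated something like "for the numbers 1 through n, say Fizz for multiples of 3, Buzz for multiples of 5, FizzBuzz for multiples of both, and the number otherwise." A minimal sketch in Python (my own illustration, not anything from those interviews):

```python
def fizzbuzz(n: int) -> list[str]:
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:        # multiple of both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))
```

The whole point of the exercise is that it's easy, which is what makes watching people fail it so instructive about the gap between credentials and drilled skills.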
My hope is that if somehow I can get a tweet or two worth of text that teaches the numbers in a way that can fit in math people already do in their daily life (multiplication between two to four numbers) and add a small battery of exercises that use it, I might be able to package that in a way local organizers not only could use but would spread. Like you say, maybe hoping for just one more Bayes explanation is not the path. To me, this one was a meaningful step simpler and easier.
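The "multiplication between two to four numbers" shape I have in mind is the odds form of Bayes: posterior odds = prior odds times likelihood ratio. A sketch using the classic mammogram numbers (1% base rate, 80% true positive rate, 9.6% false positive rate; these are the standard illustration, not figures from this thread):

```python
def posterior_probability(base_rate, true_pos_rate, false_pos_rate):
    """Odds-form Bayes: multiply prior odds by the likelihood ratio,
    then convert back to a probability."""
    prior_odds = base_rate / (1 - base_rate)       # 0.01 -> odds of 1:99
    bayes_factor = true_pos_rate / false_pos_rate  # 0.80 / 0.096, about 8.3
    posterior_odds = prior_odds * bayes_factor     # one multiplication
    return posterior_odds / (1 + posterior_odds)

# About 0.078: a positive test moves 1% up to roughly 8%, not to 80%.
print(posterior_probability(0.01, 0.80, 0.096))
```

The appeal is that the middle of the calculation really is just multiplying a couple of small numbers, which is the kind of arithmetic people already do when splitting a bill.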
I guess I'll note as well that I want to raise the sanity waterline. To do that, I can't work with a version that requires above-average intelligence. I do genuinely want to figure out how to teach Bayes to fourth graders and then go out and teach some fourth graders. C'mon, don't you want to see what people turn out like if they have access to a better mental toolkit from a young age?
Also,
it's not a particularly complicated piece of math ... even if you don't remember the exact formula, it should be very easy to rederive within a few minutes from first principles once you understand the core idea.
I think you might be having an xkcd feldspar moment.
Please no set notation! The arrow brackets are on thin ice I think.
I meant what I said above: I think there's something really good about having a Bayes explanation that requires no symbols not on a standard keyboard and no math an on-track fourth grader wouldn't know.
(And also, thank you both for improving this! I recognize you two are the ones in the arena at the moment and I wish I was able to help refine this more.)
I joke, but "Thresholding is a Sazen" sure is a sentence I'd call at least 20% correct.
I've had this on my to-review log all review season, and I guess I'm getting this in mere hours before it closes.
What does this post add to the conversation?
The most important piece I think this adds is that the problem is not simple.
I keep seeing people run into one of these community conflict issues, and propose simple and reasonable sounding ways to solve it. I do not currently believe the issue is impossible, but it's not as simple as some folks think when they stare at it the first time.
How did this post affect you, your thinking, and your actions?
Context: For a time, Mingyuan's role involved overseeing the global Astral Codex Ten community, particularly the in-person parts. When she resigned from that position, I was the one who took it up. I got to see a version of this post before it went live, shortly after I took up the role.
According to her, more than 50% of the reason she stepped down was that she had become entangled in community conflicts. When you are something like a month into a new role, your predecessor hands you a document saying a problem is impossible and that it also caused her to resign, and you already have a fresh instance of the problem in your inbox? That tends to affect you, your thinking, and your actions.
Does it make accurate claims? Does it carve reality at the joints? How do you know? / Is there a subclaim of this post that you can test?
Well, I haven't stepped down because of becoming entangled in community conflict yet. This kind of thing is in the top three most likely reasons I step down if I do someday resign, though. (Citation: my own self-evaluation.)
Do investigations eat up hundreds of person-hours? Yep, they totally can if you let them. Skill and practice and a willingness to triage can cut that down a lot.
Do panels generally have much real ability to enforce things? Here I half disagree. The rationality community in particular is fuzzy and amorphous, without a clear single roster or doorway. In that situation, the panel can't enforce things. More structured communities, including sub-communities within the rationalist community, can potentially give panels enforcement ability. The LessWrong site itself, the r/rational discord, or an organization and venue space like Mox can delegate a ban decision to a panel and enforce that. That said, this requires the panel to be granted this capability, and the enforcing entity to actually carry it out.
If you just try to convene a panel mid-conflict, and it doesn't have a scope or actual authority to do stuff ("actual authority" being that they've got the ban command on the forum, or keys to the dormitory, or the people who do will likely follow the panel's advisement, or something like that) then yeah, they're not going to have much ability to enforce anything.
Do panels act like they are courts of law? Unclear. The ones I'm most familiar with had literal lawyers on them, though not in a professional capacity. I think a little of this is that the panel is trying to have some higher standard of evidence, but also that trust isn't transitive; it's just much easier for me to feel confident in what I experienced than it is for a panel to feel confident that I'm relaying my experience accurately.
Do panels often lack a secure sense of legitimacy? Unclear, I'm not in their heads. I do think that giving a panel a clear scope is very useful.
Do favourable rulings lend legitimacy to bad actors? Yep, and more than you'd think. It's just a very straightforward move if you're a bad actor who a panel has investigated and deemed fine to bring up that fact as often as it seems helpful. It may be a useful intuition pump to picture some social deception game where an ability can detect what team someone's on, which has somehow been spoofed or misread. One thing that caught me by surprise is that the rulings don't even need to be straightforwardly favourable: I've seen mixed rulings be quoted in ways I consider pretty out of context.
Are extended investigations stressful for all parties involved? Unclear, I'm not in their heads, but people do keep saying this is true.
Is there often no way to find the objective truth? Yes, or at least no reasonable way. One relevant skill is sifting through various claims and noticing which ones would be potentially provable. And unmentioned here is how often there is no single objective truth that actually matters.
What followup work would you like to see building on this post?
I've been trying!
Interest In Conflict Is Instrumentally Convergent is my single best followup, if I had to pick one. I really wish I'd written up more of what I learned from other communities. The rationality community is odd in many ways, but there are things we can learn. Sci-fi conventions, service non-profits, sports leagues, community colleges, martial arts dojos, small churches, all of these groups and more have lessons for us in how to manage and respond to incidents.
If you're part of a community with a process for incident response, I'd be grateful if you'd write up how that works. You don't need to give details on any specific incidents; just walking through the steps that would get taken for a couple hypotheticals, and who would handle each part, is useful for comparison.
So, what to vote?
It's a good essay. It's going in my collection of things people should read if they're interested in the topic. It's relevant to LessWrongers, and it's relevant to most communities at least as a theory. It's spot on relevant for my sub-special interest of the last year.
For all that, I personally gave it a +1. I can see a pretty clear argument for +4 and might change my mind. It's a bit more niche than I'd want for a +4, I think it's coming from a place of (empathized-with) frustration, and I wish it had some more actionable moves or improvements to offer.