I'm Screwtape, also known as Skyler. I'm an aspiring rationalist originally introduced to the community through HPMoR, and I stayed around because the writers here kept improving how I thought. I'm fond of the Rationality As A Martial Art metaphor, new mental tools to make my life better, and meeting people who are strange in ways I find familiar and comfortable. If you're ever in the Boston area, feel free to say hi.
Since early 2023, I've been the ACX Meetups Czar. You might also know me from the New York City Rationalist Megameetup, editing the Animorphs: The Reckoning podfic, or being that guy at meetups with a bright bandanna who gets really excited when people bring up indie tabletop roleplaying games.
I recognize that last description might fit more than one person.
Take a Screwtape Point for putting your numbers down in text. You're talking a decent-sized game here, but I do think I agree with your point that Politics Is The Mindkiller ideally would have been more an invitation to get good than a prohibition on trying. I disagree that the content audit makes sense: most front-page posts on LessWrong don't contain scored object-level political forecasting because this crowd isn't as interested in politics. I think AI posts account for >60% of the front page all on their own; it'd be kind of weird to me if this crowd were more interested in politics than in, like, fun math puzzles or how to dress better. Maybe I'm typical-minding?
(I did, and do, take it as a prohibition on trying when you're bad at it. Arguing about contemporary politics is metaphorically like doing a backflip: totally doable, but if you just read a blog post on it, maybe watch someone else do it a couple of times, and then try it yourself, you're going to hurt yourself. Work up to it a bit.)
You stuck your neck out, so I'll stick mine out a little. Have a Fatebook tournament. I'm presently at .3/.15/.6/.3/.4 on your predictions, respectively, though since I haven't put a ton of thought into any of them except the manufacturing question, I'm probably anchoring on you.
If you want a blurb, how about
"ACX Everywhere is Scott Alexander's twice-a-year effort to shine light on local Astral Codex Ten meetups. If you'd like to talk to other readers of the blog, you can take a look at the list of meetups to see if there's one in your city! If there isn't one for your city yet, you can fill out a late-entry by filling out the form for an October ACX Everywhere. Running an ACX Everywhere can be pretty straightforward; pick a time (weekend afternoons are best) and a place (a local cafe will do just fine) and be ready to talk to interesting people!"
Usually we would be. I do take late entries (a number of people only realize they over-procrastinated when the Times & Places announcement goes out), but I usually shut that off after a few weeks. LessWrong's having a meetup month though! Seems like a good excuse to try to get to two hundred :)
ACX Meetup Czar here: I love a good meetup month, and we're presently five meetups away from having two hundred ACX Everywheres. If there's no meetup in your city yet and you want to add one in October, I'll take a late entry. Fill out a quick form and I'll get back to you!
Game design thoughts:
Missing information, or a too-complex information environment, is a big part of why I'm excited about cohabitive game design. The market works to summarize many, many preferences. Large WoW guilds run into problems organizing information.
I'm musing about some kind of fog-of-war chess game. Each player controls one piece and can see all of the squares they can move to. They can't see the whole board, and can only communicate to the king how badly they want to move; the king decides. This would be a team competitive game, but adding small side goals for pieces (for example, you get 2 points if your team wins, and 1 point if you're alive at the end, win or lose) could make for an unusually focused cohabitive design space.
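If it helps to see the shape of that, here's a rough Python sketch of the information and incentive structure I have in mind. All the names (Piece, report_to_king, score_piece) and the urgency numbers are made up for illustration, not a real design.

```python
# Rough sketch of the fog-of-war chess idea above. Everything here is
# hypothetical; it's just to show the information and incentive structure.
from dataclasses import dataclass

@dataclass
class Piece:
    name: str          # e.g. "white knight b1"
    team: str          # "white" or "black"
    legal_moves: list  # squares this piece could move to right now
    alive: bool = True

    def visible_squares(self):
        # A piece only sees the squares it can currently move to,
        # not the whole board.
        return set(self.legal_moves)

    def report_to_king(self):
        # The only channel to the king: a single "how badly do I want
        # to move" number (0 = indifferent, 1 = desperate).
        urgency = 1.0 if len(self.legal_moves) <= 1 else 0.3
        return {"piece": self.name, "urgency": urgency}

def score_piece(piece: Piece, team_won: bool) -> int:
    # The small side goals: 2 points if your team wins,
    # plus 1 point if you personally survive, win or lose.
    return (2 if team_won else 0) + (1 if piece.alive else 0)

# Tiny usage example
knight = Piece("white knight b1", "white", ["a3", "c3", "d2"])
print(knight.visible_squares())             # {'a3', 'c3', 'd2'}
print(knight.report_to_king())              # {'piece': ..., 'urgency': 0.3}
print(score_piece(knight, team_won=False))  # 1 (alive, team lost)
```

The survival point is what makes it cohabitive rather than purely team-competitive: a piece can rationally lobby the king for moves that keep it alive even when that's not quite what the team wants.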
In general, a lot of my cohabitive game design ideas come out in the form of larger player bases. I should try to make a really tight two-player cohabitive setup.
(Very tangentially related but if you want to talk Cohabitive Game Design stuff, I'll be hanging around at Metagame.games this weekend.)
I kinda wish the subsequent back-and-forth between you and Habryka and Ben downthread hadn't happened yet, because I was hoping to elicit a more specific set of odds (is "pretty high" 75%? 90%? 99%?) and see if you wanted to bet.
I can sympathize with the feeling where it seems an interlocutor says false things so often that if they said it was sunny outside, I'd bring an umbrella. I also haven't been tracking every conversation on LessWrong that involves you, but that said, even in a world where Habryka was entirely uncorrelated with truth, I'd have remembered the big moderation post about the two of you and guessed Duncan at least would have said something along those lines.
So it is in fact straightforwardly true to say that there are zero examples of “top author X cites Said as a top reason for why they do not want to post or comment on LW” turning out to just be straightforwardly true.
I'm having trouble modeling you here, Said. When you wrote that there were zero examples, what odds would you have put on nobody being able to produce a quote of anyone saying something like this? What odds would you currently put that nobody can produce a similar quote from a second such author?
You say "the count now stands at one example" as though it's new information. Duncan in particular seems hard to have missed. I'm trying to work out why you didn't think that counted. Maybe you forgot about him saying that? Maybe it has to be directly quoted in this thread?
Why would Bella reply by invoking this sort of abstract, somewhat esoteric, meta-level concept like “setting the zero point”, instead of saying something more like “… uh, Chloe, are you ok? you know we don’t have 15 cows to divide, right?”.
Mostly because she's in a silly shortform dialogue that's building up to an esoteric, meta-level concept: the game theory in the third part. I wanted some kind of underhanded negotiating tactic I could have Chloe try; I came up with asking for way more than is reasonable to set the stage for a "compromise," and then I noticed the tactic had a good conceptual handle and referenced it.
This makes me suspect that whatever this fictional conversation is a metaphor for, is not actually analogous to dividing six spherical cows between two people.
It's pretty generic, abstracted negotiation, and Chloe is being pretty blatant and ambitious. Still, asking for value the other person didn't even think was on the table is a negotiation move I've seen and heard of, sometimes pulled off successfully. For a more realistic version, compare a salary negotiation where the applicant asks for a 10% higher salary, gets told the company doesn't have that much to pay employees, and then tries for a couple of weeks of extra vacation time or more company stock instead.
I think the math at the end still works even if the two sides don't agree on how many cows are actually available.
4 should be there not because it's what Cameron thinks is fair but because it's what they're offering.
How about "4 because that's what you say is fair for you to get"? Cameron isn't offering 4 to Bryer, it's a 2:4 split with 4 to Cameron.
(I want to make sure I get this part right, and appreciate the edit pass!)
I have a lot of interest in the data collection puzzle.
Object-Level Questions
My last best writeup of the problem is in the Unofficial 2024 LessWrong Community Census, in one of the fishing expeditions. My strategy has been to ask about things that might make people more rational (e.g. going to CFAR workshops, reading The Sequences, etc.), ask questions that test people's rationality (e.g. the conjunction fallacy, units of exchange, etc.), and then check if there are any patterns.
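For concreteness, the pattern check looks roughly like this. The column names (attended_cfar, conjunction_correct, and so on) are stand-ins rather than the census's actual variables, and the numbers are invented.

```python
# Sketch of the "check if there are any patterns" step. Column names and
# data are placeholders, not the real census fields.
import pandas as pd

df = pd.DataFrame({
    "attended_cfar":       [1, 0, 1, 0, 0, 1],
    "read_sequences":      [1, 1, 0, 0, 1, 1],
    "conjunction_correct": [1, 0, 1, 0, 1, 1],   # avoided the conjunction fallacy
    "calibration_score":   [0.8, 0.5, 0.7, 0.4, 0.6, 0.9],
})

# For each "might make you more rational" exposure, compare the mean
# performance of people who had it against those who didn't.
for exposure in ["attended_cfar", "read_sequences"]:
    for outcome in ["conjunction_correct", "calibration_score"]:
        grouped = df.groupby(exposure)[outcome].mean()
        print(f"{exposure} -> {outcome}:")
        print(grouped, "\n")
```

(With real census data you'd want to worry about sample size and confounders before reading much into any one gap, but that's the basic shape of it.)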
There's always the good ol' self-report on comfort with techniques, but I've been trying to collect questions that are objective evaluations. A partial collection of my best:
Still, self-reports aren't worthless.
Meta: how do we find good questions?
I'm tempted to ask people their goals, ask who's succeeding at their goals or at common goals, and then operate as though that's a useful proxy. There's a fair number of people who say they want a well-paying job and a happy relationship, and other people who have those things. Selection effects are sneaky though, and I don't trust my ability to sort out people who are doing well financially because of CFAR's good teachings from the people who were able to attend CFAR because they were already doing well financially.
On a meta level, I feel pretty excited about different groups that are trying to increase rationality asking each other's questions. That is, if ESPR had a question, CFAR had another question, and the Guild of the Rose had a third question, I think it'd be great if each of them asked their attendees all three questions. Even better, in my view, to add a few organizations that are adjacent but not really aiming at that goal: ACX Everywhere or Manifold, for instance. Those would be control groups. The different organizations are doing different things, and if ESPR starts doing better on the evaluation questions than the Guild of the Rose, then maybe the Guild starts borrowing more from ESPR's approach. If ACX Everywhere attendees have better calibration than Metaculus, then we notice we're confused. I've been doing this for the ULWC Census already, and I'd be interested in adding it to after-event surveys.
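To make the cross-group comparison concrete, here's roughly how I'd imagine scoring the calibration piece. The group names and forecasts are invented, and a Brier score is just one reasonable choice of metric.

```python
# One way the "who has better calibration" comparison could be scored:
# a Brier score per group (lower = better calibrated). Group names and
# numbers are invented for illustration.
def brier_score(forecasts):
    # forecasts: list of (stated_probability, outcome) with outcome 0 or 1
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

groups = {
    "ACX Everywhere": [(0.7, 1), (0.2, 0), (0.9, 1), (0.6, 0)],
    "CFAR alumni":    [(0.8, 1), (0.1, 0), (0.7, 1), (0.4, 0)],
    "Metaculus":      [(0.75, 1), (0.15, 0), (0.8, 1), (0.3, 0)],
}

for name, forecasts in groups.items():
    print(f"{name}: Brier = {brier_score(forecasts):.3f}")
```

The point isn't this particular metric so much as having the same scored questions asked across every organization so the comparison is apples to apples.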
Are there one or two questions CFAR wants to ask, or has historically asked, that you'd like to add to that collection? Put another way, what are the couple of evaluation questions you think CFAR alumni would do better on relative to, say, ACX Everywhere attendees?