Screwtape

I'm Screwtape, also known as Skyler. I'm an aspiring rationalist originally introduced to the community through HPMoR, and I stayed around because the writers here kept improving how I thought. I'm fond of the Rationality As A Martial Art metaphor, new mental tools to make my life better, and meeting people who are strange in ways I find familiar and comfortable. If you're ever in the Boston area, feel free to say hi.

Starting early in 2023, I'm the ACX Meetups Czar. You might also know me from the New York City Rationalist Megameetup, editing the Animorphs: The Reckoning podfic, or being that guy at meetups with a bright bandanna who gets really excited when people bring up indie tabletop roleplaying games. 

I recognize that last description might fit more than one person.

Sequences

Cohabitive Game Design
The LessWrong Community Census
Meetup Tips
Meetup in a box

Comments

Debbie's particular shape is arranged in part to isolate honesty and predictability as useful. If I'd just had her hiding bad things and confabulating good things I'd worry the takeaway would be solely that doing bad things or having a bad average was the problem, so I set her up such that the average stayed put and the curve just flattened out. I think the individual pieces do make sense though, if not in that particular combination.

Hiding good actions happens due to humility, status regulation, shyness, or just because the action is private.

  • A church needs unexpectedly expensive repairs, and an anonymous donor covers them.
  • A new player on the sports team could honestly take credit for a win, but doesn't want to make enemies and emphasizes other people's work.
  • A world class scientist gets asked what they do for work, and answers "I work for a university."
  • That one person who reached out after a bad breakup and talked their friend through the worst few nights.

Some off-kilter weirdos do invent middle-of-the-distribution actions, trying to create a false consensus or just badly misunderstanding what's normal.

  • "I have lots of friends here, ask Robin or Sean or Ted, they'll vouch for me!" "We asked them. We had to remind them who you were, they said they talked to you once or twice briefly."
  • "I'm surprised to hear you say people are uncomfortable with me, they've never said so and I'm respectful of boundaries. Susan and I broke up, that's all." "According to Susan, she politely asked you to leave her alone repeatedly, then eventually told you if you showed up at her dorm unannounced again she'd call the police, and she hasn't heard from you since." "Yeah, and? She set a boundary, I respected it. And that's not her saying she's uncomfortable, is it?"
  • "Alcohol? Eh, like I said I drink a normal amount, you know?" "This is the third time we've found you passed out on the lawn." "Yeah, doesn't everyone drink about that much? I'm not a real alcoholic, it's not like I do that unless it's the weekend."

(Also, people contain multitudes. The same person can donate generously to their local community, talk their friends through the long nights of despair, and also drink themselves insensate every week plus ignore a lot of romantic soft nos.)

And, yeah, sometimes there's just a very different outlook that's causing some blue and orange morality social norms. Ideally, you can talk to the person or get used to them and build up a custom bell curve for them, notice that while weird their behavior isn't actually hurting anyone, and everything's fine. You can even be a bit of an ambassador or on-ramp. "Oh yeah, that's Wanda, the Groucho Marx mustache is a bit weird but she's pretty friendly." I'm a big fan of spaces for the weird but harmless!

Frontpage is mostly what the admins and mods think is worth frontpaging, plus what users upvote. It's also a positional good; there can only be so many things on the front page. This is a more specific and useful question though! Yeah, if the LW team frontpaged more AI governance and less of everything else, and the average user upvoted more AI governance and less of everything else, the frontpage would have more AI governance on it. I wouldn't be a fan, but I'd understand the move if that was the goal. My understanding is that it's not.

Not having a use for in-group signaling seems accurate but maybe overly cynical or something? I think it's that having lots of posts on LessWrong is not a constructive part of their plan. Look at Situational Awareness and ai-2027: great writing, great outreach, obviously applicable to governance. Would either of those have been better as LessWrong posts? I think no, they're more impactful as freestanding websites with a short url and a convenient PDF button.

What's the actual game plan, and what intervening steps benefit from the average LessWrong reader knowing the information you want to tell them or having a calling card that leads the right LessWrong readers to reach out to you? Look at the rash of posting around SB-1047, particularly the PauseAI leader's post. There's a game plan that benefited from a bunch of LessWrong readers knowing some extra information.

I don't have the technical AI Safety skillset myself. My guess is to show up with specific questions if you need a technical answer, try to make a couple of specific contacts you can run big plans past or reach out to if you unexpectedly get traction, and use your LessWrong presence to establish a pointer to you and your work so people looking for what you're doing can find you. That seems worthwhile. After that, maybe crosspost when it's easy? Zvi might be a good example, since it's relatively easy to crosspost between LessWrong and Substack, though he does more keeping up with incoming news than building resource posts for the long term.

If I type "lawyer AI safety" into LessWrong's search, your post comes up, which I assume is something you want.

It might be useful for you to taboo "LessWrong" at least briefly.

I have a spiel that may turn into a post someday about how communities aren't people, the short version being that if you ask "why doesn't the community do X?" the answer is usually that no individual in the community took it upon themselves to be the hero. Other times someone did, but the result didn't look like the community doing X; it looked like individuals doing X.

Is the question "why does the average user on this website not put much more focus on AI Governance and outreach?" Half of LessWrong users don't make their own posts; they just comment or lurk. Yes, they could do more, but I could say that of Twitter users too.

Is the question "why does the formal organization behind this website not put much more focus on AI Governance and outreach?" They just helped put out ai-2027.com. They run Alignment Forum. They host conferences like The Curve. Yes, they could do more, but at some point you look at the employee hours they have and what projects they've done and it balances out.

Is the question "why do the most prominent users on this website not put much more focus on AI Governance and outreach?" By karma, the top ten users are Eliezer Yudkowsky, Gwern, Raemon, John Wentworth, Kaj Sotala, Zvi, Scott Alexander, Wedrifrid, Habryka, and Vaniver. I think Eliezer, Scott, and Zvi do lots of outreach actually, and it's not like John and Vaniver do none. Yes, they could do more Governance and outreach, but- no, wait, I take it back, I don't think Zvi realistically has much more marginal AI outreach he can do, let the man rest I think his keyboard is smoking from all that typing, his children miss him while he's away in the posting mines.

I'm not exactly a disinterested observer here. I do a lot of rationality outreach, and I make a deliberate choice not to push an AI angle because I think that would be worse both for AI and for my sub-branch of the rationality community. If your argument is more people on the margin should do AI governance and outreach, especially if they have comparative advantage, sure, I'll agree with that. If you think the front page of LessWrong should be completely full of AI discussion, I disagree, with the core of my disagreement stemming from The Common Interest of Many Causes.

I'll speak up for notecards: I use binder clips to sort them by category or date once in a while. While they are a bit small for complex or detailed drawings, in a pinch you can lay them slightly overlapping (perhaps with a little tape on the back) and get as big a sheet as you want. They won't replace my sketchbook for doing portraiture anytime soon, but that's a minority of my paper time.

Overall, I love this post and I like hearing other people's approaches to paper!

This post feels like it may have been written in response to some specific interpersonal drama. If it was, then I'd like to make it clear that I have absolutely no idea what it was and therefore no opinion on it. I just think this is a useful concept in general. 

Thumbs up, I appreciate knowing it lands even for people with no idea of the specific cases.

Other than the murder thing, I'm talking about something I've seen more than once. Like I said in the post, part of what I'm supposed to do for ACX meetups is handle complaints, which creates some unusual selection effects around what I see.

I do have one minor nitpick:

I think that's a reasonable nitpick and I've updated it to be a bit clearer, thanks for the pointer!

Basically agreed.

Though also relevant is the degree of maliciousness required and what the subject might get out of it. In the "bobcat instead of office chair" example, there's a willful willingness to cause physical harm, and the sender doesn't really get anything out of it other than sadistic kicks and making the world much weirder. If the sender sent a much cheaper chair model instead, there's a less weird motivation (they keep the change) and less extra work involved.

I'm going to note I'm having a little trouble parsing your sentences here.

Strong downvoted for not just saying what you're really thinking to the person you have a criticism about which is almost definitely wrong.

I think the thing you're saying is that you downvoted because you think instead of writing this essay, I should have told a specific person that I think they're being some kind of jerk (mailing metaphorical bobcats) to a small number of people while being nice to the majority of people. Further, that I'm incorrect about how bad the jerkishness is. Is that close?

Downvote as you will. But I'm trying to talk about a pattern I've noticed across multiple people, and I'm trying to share a tool because I can't be in every ACX meetup in the world. I want local organizers to be able to notice this faster. 

Also, speaking directly to the person I have a criticism about isn't always enough. Imagine Bob is at a meetup and is nice to nine other attendees, but punches the tenth in the face in a way only the victim and I can see. The victim leaves, never to return, and I tell Bob not to do that again. Next week, there's Bob, the nine from last time, and a newcomer. Bob punches the newcomer in the face where only the newcomer and I can see, the newcomer leaves, and I ban Bob. This is the situation I'm talking about in part IV but more so, because I saw the punching myself, and I'm still going to have to explain to the nine regulars why I banned someone who was only ever nice to them.

(I believe targeting dynamics do happen sometimes - see part V where I at least touch on this - but I also think the basic pattern of “nice to most people but terrible to a few” does happen sometimes.)

Yep, and also as things scale you just get less information about everyone. 

A random local meetup might fit in one room, sometimes splitting into two rooms so it's easier to have multiple conversations. I can have line of sight to everyone at once and hear it if voices start getting raised. With meetups in ten cities, I can at least wave at most attendees, and have had a couple hours of conversation with the organizers. With meetups in a hundred cities, I have only demographic guesses about who the attendees are, and it takes time and effort to meet the organizers in a way where I'd notice an "I Eat Puppies" tattoo on their face.

Weirder things can happen, and you have less bandwidth to notice them.

Edit: There's an essay that's shaped some of my thinking here called Default Blind. I recommend it.

Somewhat agreed. 

I'm trying to point at something loosely in this vicinity in section V, about hunting in packs - replace "one of them has three good friends" with "one of them paid three people" - where sometimes a bunch of negative reports are happening because someone is making up or deeply exaggerating accusations and routing them to you through different sources. I don't know that it's my first assumption; I currently think "Erin is mailing metaphorical bobcats to a small number of people" happens more often than "Frank is coordinating a bunch of fake or exaggerated accusations about Erin." I do think the latter happens sometimes though.

Conservation of expected evidence: if I see an Amazon review page with reviews saying "instead of office chair, package contained bobcat," my odds that they're sending bobcats go up at least somewhat. How much depends on things like how outlandish the actual accusation is and how coordinated the reviews look.

I strongly agree that adversarial optimization exists and is an important factor in these kinds of social situations. I've been thinking and writing about social conflict a lot lately, in large part because its sometimes-adversarial nature makes it harder than most other parts of running good meetups. Many (most?) problems you can fix and they stay fixed, and if you're stuck you can ask the people around you. That doesn't work as well for social conflict.
