Raemon

LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Sequences

Feedbackloop-First Rationality
The Coordination Frontier
Privacy Practices
Keep your beliefs cruxy and your frames explicit
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual
Drawing Less Wrong

Comments

Raemon · 121

I’d like to hire cognitive assistants and tutors more often. This could (potentially) be you, or people you know. Please let me know if you’re interested or have recommendations.

By “cognitive assistant” I mean a range of things, but the core thing is “sit next to me, and notice when I seem like I’m not doing the optimal thing, and check in with me.” I’m interested in advanced versions who have particular skills (like coding, or Applied Quantitivity, or good writing, or research taste) who can also be tutoring me as we go.

I’d like a large rolodex of such people, both for me, and other people I know who could use help. Let me know if you’re interested.

I was originally thinking "people who live in Berkeley" but upon reflection this could maybe be a remote role.

Raemon · 121

Yep, endorsed. One thing I would add: the "semi-official" dress code I've been promoting explicitly includes black (for space/darkness), silver (for stars), gold (for the sun), and blue (for the earth).

(Which is pretty much what you have here. I think the blue works best when it's a sort of minority character distributed across people, such that it's a bit special when you notice it.)

The complaints I remember about this post seem mostly to be objecting to how some phrases were distilled into the opening short "guideline" section. When I go reread the details it mostly seems fine. I have suggestions on how to tweak it.

(I vaguely expect this post to get downvotes that are some kind of proxy for vague social conflict with Duncan, and I hope people will actually read what's written here and vote on the object level. I also encourage more people to write up versions of The Basics of Rationalist Discourse as they see them.)

The things I'd want to change are:

1. Make some minor adjustments to "Hold yourself to the absolute highest standard when directly modeling or assessing others' internal states, values, and thought processes." (Mostly, I think the word "absolute" is just overstating it. "Hold yourself to a higher standard" seems fine to me. How much higher a standard depends on context.)

2. Somehow resolve an actual confusion I have with the "...and behave as if your interlocutors are also aiming for convergence on truth" clause. I think this is doing important, useful work, but a) it depends on the situation, b) it feels like it's not quite stating the right thing.

Digging into #2...

Okay, so when I reread the detailed section, I think I basically don't object to anything. I think the distillation sentence in the opening paragraphs conveys a thing that a) oversimplifies, and b) some people have a particularly triggered reaction to.

The good things this is aiming for that I'm tracking:

  • Conversations where everyone trusts that everyone else is converging on truth are way less frictiony than ones where everyone is mistrustful and on edge about it.
  • Often, even when the folk you're talking to aren't aiming for convergence on truth, proactively acting as if they are helps make it more true. Conversational vibes are contagious.
  • People are prone to see others' mistakes as more intense than their own mistakes, and if most humans aren't specifically trying to compensate for this bias, there's a tendency to spiral into a low-trust conversation unnecessarily (and then have the wasted motion/aggression of a low-trust conversation instead of a medium-or-high one). 

I think maybe the thing I want to replace this with is more like "aim for about 1-2 levels more trusting-that-everyone-is-aiming-for-truth than currently feel warranted, to account for your own biases, and to lead by example in having the conversation focus on truth." But I'm not sure if this is quite right either.

...

This post came a few months before we created our New User Reject Template system. It should have at least occurred to me to use some of the items here as some of the advice we have easily on hand to give to new users (either as part of a rejection notice, or just "hey, welcome to LW, but it seems like you're missing some of the culture here").

If this post were voted into the Top 50, and a couple of points were resolved, I'd feel good making a fork with minor context-setting adjustments and then linking to it as a moderation resource, since I'd feel like The People had a chance to weigh in on it.

The context-setting I'm imagining is not "these are the official norms of LessWrong", but rather that, if I think a user is making a conversation worse for reasons covered in this post, I'd be more ready to link to it. Since this post came out, we've developed better Moderator UI for sending users comments on their comments, and it hadn't occurred to me until now to use this post as a reference for some of our Stock Replies.

(Note: I currently plan to make it so that, during the Review, anyone can write Reviews on a post even if they're normally blocked from commenting. Ideally I'd make it so they can also comment on Review comments. I haven't shipped this feature yet, but hopefully will soon.)

Previously, I think I had mostly read this through the lens of "what worked for Elizabeth?" rather than actually focusing on which parts of this might be useful to me. I think that's a tradeoff on the "write to your past self" vs "attempt to generalize" spectrum – generalizing in a useful way is more work.

When I reread it just now, I found the "Ways to Identify Fake Ambition" section the most useful (both for the specific advice of "these emotional reactions might correspond to those motivations", and the meta-level advice of "check for your emotional reactions and see what they seem to be telling you").

I'd kinda like to see a post that is just that section, with a bit of fleshing out to help people figure out when/why they should check for fake ambition (and how to relate to it). I think literally a copy-paste version would be pretty good, and I think there's a more (well, um) ambitious version that does more interviewing with various people and seeing how the advice lands for them.

I might incorporate this section more directly into my metastrategy workshops.

Raemon · 46

Well, to be honest, in the future there will probably mostly be an AI tool that just beams wisdom directly into your brain or something.

Raemon · 20

I wrote about 1/3 of this myself, FYI. (It was important to me to get it to a point where it was not just a weaksauce version of itself, but where I felt like I at least might basically endorse it and find it poignant as a way of looking at things.)

Raemon · 62

One way I parse this is "the skill of being present (may be) about untangling emotional blocks that prevent you from being present, more than some active action you take."

It's not like untangling emotional blocks isn't tricky!

Raemon · 92

I don't have a strong belief that this experience won't generalize, but I want to flag the jump between "this worked for me" and an implied "this'll work for everyone/most-people." (I expect most people would benefit from hearing this suggestion; I just generally have a yellow flag about some of the phrasings you have here.)

Raemon · 30

Nod. 

Fwiw I mostly just thought it was funny in a way that was sort of neutral on "is this a reasonable frame or not?". It was the first thing I thought of as soon as I read your post title.

(I think it's both true that, in an important sense, everything we care about is in the Map, and also true in an important sense that it's not. Insofar as it was true, it felt like a legitimately poignant rewrite that helped me appreciate your post; insofar as it was false, it seemed hilarious (non-meanspiritedly, just in a "it's funny that so many lines from the original remain reasonable sentences when you reframe them as being about epistemology" way).)

Raemon · 20

lol at the strong downvote, and wondering if it's more an objection to the idea itself or more because Claude co-wrote it?
