Raemon

LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Sequences

Feedbackloop-First Rationality
The Coordination Frontier
Privacy Practices
Keep your beliefs cruxy and your frames explicit
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual
Drawing Less Wrong

Comments

Raemon20

I like all these questions. "Maybe you should X" is the least likely to be helpful, but still fine so long as "nah" wraps up the thread quickly and we move on. The first three are usually helpful (at least when filtered for assistants who ask them fairly thoughtfully).

Raemon20

I imagined "FocusMate + TaskRabbit" specifically to address this issue.

Three types of workers I'm imagining here:

  • People who are reasonably skilled, but who are youngish and haven't landed a job yet.
  • People who actively like doing this sort of work and are good at it
  • People who have trouble getting/keeping a fulltime job for various reasons (which would land them in the "unreliable" sector), but... it's FocusMate/TaskRabbit, they don't need to be reliable all the time, there just needs to be one of them online who responds to you within a few hours, who is at least reasonably competent when they're sitting down and paying attention. 

And then there are reviews (with the UI somehow designed to elicit honest reactions, rather than just slapping on a 0–5 star rating that everyone feels obligated to mark "5" unless something was actively wrong), and workers have profiles about what they think they're good at and what others thought they were good at.

(Where the expectation is: if you don't have active endorsements or haven't yet been rated, you will probably charge a low rate.)

Meanwhile if you're actively good and actively reliable, people can "favorite" you and work out deals where you commit to some schedule.
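
A rough sketch of the kind of data model this implies (all type and field names below are hypothetical, purely illustrative of the shape of the thing):

```typescript
// Hypothetical data model for the "FocusMate + TaskRabbit"-style service described above.
// None of these names refer to an existing API; they're just assumptions for illustration.

interface WorkerProfile {
  id: string;
  selfDescribedSkills: string[]; // what the worker thinks they're good at
  endorsedSkills: string[];      // what past clients thought they were good at
  hourlyRate: number;            // expected to start low until endorsements accumulate
  favoritedBy: Set<string>;      // client ids who might work out a committed schedule
}

interface Review {
  workerId: string;
  clientId: string;
  // Free-form reactions rather than a 0-5 star score, to elicit honest feedback.
  whatWentWell: string;
  whatCouldImprove: string;
}

// Find someone reasonably competent who is online right now and has the needed skill,
// rather than requiring any single worker to be reliable all the time.
function findAvailableWorker(
  workers: WorkerProfile[],
  online: Set<string>,
  requiredSkill: string,
): WorkerProfile | undefined {
  return workers.find(
    (w) => online.has(w.id) && w.endorsedSkills.includes(requiredSkill),
  );
}
```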

Raemon20

(Quick note to people DMing me: I'm doing holidays right now and will follow up in a week or so. I won't necessarily have slots/need for everyone expressing interest.)

Raemon52

Can you say more about how this works (in terms of practical steps) and how it went?

Raemon61

I actually meant to say "x-risk focused individuals" there (not particularly researchers), and yes, I was coming from the impact side of things. (I.e. if you care about x-risk, one of the options available to you is to become a thinking assistant.)

Raemon121

I’d like to hire cognitive assistants and tutors more often. This could (potentially) be you, or people you know. Please let me know if you’re interested or have recommendations.

By “cognitive assistant” I mean a range of things, but the core thing is “sit next to me, and notice when I seem like I’m not doing the optimal thing, and check in with me.” I’m interested in advanced versions who have particular skills (like coding, or Applied Quantitivity, or good writing, or research taste) who can also be tutoring me as we go.

I’d like a large rolodex of such people, both for me, and other people I know who could use help. Let me know if you’re interested.

I was originally thinking "people who live in Berkeley" but upon reflection this could maybe be a remote role.

Raemon121

Yep, endorsed. One thing I would add: the "semi-official" dresscode I've been promoting explicitly includes black (for space/darkness), silver (for stars), gold (for the sun), and blue (for the earth). 

(Which is pretty much what you have here. I think the blue works best when it's sort of a minority character, distributed across people, such that it's a bit special when you notice it.)

The complaints I remember about this post seem mostly to be objecting to how some phrases were distilled into the opening short "guideline" section. When I go reread the details it mostly seems fine. I have suggestions on how to tweak it.

(I vaguely expect this post to get downvotes that are some kind of proxy for vague social conflict with Duncan, and I hope people will actually read what's written here and vote on the object level. I also encourage more people to write up their own versions of The Basics of Rationalist Discourse as they see them.)

The things I'd want to change are:

1. Make some minor adjustments to the "Hold yourself to the absolute highest standard when directly modeling or assessing others' internal states, values, and thought processes" guideline. (Mostly, I think the word "absolute" is just overstating it. "Hold yourself to a higher standard" seems fine to me. How much higher a standard depends on context.)

2. Somehow resolve an actual confusion I have with the "...and behave as if your interlocutors are also aiming for convergence on truth" clause. I think this is doing important, useful work, but a) it depends on the situation, b) it feels like it's not quite stating the right thing.

Digging into #2...

Okay, so when I reread the detailed section, I think I basically don't object to anything. I think the distillation sentence in the opening paragraphs conveys a thing that a) oversimplifies, and b) some people have a particularly triggered reaction to.

The good things this is aiming for that I'm tracking:

  • Conversations where everyone trusts that each other are converging on truth are way less frictiony than ones where everyone is mistrustful and on edge about it.
  • Often, even when the folk you're talking to aren't aiming for convergence on truth, proactively acting as if they are helps make it more true. Conversational vibes are contagious.
  • People are prone to see others' mistakes as more intense than their own mistakes, and if most humans aren't specifically trying to compensate for this bias, there's a tendency to spiral into a low-trust conversation unnecessarily (and then have the wasted motion/aggression of a low-trust conversation instead of a medium-or-high one). 

I think maybe the thing I want to replace this with is more like "aim for about 1-2 levels more trusting-that-everyone-is-aiming-for-truth than currently feels warranted, to account for your own biases, and to lead by example in having the conversation focus on truth." But I'm not sure this is quite right either.

...

This post came a few months before we created our New User Reject Template system. It should have at least occurred to me to use some of the items here as advice we have easily on hand to give to new users (either as part of a rejection notice, or just "hey, welcome to LW, but it seems like you're missing some of the culture here").

If this post were voted into the Top 50, and a couple of points were resolved, I'd feel good making a fork with minor context-setting adjustments and then linking to it as a moderation resource, since I'd feel like The People had a chance to weigh in on it.

The context-setting I'm imagining is not "these are the official norms of LessWrong", but rather: if I think a user is making a conversation worse for reasons covered in this post, I'd be more ready to link to it. Since this post came out, we've developed better Moderator UI for sending users comments on their comments, and it hadn't occurred to me until now to use this post as a reference for some of our Stock Replies.

(Note: I currently plan to make it so that, during the Review, anyone can write Reviews on a post even if they're normally blocked from commenting. Ideally I'd make it so they can also comment on Review comments. I haven't shipped this feature yet but hopefully will soon.)

Previously, I think I had mostly read this through the lens of "what worked for Elizabeth?" rather than actually focusing on which parts might be useful to me. I think that's a tradeoff on the "write to your past self" vs. "attempt to generalize" spectrum – generalizing in a useful way is more work.

When I reread it just now, I found the "Ways to Identify Fake Ambition" section the most useful (both for the specific advice of "these emotional reactions might correspond to those motivations", and the meta-level advice of "check for your emotional reactions and see what they seem to be telling you").

I'd kinda like to see a post that is just that section, with a bit of fleshing out to help people figure out when/why they should check for fake ambition (and how to relate to it). I think literally a copy-paste version would be pretty good, and I think there's a more (well, um) ambitious version that does more interviewing with various people and seeing how the advice lands for them.

I might incorporate this section more directly into my metastrategy workshops.

Raemon46

Well, to be honest, in the future there is probably mostly an AI tool that just beams wisdom directly into your brain or something.
