Raemon

LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Sequences

Feedbackloop-First Rationality
The Coordination Frontier
Privacy Practices
Keep your beliefs cruxy and your frames explicit
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual
Drawing Less Wrong

Comments

I've now worked with three Thinking Assistants, and there are a couple more I haven't gotten to try out yet. So far I've been working with remote ones, with whom I share my screen. If you'd like to try them out, I can DM you information and my sense of their various strengths.

The baseline benefit is just them asking "hey, are you working on what you mean to work on?" every 5 minutes. I think a thing I should do, but haven't yet, is have them be a bit more proactive in asking whether I've switched tasks (because sometimes it's hard to tell from my screen), and in nagging me a bit harder about "is this the right thing?" if I'm either switching a lot, or doing something that seems at odds with my stated goals for the day.

Sometimes I have them do various tasks that are easy to outsource, depending on their skills and what I need that day.

I have a Google Doc that I have them read in advance. It lays out my overall approach, and includes a journal for me (which I'm often taking notes in) and a journal for each assistant I work with, for their own notes. I think something like this is a good practice.

For reference, here's my intro:

Intro

There’s a lot of stuff I want done. I’m experimenting with hiring a lot of assistants to help me do it. 

My plans are very in-flux, so I prefer not to make major commitments, just hire people piecemeal to either do particular tasks for me, or sit with me and help me think when I’m having trouble focusing.

My working style is “We just dive right into it,” usually with a couple hours where I’m testing to see if we work well together. I explain things as we go. This can be a bit disorienting, but I’ve tried to write the important things in this doc, which you can read first. Over time I may give you more open-ended, autonomous tasks, if that ends up making sense.

Default norms

  • Say “checking in?” If it’s a good time to check in, I’ll say “ok”; if not, I’ll say “no.” If I don’t respond at all, wait 30-60 seconds and then ask again more forcefully (but still respect a “no”).
  • When it seems appropriate, paste metastrategies from the metastrategy tab into whatever area I’m currently working in.

For Metacognitive Assistants

Metacognitive Assistants sit with me and help me focus. Basic suggested workflow:

  • By default, just watch me work (coding/planning/writing/operations), and occasionally give signs you’re still attentive, without interrupting.
  • Make a tab in the Assistant Notes section. Write moment-to-moment observations that feel useful to you, as well as general thoughts. This helps you feel more proactively involved, and keeps you focused on noticing patterns and ways in which you could be more useful as an assistant.
  • The Journal tab is for my plans and thoughts about what to do in general. Read it as an overview.
  • This Context tab is for generally useful information about what you should do, and about relevant strategies and knowledge I have in mind. Reading it gives you a more comprehensive view of what my ideal workflow looks like, and what your ideal contributions look like.

Updating quickly

There’s a learning process for figuring out “when is it good to check whether Ray’s stuck?” vs “when is it bad to interrupt his thought process?”. It’s okay if you don’t get it perfectly right at first, but try “updating a lot, in both directions”: if something seemed like an unhelpful interruption, try speaking up half as often, or half as loudly; but then, if I later seem stuck, try checking in on me twice as often, or twice as loudly, until we settle into a good rhythm.
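
(To make “updating a lot, in both directions” concrete: it’s essentially a halving/doubling rule on the check-in cadence. Below is a minimal, purely illustrative Python sketch; the function name and feedback labels are hypothetical, not something from the doc.)

```python
# Illustrative sketch of the halving/doubling rule: treat the check-in
# interval as a knob and adjust it multiplicatively based on feedback.
def adjust_interval(interval_minutes: float, feedback: str) -> float:
    """Return a new check-in interval given feedback on the last check-in."""
    if feedback == "unhelpful interruption":
        return interval_minutes * 2  # check in half as often
    if feedback == "seemed stuck":
        return interval_minutes / 2  # check in twice as often
    return interval_minutes          # good rhythm: leave it alone

interval = 5.0  # start at the baseline 5-minute check-in
for feedback in ["unhelpful interruption", "seemed stuck", "good rhythm"]:
    interval = adjust_interval(interval, feedback)
    print(f"next check-in in {interval:g} minutes")
```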

The "10x" here was meant to refer more to how long it took him to figure it out than to how much better it was. I'm less sure how to quantify how much better.

I'm busy atm but will see if I can get a screenshot from an earlier draft.

Thanks! I'll keep this in mind both for potential rewrites here, and for future posts.

Curious how long this typically takes you?

Well, this is the saddest I've been since April 1st 2022.

It really sucks that SB 1047 didn't pass. I don't know if Anthropic could have gotten it passed if they had said "dudes, this is fucking important, pass it now" instead of "for some reason we should wait until things are already

It is nice that at least Anthropic did still get to show up to the table, and that they said anything at all. I sure wish their implied worldview didn't seem so crazy. (I really don't get how you can think it's workable to race here, even if you think Phase I alignment is easy; it also seems really wrong to think Phase I alignment is that likely to be easy.)

It feels like winning pathways right now mostly route through:

  • Some kind of miracle of Vibe Shift (ideally mediated through a miracle of Sanity). I think this needs masterwork-level communication / clarity / narrative setting.
  • Just... idk, somehow figure out how to just Solve The Hard Part Real Fast.
  • Somehow muddle through with scary demos that get a few key people to change their mind before it's too late.

You wouldn't guess it, but I have an idea...

...what.... what was your idea?

I don't know if I'd go as far as the OP, but I think you're being the most pro-social if you have a sense of the scale of other things worth doing that aren't in the news, and consciously check how the current News Thing fits into that scale of importance.

(There are a few different ways to think about importance, and this frame can be agnostic between them; i.e. it doesn't have to be "global utilitarian in the classical sense.")

FYI, I do currently think "learn when/how to use your subconscious to process things" is an important tool in the toolbox (I got advice about that from a mentor I went to talk to). Some of the classes of moves here are:

  • build up intuitions about when it is useful to background process things vs deliberate-process them
  • if your brain is sort of subconsciously wandering in a rut, use a small amount of agency to direct your thoughts in a new direction, but then let them wander once you get them rolling down the hill in that new direction

I feel less optimistic about the "forgetting something on the tip of your tongue" case, and pretty optimistic about the code-debugging one.
