Carson Jones

Thanks for your thoughts here!

So I do have some familiarity with the concept of deliberate practice, and I would definitely include it in what I'm talking about above. But I can also think of things that might improve a researcher's capacity that don't fall under deliberate practice:

1. One researcher told me their work environment was causing frequent interruptions to their focused working time, so they made some adjustments to prevent that. I wouldn't call that deliberate practice, but it does seem like a big improvement.

2. Motivation/procrastination. This is probably the single biggest complaint I've heard from researchers. If they do find a solution to this, it probably won't fall under "deliberate practice". It will more likely look like creating incentives for themselves, or introspecting on their own preferences and motivations, or creating accountability mechanisms, etc.

If there are any alignment researchers reading this who think they would benefit from having someone to talk to about improving their research capacity, I’m happy to be that person.

I’m offering free debugging-style conversations about improving research capacity to any alignment researchers who want them. Here’s my Calendly link if you’d like to grab time on my calendar: https://calendly.com/dcjones15/60min.

I’m not claiming to have any answers or ready-made solutions. I primarily add value by asking questions that elicit your own thoughts and help you come up with your own improvement plans, tailored to your specific needs. A number of researchers have told me these conversations are productive for them, so the same may be true for you.


 

I'm not new to reading LessWrong, but I am new to posting or commenting here. I plan to be more active in the future. I care about the cause of AI Alignment, and am currently in the process of shifting my career from low-level operations work at MIRI to something I think may be more impactful: supporting alignment researchers in their efforts to level up in research effectiveness, by offering myself as a conversational partner to help them think through their own up-leveling plans.

In that spirit, here's an offer I'd like to make to any interested alignment researchers who come across this comment.

The Offer
Free debugging-style conversations (could be just one, or recurring) aimed at helping you become a more effective researcher. To sign up, grab time on my calendar via my Calendly link: https://calendly.com/dcjones15/60min

Questions you may have:

What would the conversation look like?

  • I’ll mostly ask good questions to elicit your own thoughts and ideas.
  • I’ll help you get unstuck if you feel confused about or averse to the subject.
  • I’ll make the occasional suggestion.

Who am I, and why might I be a good person to talk to about this?

  • I’ve been doing low-level operations work for MIRI for the past four years.
  • I’m not a researcher, but I have thought a lot about improving my own intellectual processes over the years, and have had some good results with that.
  • The few of these conversations I’ve had so far seemed productive, so the same may be true for you.

Why?

  • Alignment researchers from various organizations have told me they don’t invest in leveling up as much as they endorse, and that when they try to level up, it’s aversive or difficult. I suspect just having someone to discuss it with can help a lot.
  • I really enjoy conversations like this, and am hoping to one day be good enough at them to get paid to do it. So, I need lots of practice!


 

These monthly threads and Stampy sound like they'll be great resources for learning about alignment research.

I'd like to know about as many resources as possible for supporting and guiding my own alignment research self-study process. (And by resources, I don't just mean more stuff to read; I mean organizations or individuals you can talk to for guidance on how to move forward in your self-education.)

Could someone link to a page that gathers all such resources in one place?
I've already seen the Stampy answer to "Where Can I Learn About AI Alignment?". Is that pretty comprehensive, or are there many more resources?