Cross-posted from the EA Forum. I'm not sure how many people on LessWrong will find this useful, but I imagine some will.
I've had calls with >30 people who are interested in things like testing their fit for EA-aligned research careers, writing on the EA Forum, "getting up to speed" on areas of effective altruism, etc. (This is usually during EA conferences.)
I gradually collected a set of links and notes that I felt that many such people would benefit from seeing, then turned that into a Google Doc. Many people told me they found that doc useful, so I'm now (a) sharing it as a public post, and (b) still entertaining the hypothesis that those people were all just vicious liars and sycophants, of course.
Disclaimers
Not all of these links/notes will be relevant to any given person
These links/notes are most relevant to people interested in (1) research roles, (2) roles at explicitly EA organisations, and/or (3) longtermism
But this is just because that's what I know best
There are of course many important roles that aren’t about research or aren’t at EA orgs!
And I'm happy with many EAs prioritising cause areas other than longtermism
But, in any case, some of the links/notes will also be relevant to other people and pathways
This doc mentions some orgs I work for or have worked for previously, but the opinions expressed here are my own, and I wrote the post (and the doc it evolved from) in a personal capacity
Regarding writing, the Forum, etc.
This has great writing tips that definitely apply on the Forum, and ideally would apply everywhere, but unfortunately they don't perfectly align with the norms in some areas/fields
Sometimes people worry that a post idea might be missing some obvious, core insight, or just be replicating some other writing they haven't come across. I think this is a problem mainly inasmuch as it might have been more efficient to learn those things some other way than by slowly crafting the post.
So if you can write (a rough version of) the post quickly, you could just do that.
Or you could ask around, or make a quick Question post outlining the basic idea and asking whether anyone knows of relevant things you should read.
Research ideas
Programs, approaches, or tips for testing fit for (longtermism-related) research
Not all of these things are necessarily "open" right now.
Here are things I would describe as Research Training Programs (in alphabetical order to avoid picking favourites):
Note: I know less about what the opportunities at the Center for Reducing Suffering and the Nonlinear Fund would be like than I know about what the other opportunities would be like, so I'm not necessarily able to personally endorse those two opportunities.
Here are some other things:
Getting “up to speed” on EA, longtermism, x-risks, etc.
Other
One key piece of career advice I want to highlight is that people often apply to too few things, make too few ambitious applications, and/or make too few "safe" applications
I think it's generally good to apply to a lot of things, including both ambitious and safe options
(But of course, any piece of general career advice will have some exceptions)
I'd welcome comments suggesting other relevant links, or just sharing people's own thoughts on any of the topics addressed above!