I am actively looking for a tutor/advisor with expertise in AI existential risk (X-risk), with the primary goal of collaboratively determining the most effective ways I can contribute to reducing it.

Tutoring Goals

I suspect that I misunderstand key components of the mental models that lead some highly rational and intelligent individuals to assign a probability of AI-related existential catastrophe ("p-doom") greater than 50%. By gaining a clearer understanding of these models, I aim to refine my thinking and make better-informed decisions about how to meaningfully reduce AI X-risk.

Specifically, I want to delve deeper into why and how misaligned AGI might be developed, and why it wouldn’t be straightforward to solve alignment before it becomes a critical issue.

To clarify, I do NOT believe we could contain or control a misaligned AGI with current safety practices. What I do find likely is that we will be able to avoid such a situation altogether.

In addition to improving my understanding of AI X-risk, I also want to explore strategies for reducing it that I could help implement.

About Me

- My primary motivation is effective altruism, and I believe that mitigating AI X-risk is the most important cause to work on.
- I have 7 years of experience working with machine learning, with a focus on large language models (LLMs), and possess strong technical knowledge of the field.
- My current p-doom estimate is 25%: my own model gives about 5%, but I adjust upward since some highly rational thinkers predict significantly higher p-doom. Even if my p-doom were 1%, I would still view AI X-risk as the most pressing issue and dedicate my time to it.
 
Why Become My Tutor?

- You will be directly contributing to AI safety/alignment efforts, working with someone highly committed to making an impact.
- Opportunity for **highly technical 1-on-1 discussions** about the cutting edge in AI alignment and X-risk reduction strategies.
- Compensation: $100–150 per hour (negotiable depending on your experience).

Ideal Qualifications

- Deep familiarity with AI existential risks and contemporary discussions surrounding AGI misalignment.
- A genuine interest in refining mental models related to AI X-risk and collaborating on solutions.
- A p-doom estimate above 25%, since I aim to understand high p-doom perspectives.
- Strong interpersonal compatibility: It’s crucial that we both find these discussions rewarding and intellectually stimulating.

Structure & Logistics

- Weekly one-hour meetings focused on deep discussions of AI X-risk, strategic interventions, and mental model refinement.
- Flexible arrangement: you can invoice my company for the tutoring services.

How to Apply

If this opportunity sounds appealing to you, or if you know someone who may be a good fit, please DM me here on LessWrong.

Comments

FWIW I think this would be a lot less like "tutoring" and a lot more like "paying people to tell you their opinions". Which is a fine thing to want to do, but I just want to make sure you don't think there's any kind of objective curriculum that comprises AI alignment.

Hmm, a bit confused what this means. There is I think a relatively large set of skills and declarative knowledge that is pretty verifiable and objective and associated with AI Alignment. 

It is the case that there is no consensus on what solutions to the AI Alignment problem might look like, but I think the basic arguments for why this is a thing to be concerned about are pretty straightforward and are associated with some pretty objective arguments.

There are a lot of detailed arguments for why alignment is going to be more or less difficult. Understanding all of those arguments, starting with the most respected, is a curriculum. Just pulling a number out of your own limited perspective is a whole different thing.