Hello LessWrong crew,

I am familiar with the ideas of Effective Altruism, as I have read the 80,000 Hours career guide. I think it is a great guide, and it definitely gave me a new perspective on how I view my career.

A bit of my background: 

I have a master's degree in computer science. I am currently working remotely as a machine learning engineer.

Here is a list of the things that I am looking for in my career, ordered from most important to least important:

  1. Remote work
  2. High salary
  3. Impact

Maybe I'm not the paragon of Effective Altruism values, but if I'm being honest, I value remote work and a high salary more than impact. Impact comes in third, but it is still a factor.

Now onto my question:

A few years ago I read Superintelligence and got scared that AGI might make humanity go extinct. I then started focusing on machine learning, and after graduating I ended up in the machine learning engineering role I hold today.

Recently, however, I began questioning whether what I'm doing is the right thing impact-wise. I believe blockchain is a great technology as well (even though we are in a bubble right now). Fundamentally, I think blockchain is going to bring "power to the people", and I think that's great. It has its weaknesses now, sure, but over time I think they'll get ironed out.

Here are my top three reasons why I think I should switch to blockchain:

  1. Given my strong remote work preference, I don't think I will have any impact on anything AI-safety related. The main discoveries are being made at companies such as OpenAI and DeepMind, and those roles all require going to the office. Since I don't want to go to the office (my remote work preference outranks my impact preference), I don't think I will be part of a team that reaches a fundamental breakthrough. With blockchain, on the other hand, most jobs are remote, so I could contribute more.
  2. I am not 100% convinced that AGI is an existential risk. There are some indications that it is (such as this one), but it may very well be that worrying about AGI safety (in the sense of an existential risk to all of humanity) is like worrying that aliens will come and destroy Earth. I am not denying the problems with current AI systems; what I am saying is that I don't see a clear path to AGI, and I think there's a lot of hand-waving in discussions of AGI safety at this point in time.
  3. One could argue that I should keep doing machine learning engineering jobs and wait for AI safety jobs to become remote, at which point I would work on making some AI system safe. The problem with this perspective: I'm not sure when remote AI safety jobs will exist. What if there's no fundamental breakthrough in AI for another 30-40 years, and I spend that time on non-AI-safety remote machine learning jobs to "keep my skills sharp in case they're needed", only to find myself never using them on actual AI safety problems?

Fundamentally, the only reason I'm interested in AI is AGI safety. Right now I'm not sure that AGI safety is a real existential threat, and even if it is, given my remote work preference I will probably have little to no impact on it. Blockchain, on the other hand, is already changing (and will most likely continue to change) the way we use the internet, and it is much more remote-friendly.

What are your 2 cents? I'd like to bounce my perspective off others to see if I'm missing anything in my train of thought.

P.S. I cross-posted this on the Effective Altruism Forum to get multiple perspectives.

Viliam

I don't really see how blockchain is relevant to your stated motivations. If AGI makes humanity extinct, blockchain giving "power to the people" is irrelevant. And if your highest preference is remote work, then you simply need to survey which type of job is more likely to allow you to work remotely.

(Tangentially, I don't see how blockchain privileges people over e.g. state actors, considering that governments can use cryptocurrency just as well. Also, a government can take away your cryptocurrency trivially by keeping you in prison until you provide the private key to your wallet.)

Given that you don't think you can make any difference in AI safety... it seems to me that you should apply to both machine learning and blockchain jobs, insist on 100% remote work, and choose the one that accepts you and offers the highest salary. (Then you can donate some amount to an organization that you believe has the highest chance of making an actual difference in AI safety.)

tl;dr - follow your preferences