That's awesome that you're looking to work on AI safety. Here are some options I don't see you mentioning:
If you're able to get a job working on AI or machine learning, you'll be getting paid to improve your skills in that area. So you might choose to direct your study and independent projects towards building a resume for AI work (e.g. by participating in Kaggle competitions).
If you get into the right graduate program, you'll be able to take classes and do research into AI and ML topics.
It would probably be quite difficult, but if you're able to create an app that uses AI or machine learning to make money, you'd fulfill both goals at once: earning money and studying AI. For example, you could earn money through this stock market prediction competition.
80,000 Hours has a guide on using your career to work on AI risk.
MIRI has put together a research guide covering the background necessary for AI safety work. (Note that if MIRI is correct, your understanding of math may matter much more than your understanding of AI for doing AI safety research, which would make the plans I suggested above look less attractive. The best path might be to aim for a job doing AI work, and then, once you have it, start studying math relevant to AI safety part time.)
BTW, the x-risk career network is also a good place to ask questions like this. (Folks on that mailing list are probably better qualified than I am to answer this question, but they don't browse LW that often.)
Thanks for your varied suggestions!
Actually, I'm somewhat more comfortable with MIRI-style math than with ML math, but the research group here is more interested in machine learning. If I suggested they look into provability logic, they would get big eyes and say "Whoa!", but nothing more. If, however, I do ML research in the direction of AI safety, they get interested. (And they are getting interested, but (1) they can't switch their research focus too quickly, and (2) I don't know enough Japanese and the students don't know enough English for any kind of lunchtime or hallway conversation about AI safety.)
(I'm re-posting my question from the Welcome thread, because nobody answered there.)
I care about the current and future state of humanity, so I think it's good to work on existential or global catastrophic risk. Since I studied computer science at a university until last year, I decided to work on AI safety. Currently I'm a research student at Kagoshima University doing exactly that. Before April this year I had only a little experience with AI or ML, so I'm slowly digging through books and articles in order to become able to do research.
I'm living off my savings. My research student time will end in March 2017 and my savings will run out some time after that. Nevertheless, I want to continue AI safety research, or at least work on X or GC risk.
I see three ways of doing this:
Oh, and I need to be location-independent or based in Kagoshima.
I know http://futureoflife.org/job-postings/, but all of the job postings fail me in two ways: they are not location-independent, and they require more or different experience than I have.
Can anyone here help me? If yes, I would be happy to provide more information about myself.
(Note that I don't think I'm in a precarious situation, because I would be able to get a remote software development job fairly easily. Just not in AI safety or X or GC risk.)