I was initially going to title this post "Should I try to become an AI researcher and, if so, how?" but see Hold Off On Proposing Solutions. So instead, I'm going to ask people to give me as much relevant information as possible. The rest of this post will be a dump of what I've figured out so far, so people can read it and try to figure out what I might be missing.
If you yourself are trying to make this decision, some of what I say about myself may apply to you. Hopefully, some of the comments on this post will also be generally applicable.
Oh, and if you can think of any bias-avoiding advice that's relevant here, along the lines of holding off on proposing solutions, that would be most helpful.
Though I'm really hard to offend in general, I've made a conscious decision to operate by Crocker's Rules in this thread.
One possibility that's crossed my mind for getting involved is going back to graduate school in philosophy to study under Bostrom or Chalmers. But I really have no idea what the other possible routes for me are, and I ought to know about them before making a decision.
~~~~~
Now for the big dump of background info. Feel free to skip and just respond based on what's above the squigglies.
I seem to be good at a lot of different things (not necessarily everything), but I'm especially good at math. My SAT was 800 math, 790 verbal; my GRE was 800 on both. In both cases, though, I studied for the test, and getting my verbal score up was much harder work. However, I know there are plenty of people who are much better at math than I am. In high school, I was one of the very few students from my city to qualify for the American Regions Mathematics League (ARML) competition, but I did not do especially well there.
Going to ARML persuaded me that I was probably not quite smart enough to be a world-class anything. I entered college as a biochemistry major, with the idea that I would go to medical school and then join an organization like Doctors Without Borders to do as much good as I could, even if I wasn't a world-class anything. I did know at the time that I was better at math than biology, but I hadn't yet read Peter Unger's Living High and Letting Die, so "figure out the best way to convert math aptitude into dollars and donate what you don't need to charity" wasn't a strategy I even considered.
After getting mostly B's in biology and organic chemistry my sophomore year, I decided maybe I wasn't well-suited for medical school and began looking for something else to do with my life. To this day, I'm genuinely unsure why I didn't do better in biology. Did the better students in my classes have some aptitude I lacked? Or was it that being really good at math made me lazy about things that are inherently time-consuming to study, like (possibly) anatomy?
I took a couple of neuroscience classes junior year and considered a career in the subject, but eventually ended up settling on philosophy, silencing some inner doubts I had about philosophy as a field. I applied to grad school in philosophy at over a dozen programs and was accepted into exactly one: the University of Notre Dame. I accepted, which was in retrospect the first- or second-stupidest decision I've made in my life.
Why was it a stupid decision? To give only three of the reasons: (1) Notre Dame is a department where evangelical Christian anti-evolutionists like Alvin Plantinga are given high status; (2) it was weak in philosophy of mind, which is what I really wanted to study; and (3) I was suppressing what were, in retrospect, legitimate doubts about academic philosophy, because once I had made the decision to go, I had to make it sound as good as possible to myself.
Why did I do it? I'm not entirely sure, and I'd like to better understand this mistake so as not to make a similar one again. Possible contributing factors: (1) I didn't want to admit to myself that I didn't know what I was doing with my life; (2) I had an irrational belief that if I said "no" I'd never get another opportunity like that again; (3) my mom and dad went straight from undergrad to graduate school in biochemistry and dental school, respectively, and I was using that as a model for what my life should look like without really questioning it; and (4) Notre Dame initially waitlisted me and then, when they finally accepted me, gave me very little time to decide whether or not to accept, which probably unintentionally invoked one or two effects straight out of Cialdini.
So a couple years later, I dropped out of the program and now I'm working a not-especially-challenging, not-especially-exciting, not-especially-well-paying job while I figure out what I should do next.
My main reason for now being interested in AI is that, through several years of reading LW/OB and the formal publications of the people who are popular around here, I've become persuaded that even if specific theses endorsed by the Singularity Institute are wrong, potentially world-changing AI is close enough to be worth thinking about seriously.
It helps that it fits with interests I've had for a long time in cognitive science and philosophy of mind. I think I actually was interested in the idea of being an AI researcher some time around middle school, but by the time I was entering college I had gotten the impression that human-like AI was about as likely in the near future as FTL travel.
The other broad life-plan I'm considering is the thing I should have considered going into college: "figure out the best way to convert math aptitude into dollars and donate what you don't need to charity." One sub-option is to look into computer programming, as suggested in the HPMOR author's notes a month or two ago. My dad thinks I should take some more stats courses and go for work as an analyst for some big eastern firm. And there are very likely options in this area that I'm missing.
~~~~~
I think that covers most of the relevant information I have. Now, what am I missing?
Note that FAI research, AI research, and AGI research are three very different things.
Currently, FAI research is conducted solely by the Singularity Institute and researchers associated with them. Looking at SI's publications over the last few years, the FAI research program has more in common with philosophy and machine ethics than programming.
More traditional AI research, which is largely conducted at universities, consists mostly of programming and math, aimed at solving specific problems. For the most part, AI researchers aren't trying to build general intelligence of the kind discussed on LW, and a lot of AI work is split into sub-fields like machine learning, planning, natural language processing, etc. (I'm an undergraduate intern on an AI research project at a US university; feel free to PM me with questions.)
AGI research mostly consists of stuff like OpenCog or Numenta, i.e. near-term projects that attempt to create general intelligence.
It's also worth remembering that AGI research isn't taken very seriously by some (most?) AI researchers, and the notion of FAI isn't even on their radar.
This is helpful, and suggests that "learn programming" is useful preparation both for work on AI and for simply converting math ability into $.
One thing it seems to leave out, though, is the stuff Nick Bostrom has done on AI, which isn't strictly about FAI, though it is related. Perhaps we need to add a category of "general strategic thinking on how to navigate the coming of AI."
I should learn more about AGI projects. My initial guess is that near-term projects are hopeless, but in their "Intelligence Explosion" paper Luke and Anna express the view that a couple of AGI projects have a chance of succeeding relatively soon. I should know more about that. Where should I begin learning about AGI?