Gram_Stone comments on Stupid Questions May 2015 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
There is a not necessarily large, but definitely significant, chance that developing machine intelligence compatible with human values is the single most important thing humans have ever done or will ever do, and it seems very likely that economic forces will bring about strong machine intelligence soon, even if we're not ready for it.
So I have two questions about this. Firstly, and this is probably my youthful inexperience talking (a big part of why I'm posting this here): I see so many rationalists doing awesome work on things like social justice, social work, medicine, and all kinds of poverty-focused effective altruism. But how can it be that the ultimate fate of humanity, to either thrive beyond imagination or perish utterly, may rest on our actions in this century, and yet people who recognize this possibility don't do everything they can to make it go the way we need it to? This segues into my second question: what is the most any person, more specifically I, can do for FAI? I'm still in high school, so there really isn't much keeping me from devoting my life to helping make sure AI is friendly. What would that look like? I'm a village idiot by LW standards, and especially bad at math, so I don't think I'd be very useful on the "front lines," so to speak, but perhaps I could try to make a lot of money and do FAI-focused EA? I might be more socially oriented and socially capable than many here; perhaps I could try to raise awareness or lobby for legislation?
To elaborate on existing comments: a fourth alternative to FAI theory, earning to give, and popularization is strategy research. (That could include research on other risks besides AI.) I find that the fruit in this area is not merely low-hanging but rotting on the ground. I've read in old comment threads that Eliezer and Carl Shulman in particular have done a lot of thinking about strategy, but very little of it has been written down, and they are very busy people. Circumstances may well dictate retracing a lot of their steps.
You've said elsewhere that you have a low estimate of your innate mathematical ability, which would preclude FAI research, but strategy research presumably requires less mathematical aptitude. Skills like statistics would be invaluable, but strategy research also involves a lot of comparatively less technical work: historical and philosophical analysis, experiments and surveys, literature reviews, lots and lots of reading, and so on. Also, you've already done a bit of strategizing; if you are fulfilled by thinking about these things and believe your abilities are equal to the task, it might be a good alternative.
Some strategy research resources:
Luke Muehlhauser's How to study superintelligence strategy.
Luke's AI Risk and Opportunity: A Strategic Analysis sequence.
Analysis Archives of the MIRI blog.
The AI Impacts blog, particularly the Possible Empirical Investigations post and links therein.
The Future of Life Institute's A survey of research questions for robust and beneficial AI.
Naturally, Bostrom's Superintelligence.
Thanks for taking the time to put all that together! I'll keep it in mind.