I think there's not much more that most individuals can do about x-risk as a full-time pursuit than we can as aware and interested civilians.
I also think that unfriendly AI Foom is a small part of the disaster space, compared to the current volume of unfriendly natural intelligence we face. An increase in the destructive power of small (or not-so-small) groups of humans seems 20-1000x more likely to filter us (and I lean toward the higher end of that range) than a single AI entity, or a small number of them, becoming powerful enough to do so.
Do you get the impression that Japan has numerous benevolent and talented researchers who could and would contribute meaningfully to AI safety work? If so, it seems possible to me that your comparative advantage lies in evangelism rather than research (subject to the constraint that you're staying in Japan indefinitely). If you're able to send multiple qualified Japanese researchers west, that's potentially more impact than you'd have as an individual researcher.
You'd still want to have thorough knowledge of the issues yourself, if only to convince Japanese researchers that the problems were interesting.
Why should I send them west? Hopefully so that they learn, come back, and produce researcher offspring? I'll see what I can do. For a start: nag my supervisor to take me to domestic conferences…