After some conversations here, I thought I would try to find out what the community of people who care about AI risk considers the priorities for research.
To represent people's opinions fairly, I wanted input from those who care about the future of intelligence. I also figure that others will have more experience designing and analyzing surveys than I do, so getting their help or advice seemed like a good plan.
Planning document
Here is the planning document; give me a shout if you want edit rights. I'll be filling in the areas for research over the next week or so.
I'll set up a Trello board if a few people are interested.
None of your survey choices seemed to fit me. I am concerned about, and somewhat interested in, AI risks. However, I would currently like to see more effort put into cryonics and reversing aging.
To be clear, I don't want to reduce the effort and resources currently devoted to AI risks. I just think they are overweighted relative to cryonics and age reversal, and I would like to see any additional resources go to those areas until a better balance is achieved.