After some conversations here, I thought I would try to find out what the community of people who care about AI risk think the research priorities are.
To represent people's opinions fairly, I wanted to get input from people who care about the future of intelligence. I also figure that other people have more experience designing and analyzing surveys than I do, so getting their help or advice would be a good plan.
Planning document
Here is the planning document, give me a shout if you want edit rights. I'll be filling in the areas for research over the next week or so.
I'll set up a Trello board if a few people are interested.
Do you have a short write-up somewhere about what you want to do and why other people should help you?
I want to gather information about what people care about within AI risk. Other people should help me if they also want that information gathered.