Let's do an experiment in "reverse crowdfunding". I will pay 50 USD to anyone who can suggest a new method of X-risk prevention that is not already mentioned in this roadmap. Post your ideas as a comment to this post.
Should more than one person have the same idea, the award will be made to the person who posted it first.
To qualify, the idea must be endorsed by me and included in the roadmap, and it must be new, rational, and consistent with modern scientific data.
I may include you as a co-author in the roadmap (if you agree).
The roadmap is distributed under an open GNU license.
Payment will be made via PayPal. The total prize fund is 500 USD (10 prizes in total).
The competition is open until the end of 2015.
The roadmap can be downloaded as a pdf from:
UPDATE: I have uploaded a new version of the map, with changes marked in blue.
http://immortality-roadmap.com/globriskeng.pdf
Email: alexei.turchin@gmail.com
Once the explanation text exists, linking to the appropriate section of it (which would in turn link out to primary sources) would probably be better than linking to primary sources directly.
Compressing to "rationality" is reasonable, though most readers would not understand at a glance. If you're trying to keep it very streamlined just having a this as a lot of pointers makes sense, though perhaps alongside rationality it'd be good to have a pointer that's more clearly directed at "make wanting to fix the future a thing which is widely accepted", rather than rationality's normal meanings as being effective. I'd also think it more appropriate for the A3 stream than A2, for what I have in mind at least.
I'd think creating world saviors from scratch would not be a viable option on some AI timelines. However, getting good at identifying promising people in (or leaving) university who have the right ethical streak, and placing them in a network full of EA/X-risk-reduction memes, could plausibly turn "a smart person who will probably get a good job at some mildly evil corporation" into "a person dedicated to fixing major problems, or earning to give to fund interventions, or working towards top jobs to gain leverage to fix things from the inside" on the order of months, with an acceptable success rate. Even a few percent of participants changing their life trajectory would more than repay the investment of running such a network in terms of X-risk reduction.
Perhaps classifying things in terms of what should be the focus right now versus what needs more steps before it becomes a viable project would be more useful than attempting to give dates at all. Vague dates are better than precise ones, but on reflection I'm not sure even wide ranges really solve the problem, since our ability to forecast several very important things is highly limited. I'm not sure of a good set of labels for this, but perhaps something like:
And thank you. I tend to take downvotes as very strong negative reinforcement; it helps that you find my post somewhat useful.
Thank you for the inspiring comment. Yes, anonymous downvoting makes me feel as if I have a secret enemy in the woods(( The idea of creating "world saviours" from bright students is more realistic, and effective altruists and LW have already done a lot in this direction. Rationality also needs to be elaborated, and the suggestion about classifying by dates is inspiring.