Let’s do an experiment in “reverse crowdfunding”: I will pay 50 USD to anyone who suggests a new method of X-risk prevention that is not already mentioned in this roadmap. Post your ideas as comments to this post.
Should more than one person have the same idea, the award will be made to the person who posted it first.
The idea must be endorsed by me and included in the roadmap in order to qualify, and it must be new, rational and consistent with modern scientific data.
I may include you as a co-author in the roadmap (if you agree).
The roadmap is distributed under an open GNU license.
Payment will be made by PayPal. The total prize fund is 500 USD (10 prizes in total).
The competition is open until the end of 2015.
The roadmap can be downloaded as a pdf from:
http://immortality-roadmap.com/globriskeng.pdf
UPDATE: I have uploaded a new version of the map, with changes marked in blue.
Email: alexei.turchin@gmail.com
I am now working on a longer explanatory text, which will be 40-50 pages and will include links. Maybe I will also add the links inside the pdf.
I don't think I should go into all the details of decision theory and EA; I just put "rationality".
Picking potential world saviours, educating them, and giving them all our support seems like a good idea, but we probably don't have time. I will think more about it.
Planetary mining was a recent addition, addressed to people who think that Peak Oil and Peak Everything are the main risk. Personally, I don't believe space mining would be useful without nanotech.
The point about dates is really important. Maybe I should use vaguer dates, such as the beginning, middle, and second half of the 21st century? Is there another way to phrase them more vaguely?
I upvoted your post, and in general I think downvoting without explanation is not a good practice on LW.
"Pray" corrected.
Once the explanation text exists, linking to the appropriate section of it would probably be better than linking to primary sources directly; the explanation text would in turn link out to the primary sources.
Compressing this to "rationality" is reasonable, though most readers would not understand it at a glance. If you're trying to keep the map very streamlined, having this node be a set of pointers makes sense, though perhaps alongside rationality it would be good to have a pointer more clearly directed at making "wanting to fix the future" something widely accepted.