Let's do an experiment in "reverse crowdfunding": I will pay 50 USD to anyone who suggests a new way of X-risk prevention that is not already mentioned in this roadmap. Post your ideas as a comment on this post.
Should more than one person have the same idea, the award will be made to the person who posted it first.
To qualify, the idea must be endorsed by me and included in the roadmap, and it must be new, rational, and consistent with modern scientific data.
I may include you as a co-author in the roadmap (if you agree).
The roadmap is distributed under an open GNU license.
Payment will be made by PayPal. The total prize fund is 500 USD (10 prizes in total).
The competition is open until the end of 2015.
The roadmap can be downloaded as a PDF from:
http://immortality-roadmap.com/globriskeng.pdf
UPDATE: I have uploaded a new version of the map, with changes marked in blue.
Email: alexei.turchin@gmail.com
Comprehensive; I think it has the makings of a good resource, though it needs some polish. It would be much more useful to someone new to the ideas presented if most bullet points linked out to papers or pages for further reading.
One thing I'd like to see added is spreading the memes of reason- and evidence-based consequentialist decision making (particularly large-scale and future-inclusive) at all levels. It may be entirely accepted here, but the vast majority of humans don't actually think that way. It's something of a prerequisite for getting much momentum behind the other, more direct goals you've laid out.
In a few places, particularly in A1, you drift into general "things that would be good/cool" rather than staying focused on things applicable to countering an extinction risk. Maybe there is a link I'm missing, but other than bringing more resources, I'm not sure what risk "planetary mining", for example, helps counter.
I'd advise against giving dates. AI timelines in particular could plausibly be much quicker or much slower than your suggestions, and that would have massive knock-on effects. False confidence on specifics is not a good impression to give; maybe generalize them a bit?
"Negotiation with the simulators or prey for help"
pray?
I am now working on a long explanatory text, which will be 40-50 pages and will include links. Maybe I will also add the links inside the PDF.
I don't think I should go into all the details of decision theory and EA; I just put "rationality".
Picking potential world saviours, educating them, and providing all our support seems to be a good idea, but we probably don't have time. I will think more about it.
Planetary mining was a recent addition, addressed to people who think that Peak Oil and Peak Everything is the main risk. Personally I don't b... (read more)