steven0461 comments on 2011 Survey Results - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Below is my attempt to re-do the calculations that led to that conclusion (this time, it's 4%).
My component estimates:

- FAI before WBE: 3%
- Surviving to WBE: 60% (I assume cryonics revival is feasible mostly only after WBE)
- Given WBE, cryonics revival (actually happening for a significant portion of cryonauts) before catastrophe or FAI: 10%
- FAI given WBE (but before cryonics revival): 2%
- Heads preserved long enough (given no catastrophe): 50%
- Heads (equivalently, living humans) mattering/being useful to FAI: less than 50%
In total: 60% × 10% = 6% for post-WBE revival potential, and 3% + 60% × 2% ≈ 4% for FAI revival potential. Discounting both paths by the 50% preservation probability, and the FAI path additionally by the 50% mattering-to-FAI probability, gives 6% × 50% + 4% × 50% × 50% ≈ 4%.
(By "humans useful to FAI", I don't mean that specific people should be discarded, but that the difference in the utility of the future between the case where a given human is initially present and the case where they are lost is significantly smaller than the moral value of a current human life, so that it might be better to keep them than not, but not that much better, for fungibility reasons.)
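The arithmetic above can be sketched as a short Python snippet (note: the grouping of the two discounts, with the mattering-to-FAI factor applied only to the FAI path, is my reading of how the stated totals come out to 4%):

```python
# Component estimates from the comment above.
fai_before_wbe = 0.03       # FAI before WBE
survive_to_wbe = 0.60       # surviving to WBE
revival_given_wbe = 0.10    # cryonics revival given WBE, before catastrophe or FAI
fai_given_wbe = 0.02        # FAI given WBE, before cryonics revival
preserved = 0.50            # heads preserved long enough
mattering = 0.50            # upper bound on humans mattering/being useful to FAI

# Two revival paths.
post_wbe_revival = survive_to_wbe * revival_given_wbe            # 0.06
fai_revival = fai_before_wbe + survive_to_wbe * fai_given_wbe    # 0.042, ~4%

# Both paths need preservation; only the FAI path is discounted by "mattering".
total = post_wbe_revival * preserved + fai_revival * preserved * mattering

print(f"{total:.4f}")  # 0.0405, i.e. roughly 4%
```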
I'm not sure how to interpret the uploads-after-WBE-but-not-FAI scenario. Does that mean FAI never gets invented, possibly in a Hansonian world of eternally competing ems?
If you're referring to "cryonics revival before catastrophe or FAI", I mean that catastrophe or FAI could happen (shortly) afterward; a future with neither catastrophe nor superintelligence seems very unlikely. I expect catastrophe is very likely after WBE, which also accounts for most of the probability of revival not happening after WBE. After WBE, more advanced technology argues for a lower FAI-to-catastrophe ratio, while more mature FAI theory argues otherwise.
So the 6% above is where cryonauts get revived by WBE, and then die in a catastrophe anyway?
Yes. Still, if implemented as WBEs, they could live for a significant subjective time, and then there's still that 2% chance of FAI.
In total, you're assigning about a 4% chance of a catastrophe never happening, right? That seems low compared to most people, even most people "in the know". Do you have any thoughts on what is causing the difference?
I expect that "no catastrophe" is almost the same as "eventually, FAI is built". I don't expect a non-superintelligent singleton that prevents most risks (so that it can build a FAI eventually). Whenever FAI is feasible, I expect UFAI is feasible too, but easier, and so more likely to come first in that case; UFAI is also possible when FAI is not yet feasible (the theory isn't ready). In physical time, WBE sets a soft deadline on catastrophe or superintelligence, making either happen sooner.