Previously: round 1, round 2, round 3
From the original thread:
This is for anyone in the LessWrong community who has made at least some effort to read the sequences and follow along, but is still confused on some point, and is perhaps feeling a bit embarrassed. Here, newbies and not-so-newbies are free to ask very basic but still relevant questions with the understanding that the answers are probably somewhere in the sequences. Similarly, LessWrong tends to presume a rather high threshold for understanding science and technology. Relevant questions in those areas are welcome as well. Anyone who chooses to respond should respectfully guide the questioner to a helpful resource, and questioners should be appropriately grateful. Good faith should be presumed on both sides, unless and until it is shown to be absent. If a questioner is not sure whether a question is relevant, they should ask it, and also ask whether it's relevant.
Ask away!
"Yes" in the sense that people are aware of the argument, which goes back at least as far as Vernor Vinge, 1993, but "no" in the sense that there are also arguments that it may not be highly unlikely that a failed attempt at FAI will be worse than extinction (especially since some of the FAI proposals, such as Paul Christiano's, are actually very closely related to uploading), and also "no" in the sense that we don't know how to take into account considerations like this one except by using our intuitive judgments which seem extremely unreliable.
The non-negligible chance of waking up to a personal hell-world (including partial or failed revivification) is the main non-akratic reason I'm not signed up for cryonics. I currently think AGI is coming sooner than WBE, but if WBE starts pulling ahead, I would be even more disinclined to sign up for cryonics.
Wei, do you know of any arguments better than XiXiDu's that a failed attempt at FAI could very well be worse than extinction?