
Will_Newsome comments on Best shot at immortality? - Less Wrong Discussion

4 Post author: tomme 22 March 2012 10:29AM



Comment author: Will_Newsome 22 March 2012 03:05:14PM * 0 points

Wait a second, your objection doesn't really counter my point strongly, right? 'Cuz the author of the post wanted to maximize immortality, so saying that the FAI would have better things to do with its time would imply that the FAI wasn't applying the reversal test when it comes to keeping current humans alive. It seems the FAI should either kill the living and replace them with something better, or revive the dead; otherwise it's being inconsistent. (Not necessarily, but still.) Also, if it doesn't resurrect those in graves or urns then it's not gonna resurrect cryonauts either, so cryonics is out. And your "rescue sim" argument doesn't seem strong: rescue sims might not be considered as good as running simulations of people who had died, given the high opportunity cost. So not being in a rescue sim could just mean the FAI had better things to do, e.g. running simulations of previously-dead people in heaven or whatever. Am I missing something?

Comment author: cousin_it 22 March 2012 04:05:07PM * 2 points

> Also, if it doesn't resurrect those in graves or urns then it's not gonna resurrect cryonauts either, so cryonics is out.

Why? If the FAI is weak enough, it might be unable to resurrect non-cryonauts. Also, maybe there will be no AIs and an asteroid will kill us all in 200 years, but we'll figure out how to thaw cryonauts in 100 years, so they get some bonus years.

Comment author: moridinamael 22 March 2012 04:39:55PM 3 points

I don't think it's a matter of an intelligence being strong or weak. I'm relatively confident that the inverse problem of computing the structure of a human brain given a rough history of the activities of the human as input is so woefully underconstrained and nonunique as to be impossible. If you're familiar with inversion in general, you can look at countless examples where robust Bayesian models fail to yield anything but the grossest approximations even with rich multivariate data to match.

Unless you're conjecturing FAI powers so advanced that the modern understanding of information theory doesn't apply, or unless I'm missing the point entirely.
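The non-uniqueness being described can be sketched with a toy linear forward model (purely hypothetical, not anything from the comment): when there are fewer observations than unknowns, the forward map has a null space, so very different underlying states produce exactly the same data and no inversion can distinguish them.

```python
import numpy as np

# Toy forward model: 2 observations of 4 unknowns (underdetermined).
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])

x1 = np.array([1.0, 2.0, 3.0, 4.0])

# Shift x1 along a null-space direction of A (A @ null_step == 0),
# so the observations are completely unchanged.
null_step = np.array([1.0, 0.0, -1.0, 0.0])
x2 = x1 + 5.0 * null_step

print(np.allclose(A @ x1, A @ x2))  # True: identical data from both states
print(np.allclose(x1, x2))          # False: the states themselves differ
```

No amount of inference machinery applied to the observations alone can recover which of `x1` or `x2` (or infinitely many other states along the null space) actually produced them; extra data only helps insofar as it shrinks that null space.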

Comment author: Will_Newsome 22 March 2012 04:11:30PM 0 points

I think those possibilities are unlikely. /shrugs