Will_Newsome comments on A Much Better Life? - Less Wrong

61 Post author: Psychohistorian 03 February 2010 08:01PM




Comment author: Will_Newsome 31 July 2011 06:02:38PM *  -1 points [-]

I have a rather straightforward argument---well, I have an idea that I completely stole from someone else who might be significantly less confident of it than I am---anyway, I have an argument that there is a strong possibility, let's call it 30% for kicks, that conditional on yer typical FAI FOOM outwards at lightspeed singularity, all humans who have died can be revived with very high accuracy. (In fact it can also work if FAI isn't developed and human technology completely stagnates, but that scenario makes it less obvious.) This argument does not depend on the possibility of magic powers (e.g. questionably precise simulations by Friendly "counterfactual" quantum sibling branches), it applies to humans who were cremated, and it also applies to humans who lived before there was recorded history. Basically, there doesn't have to be much of any local information around come FOOM.

Again, this argument is disjunctive with the unknown big angelic powers argument, and doesn't necessitate aid from quantum siblings.

You've done a lot of promotion of cryonics. There are good memetic engineering reasons. But are you really very confident that cryonics is necessary for an FAI to revive arbitrary dead human beings with 'lots' of detail? If not, is your lack of confidence taken into account in your seemingly-confident promotion of cryonics for its own sake rather than just as a memetic strategy to get folk into the whole 'taking transhumanism/singularitarianism seriously' clique?

Comment author: Zack_M_Davis 31 July 2011 06:13:37PM 6 points [-]

I have a rather straightforward argument [...] anyway, I have an argument that there is a strong possibility [...] This argument does not depend on [...] Again, this argument is disjunctive with [...]

And that argument is ... ?

Comment author: [deleted] 31 July 2011 06:20:05PM 2 points [-]

How foolish of you to ask. You're supposed to revise your probability simply based on Will's claim that he has an argument. That is how rational agreement works.

Comment author: Will_Newsome 31 July 2011 06:26:39PM *  3 points [-]

Actually, rational agreement for humans involves betting. I'd like to find a way to bet on this one. AI-box style.