Just for exercise, let's estimate the probability of the conjunction of my claims.
claim A: I think the idea of a single 'self' in the brain is provably untrue according to currently understood neuroscience. I do honestly think so, so P(A) is as close to 1.0 as makes no difference. Whether I'm right is another matter.
claim B: I think a wildly speculative vague idea thrown into a discussion and then repeatedly disclaimed does little to clarify anything. P(B) approx 0.998 - I might change my mind before the day is out.
claim C: The thing I claim to think in claim B is in fact "usually" true. P(C) maybe 0.97 because I haven't really thought it through but I reckon a random sample of 20 instances of such would be unlikely to reveal 10 exceptions, defeating the "usually".
claim D: A running virtual machine is a physical process happening in a physical object. P(D) very close to 1, because I have no evidence of non-physical processes, and sticking close to the usual definition of a virtual machine, we definitely have never built and run a non-physical one.
claim E: You too are a physical process happening in a physical object. P(E) also close to 1. Never seen a non-physical person either, and if they exist, how do they type comments on lesswrong?
claim F: Nobody knows enough about the reality of consciousness to make legitimate claims that human minds are not information-processing physical processes. P(F) = 0.99. I'm pretty sure I'd have heard something if that problem had been so conclusively solved, but maybe they were disappeared by the CIA or it was announced last week and I've been busy or something.
P(A ∧ B ∧ C ∧ D ∧ E ∧ F) is approx 0.96.
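As a sanity check, the joint estimate can be recovered by multiplying the point estimates above, treating the six claims as independent (an assumption; the comment doesn't state a dependence structure):

```python
# Point estimates quoted for each claim; the ones described only as
# "close to 1" are taken as exactly 1.0 for this rough calculation.
probabilities = {
    "A": 1.0,    # "as close to 1.0 as makes no difference"
    "B": 0.998,
    "C": 0.97,
    "D": 1.0,    # "very close to 1"
    "E": 1.0,    # "also close to 1"
    "F": 0.99,
}

# Joint probability under independence is just the product.
joint = 1.0
for p in probabilities.values():
    joint *= p

print(round(joint, 3))  # 0.958, i.e. approx 0.96
```

So the 0.96 figure is consistent with the individual estimates given.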
The amount of money I'd bet would depend on the odds on offer.
I fear I may be being rude by actually answering the question you put to me instead of engaging with your intended point, whatever it was. Sorry if so.
No, you're right. You did technically answer my question, it wasn't rude, I should have made my intended point clearer. But your answer is really a restatement of your refutation of Mitchell Porter's position, not an affirmative defense of your own.
First of all, have I fairly characterized your position in my own post (near the bottom, starting with "For patternists to be right, both the following w...
In June 2012, Robin Hanson wrote a post promoting plastination as superior to cryopreservation as an approach to preserving people for later uploading. His post included a paragraph which said:
This left me with the impression that the chances of the average cryopreserved person today being later revived aren't great, even when you conditionalize on no existential catastrophe. More recently, I did a systematic read-through of the sequences for the first time (about a month and a half ago), and Eliezer's post You Only Live Twice convinced me to finally sign up for cryonics for three reasons:
I don't find that terribly encouraging. So now I'm back to being pessimistic about current cryopreservation techniques (though I'm still signing up for cryonics because the cost is low enough even given my current estimate of my chances). But I'd be very curious to know what, say, Nick Bostrom or Anders Sandberg think about the issue, if anyone knows. Anyone?
Edit: I'm aware of the estimates of the chances of revival given by LessWrong folks in the census, but I don't know how much of that is people taking things like existential risk into account. There are lots of different ways you could arrive at a ~10% chance of revival overall:
is one way. But:
is a very similar conclusion from very different premises. Gwern has more on this sort of reasoning in Plastination versus cryonics, but I don't know who most of the people he links to are so I'm not sure whether to trust them. He does link to a breakdown of probabilities by Robin, but I don't fully understand the way Robin is breaking the issue down.