I think the plausibility of the arguments depends in large part on how plausible you think cryonics is; since the average estimate on this site is about 22%, I can see how other low-likelihood/high-payoff strategies might appear almost not worth considering. On the other hand, something like 'simulationist' preservation seems to me to be well within two orders of magnitude of the probability of cryonics - both rely on society finding your information and deciding to do something with it, and both rely on the invention of technology which appears logically possible but well outside the realms of current science (overcoming death versus overcoming computational limits on simulations). But simulationist preservation is three orders of magnitude cheaper than cryonics, which suggests to me that it might be worth considering. That is to say, if you can seriously dismiss it in a couple of seconds you must have very strong reasons to think the strategy is - say - about four orders of magnitude less likely than cryonics. What reason is that? Perhaps I assumed the simulation argument was more widely accepted here than it actually is. I'm a bit wary of this line of reasoning, because all of my friends dismiss cryonics as 'obviously not worth considering', and I think they adopt that position because the probabilistic conclusions are uncomfortable to contemplate.
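To make the orders-of-magnitude point concrete, here is a toy expected-value-per-dollar calculation. Every number in it is illustrative rather than a serious estimate - the 22% community average, a round cost figure for cryonics, and the two-orders-less-likely / three-orders-cheaper gap asserted above:

```python
# Toy comparison of expected probability-of-revival purchased per dollar.
# All figures are invented for illustration, not real estimates.

p_cryonics = 0.22            # assumed probability cryonics works (site average)
cost_cryonics = 100_000      # assumed round cost of cryonics, in dollars

p_simulation = p_cryonics / 100        # two orders of magnitude less likely
cost_simulation = cost_cryonics / 1000 # three orders of magnitude cheaper

print(p_cryonics / cost_cryonics)      # 2.2e-06 per dollar
print(p_simulation / cost_simulation)  # 2.2e-05 per dollar: ten times better
```

On these (made-up) numbers the cheaper strategy buys ten times more revival probability per dollar, which is why a casual dismissal implicitly requires the extra fourth order of magnitude of implausibility.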
With respect to your second point, that this post could be counter-productive, I am hugely interested in the conclusion. A priori it seems very unlikely that with all of our ingenuity we can only come up with two plausible strategies for living forever (religion and cryonics), and that each would be anathema to the other's adherents. If the 'plausible strategy-space' really is that small, I would take it as evidence that the strategy-space is in fact empty and people are just good at aggregating around plausible-but-flawed strategies. Can you think of any other major human accomplishment for which the strategy-space is so small? I suspect the explanation is that I am bad at thinking up alternate strategies, rather than that the strategies don't exist, but it is an excellent point you make and well worth considering.
something like 'simulationist' preservation seems to me to be well within two orders of magnitude of the probability of cryonics - both rely on society finding your information and deciding to do something with it
I don't know if I agree with your estimate of the relative probabilities, but I admit that I exaggerated slightly to make my point. I agree that this strategy is at least worth thinking about, especially if you think it is at all plausible that we are in a simulation. Something along these lines is the only one of the listed strategies that I thou...
I wanted to try to write this like a sequence post, with a little story at the beginning, because the style is hard to beat if you can pull it off. For those who want to skip to the meat of the argument, scroll down to the section titled ‘The Jealous God of Cryonics’.
Bizarro-Pascal
The year is 1600 BC and Moses is scrambling down the slopes of Mount Sinai under the blazing Egyptian sun, two stone tablets tucked under his arms - strangely small for the enduring impact they will have on the world. Pausing a moment to take a sip from his waterskin, he decides to double-check that the words on the tablets are the same as those God dictated to him before reading them to the Israelites - it wouldn’t do to have a typo encouraging adultery! Suddenly, a great shockwave bowls Moses to the ground. It is simultaneously as loud as the universe tearing itself into two nearly identical copies, and as quiet as the difference between a coin landing on heads rather than tails. Moses - trembling with shock - picks himself up, dusts off the tablets and scratches his beard. He is sure the Second Commandment looks a bit different, but he can’t quite put his finger on it...
More than three thousand years later, Blaise Pascal is about to formulate the Wager that will make him infamous. “You see,” he says, “if God exists then the payoff is infinitely positive for believing in Him and infinitely negative for not believing; therefore, whatever the cost of believing, you should do it.”
“Well, I’m sceptical,” says his friend. “It seems to me that the idea of an infinite payoff is incoherent to begin with; you have no particular reason to privilege the hypothesis that it is the Christian God who exists and wants to be worshipped; and, not to mention, if I were God I’d be pretty irritated that people pretended to believe in Me because of some probabilistic argument rather than from observing all of My great works.”
“But don’t you see?” Pascal rejoins. “God in His infinite goodness foresaw your objections and wrote the Second Commandment specifically to take them into account: ‘Thou shalt have no other God but Me, unless thou feelest thou canst maximise thy utility by ignoring this Commandment and worshipping multiple Gods. Seriously, I don’t mind; worship as many Gods as you want, with whatever mix of ‘true’ faithfulness and rational utility maximising makes you happiest (although I recommend worshipping only Gods that do not prohibit the worship of other Gods, so as to maximise your chances of getting it right and going to heaven).’”
“Hmm... Yes, come to think of it, there has always been something a little different about that Commandment compared to the rest. I didn’t think much of it because similar laws exist in every other major religion - which, now that I reflect on it, should probably have tipped me off to the form of your Wager long before now.”
“You see, my Wager suggests you should worship the largest subset of non-contradictory Gods you possibly can. Although I acknowledge that the probability of selecting the true God out of all of God-space is small (and the probability that that God both exists and selects for heaven based on faithfulness is smaller still), the payoff is sufficiently wonderful to make it worth the small up-front cost of most religions’ declarations of faith. I can only imagine what sort of fanatic would seriously propose this argument in a universe where all Gods demand you sample only once from God-space!”
In the universe Pascal describes, all you need to do to qualify for eternal life - given that a particular religion is true - is to declare out loud that you are a true believer (or go through some non-traumatic initiation rite, like a baptism or the Shahada). The probability of any God existing is still low, and the probability of that God caring whether you worship Zir is still low, but it is (almost certainly) rational to take Pascal’s advice and find the subset of Gods that maximises your chance of eternal happiness.
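To spell out the arithmetic behind Bizarro-Pascal’s position: when declarations of faith are cheap and compatible, each added God contributes its own slice of expected payoff, so the maximal non-contradictory subset dominates any single choice. Here is a minimal sketch with invented Gods, invented probabilities, and a large finite number standing in for the (incoherent) infinite payoff:

```python
# Sketch of Bizarro-Pascal's wager. All Gods, probabilities and costs
# are invented for illustration. The God-hypotheses are mutually
# exclusive, so expected payoffs simply add by linearity of expectation.

gods = {                       # hypothetical (probability, cost-of-declaration)
    "God A": (1e-6, 10),
    "God B": (5e-7, 10),
    "God C": (1e-7, 10),
}
PAYOFF = 1e12                  # finite stand-in for 'eternal happiness'

def expected_utility(subset):
    # Each God contributes probability * payoff, minus the up-front cost
    return sum(p * PAYOFF - c for p, c in (gods[g] for g in subset))

# Worshipping the whole non-contradictory subset beats any single pick:
print(expected_utility(["God A"]))                    # 999990.0
print(expected_utility(["God A", "God B", "God C"]))  # 1599970.0
```

As long as each declaration’s cost is below its expected payoff, adding another compatible God to the subset always raises expected utility - which is exactly why a single-God worshipper in that universe is leaving utility on the table.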
The Jealous God of Cryonics
Cryonics is not like Pascal’s Wager except superficially, but this little story attempts to drive home an intuition that would appeal to Bizarro-Pascal. In his universe, someone who worshipped only one God would be deeply irrational. They might be able to defend their choice with some applause-light soundbites (“I have great faith, so I need no other Gods”), but in a purely utility-maximising sense - the sense in which we try to maximise the number of happy years we live - this person is behaving irrationally. Although this seems obvious to us, some (most) cryonics advocates behave as though cryonics is a ‘jealous’ God (like in our universe) rather than more accurately modelling it as a ‘permissive’ God like those in Bizarro-Pascal’s universe. Cryonics doesn’t care at all if you adopt other strategies for maximising your lifespan, except insofar as they conflict with cryonics. For example, high religiosity and cryonics are logically compatible as far as I can see; if brain death really is death (that is to say, completely irreversible) then at least you have the back-up possibility that an afterlife exists. Yet it seems to me that supporters of cryonics happily stop looking for alternate life-extension strategies almost as soon as they discover cryonics (I hypothesise the actual mechanism is that someone convinces them cryonics is rational, and they then forget about the rest of the strategy-space in their excitement). Certainly, I can’t find any discussion of cryonics on LessWrong promoting alternate life-maximisation techniques, except perhaps brain plastination. This is a shame, because some additional life-extension techniques might be costlessly employed by those who want to live forever, greatly increasing their expected utility.
There is some literature on this topic. Alcor, for example, have an article entitled ‘The Road Less Travelled’ discussing potential alternatives to cryonics, including desiccation and peat preservation. Brain plastination and chemical preservation are seriously discussed as alternatives even amongst those who are strongly in favour of freezing; the consensus is that these techniques are likely to offer a higher success rate once perfected, but that freezing is the way forward now. I can think of a few more outlandish methods of preservation (such as firing yourself towards the heart of a black hole and assuming time dilation means you will still be alive when a recovery technique is developed, or standing in a high-radiation environment hoping your telomeres will re-knit), but these all suffer from the fact that they are less likely to work than cryonics, and obviously so. Why would cryonicists waste time thinking about outlandish preservation techniques that displace a more likely technique? Indeed, even if these techniques were more likely, there are good reasons to treat cryonics as a Schelling point unless a new technique obviously dominates; we want future society to spend all of its resources targeting one problem, especially if we are part of the first generation experimenting with these techniques. So while it surprises me that no cryonicists seem interested in this even as an intellectual exercise, it is at least rational to ignore low-probability techniques which displace higher-probability techniques with the same payoff, for all of the above reasons.
The Extended Strategy-Space
But there seems to be no excuse for failing to consider additional strategies which complement cryonics. There exist a great number of strategies which might result in revival before cryonics does (or instead of cryonics, if cryonics turns out to be impossible) and which cost strictly less than cryonic freezing. I’ve given them short names to enable easy reference in the comments (if anyone is interested), so don’t read too much into the names. I’ve also ordered them roughly by how plausible I find them; up until the boundary between Social and Simulation Preservation I actually find the arguments more plausible than cryonics:
These strategies share a number of features which make them attractive: they are (mostly) less expensive than cryonics, they do not strictly lower your chance of cryogenic revival (and in some cases probably increase it), and all have a non-zero chance of preserving your brainstates at least until future society is advanced enough to do something with them. Even better, most of these strategies synergise well with each other; if I decide to get myself frozen, I will definitely also pay for fMRIs to record my brainstate as I think about various stimuli, and store copies of those recordings with multiple institutions. I don’t think this list is exhaustive, but I do think it covers a good amount of the possible ‘live forever’ strategy-space. It does not explore strategies which are absurdly expensive or which interfere with cryonics - so it is still only one small corner of the total strategy-space - but I think it expands the area of the strategy-space most people are interested in: the bit in which you and I can act.
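To see why complementary strategies are attractive even when each is individually unlikely: if each strategy has an independent chance of preserving you, the chance that at least one succeeds is one minus the product of the failure probabilities. A toy sketch with entirely invented numbers (and an optimistic independence assumption, since all of these strategies rely on future society finding your information and caring):

```python
# Why complementary strategies stack: P(at least one works) =
# 1 - product(1 - p_i), assuming independence. All numbers invented.

from math import prod

strategies = {
    "cryonics":                0.05,
    "fMRI brainstate records": 0.002,
    "written/social records":  0.001,
}

p_at_least_one = 1 - prod(1 - p for p in strategies.values())
print(p_at_least_one)   # ~0.053: strictly better than cryonics alone
```

The gain over cryonics alone is small in absolute terms, but since the add-on strategies cost little and do not reduce the chance of cryogenic revival, the combined portfolio weakly dominates the single-strategy approach.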