Gaming Democracy

8 Froolow 30 July 2014 09:45AM

I live in the UK, which has a very similar voting structure to the US for the purposes of this article. Nevertheless, it may differ on the details, for which I am sorry. I also use a couple of real-life political examples which I hope are uncontroversial enough not to break the unofficial rules here. If they are not, I can change them; the examples are incidental, since this is a discussion of gaming democracy by exploiting swing seats to push rationalist causes.

Cory Doctorow writes in the Guardian about using Kickstarter-like thresholds to encourage voting for minority parties:

http://www.theguardian.com/technology/2014/jul/24/how-the-kickstarter-model-could-transform-uk-elections

He points out that nobody votes for minority parties because nobody else votes for them; if you waste your vote on Yellow, that is one fewer vote for the not-quite-so-bad Green that might stop the hated Blue candidate getting in. He argues that you could use the internet to inform people when some pre-set threshold had been reached with respect to voting for a minor party, and thus encourage them to get out and vote. So, for example, if the margin of victory was 8,000 votes and 9,000 people agreed with the statement, “If more than 8,000 people agree to this statement, then I will go to the polls on election day and vote for the minority Yellow party”, the minority Yellow party would win power, even though none of the original 9,000 participants would have voted Yellow without the information-coordinating properties of the internet.

I’m not completely sure of the argument, but I looked into some of the numbers myself. There are 23 UK seats (roughly equivalent to Congressional Districts, for US readers) with a margin of 500 votes or fewer. So to hold the balance of power in these seats you need to find either 500 non-voters who would be prepared to vote the way you tell them, or 250 existing voters with the same caveats. Voters are worth twice as much as non-voters to the aspiring seat-swinger: a vote taken from the Blues lowers the margin by one, a vote given to the Greens lowers the margin by one, and every existing voter can do both, taking their vote away from the party they currently support and awarding it to any party of their choice. I’ll call the number of votes required to swing a seat the ‘effective voter’ count, which allows for the fact that some voters count for two.
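The ‘effective voter’ arithmetic above is simple enough to sketch in a few lines (a toy illustration of my own; the 500-vote margin and 250-switcher figures come from the paragraph above):

```python
def extra_voters_needed(margin, switchers=0):
    """How many otherwise non-voting supporters (worth 1 each) are still
    needed to swing a seat with the given margin, assuming `switchers`
    existing voters (worth 2 each: one vote off the Blues, one onto the
    Greens) have already been recruited."""
    return max(margin - 2 * switchers, 0)

# A 500-vote margin falls to either 500 new voters or 250 switchers:
assert extra_voters_needed(500) == 500
assert extra_voters_needed(500, switchers=250) == 0
```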

It doesn’t sound impossible to me to reach the effective voter count for some swing constituencies, given that even extremely obvious parody parties can often win back their deposit (500 actual votes, not even ‘effective votes’).

Doctorow wants to use the information co-ordination system to help minority parties reach a wider audience. I think it could be used in a much more active way: to force policy promises on uncontroversial but low-status issues from potential future MPs. Let me take as an example ‘Research funding for transhuman causes’. Most people don’t know what transhumanism is, and most people who do know what it is don’t care. Most people who know what it is and care are basically in support of research into transhuman augmentations, but would definitely rank issues like the economy or defence as more important. There is a small constituency of people who oppose transhumanism outright, but they are not single-issue voters either by any means (I imagine opposing transhumanism is strongly correlated with a ‘traditional religious values’ cluster which includes opposing abortion, gay marriage and immigration). Politicians could therefore (almost) costlessly support a small amount of research funding for transhuman causes, which would almost certainly be a sensible move when averaged across the whole country: either you discover something cool, in which case your population is made better off and your army more powerful, or you don’t, in which case at worst you get a decent multiplier effect to the economy from employing a load of materials scientists and bioengineers. However, we know that they won’t do this, because while the benefits to the country might be great, the minor cost of supporting a low-status (‘weird’) project is borne entirely by the individual politician. What I mean by this is that the politician will probably not lose any votes by publicly supporting transhumanism, but will lose status among their peers and will want to avoid this. There is also a small risk of losing votes from the ‘traditional values’ cluster by supporting transhuman causes, and no obvious demographic with whom supporting transhuman causes gains votes.

This indicates to me that if enough pro-transhumans successfully co-ordinated their action, they could bargain with the politicians standing for office. Let us say there are unequivocally enough transhumans to meet the effective voter threshold for a particular constituency. One person could go round each transhuman (maybe on that city’s subreddit) and get them to agree in principle to vote for whichever candidate will agree to always vote ‘Yes’ on research funding for transhuman causes, up to a maximum of £1bn. Each transhuman might have a weak preference for Blues vs Greens or vice versa, but the appeal is made to their sense of logic: each Blue vote is cancelled out by each Green vote, but each ‘transhuman’ vote is a step closer to getting transhumanism properly funded, and transhumanism is more important than any marginal policy difference between the two parties. You then go to each candidate and present the evidence that the ‘transhuman’ bloc has the power to swing the election and is well co-ordinated enough to vote as a bloc on election day. If both candidates agree that they will vote ‘Yes’ on the bills you decided on, then send round an electronic message saying – essentially – “Vote your conscience”. If one candidate says ‘Yes’ and the other ‘No’, send round a message saying “Vote Blue” (or Green). If both candidates say ‘No’, send a message saying “Vote for the Transhuman Party (which is me)” in the hope that you can demonstrate you really did hold the balance of power, to increase the weight of your negotiation in the future.
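The bloc’s decision rule described above is simple enough to state as code (a minimal sketch; the message strings are illustrative):

```python
def bloc_instruction(blue_agrees: bool, green_agrees: bool) -> str:
    """Return the message to send the bloc, given whether each
    candidate pledged to vote 'Yes' on the agreed bills."""
    if blue_agrees and green_agrees:
        # Both pledged, so the bloc's votes are no longer needed as leverage.
        return "Vote your conscience"
    if blue_agrees != green_agrees:
        # Exactly one pledged: reward that candidate with the whole bloc.
        return "Vote Blue" if blue_agrees else "Vote Green"
    # Both refused: stand yourself, to prove the bloc held the balance
    # of power and strengthen future negotiations.
    return "Vote for the Transhuman Party (which is me)"
```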

If the candidate then goes back on their word, you slash and burn the constituency and make sure that no matter what the next candidate from that party promises, they lose. Also ensure that if that candidate ever stands in a marginal seat again, they lose (effectively ending their political career). This gives a strong incentive for MPs to vote the way they promised, and for parties to allow them to vote the way they promised.

Incidentally my preferred promise to extract from the candidates (and I don’t think this works in America) is to bring a bill with a particular wording if they win a Private Members’ Ballot (a system whereby junior members enter a lottery to see whose idea for a bill gets a ‘reading’ in the House of Commons, and hence a chance of becoming a law). For example, “This house would fund £1bn worth of transhumanism basic research over the next four years”. This is because it forces MPs to take a position on an issue they otherwise would not want to touch (because it is low-status) and one way out of this bind is to pretend the issue was high-status all along, which would be a good outcome for transhumanism as it means people might start funding it without the complicated information-coordination game I describe above.

One issue with this is that some groups – for example, Eurosceptics – are happy to single-issue vote already, and there are far more Eurosceptics than there are rationalists in the UK. A US equivalent – as far as I understand – might be gun rights activists; they will vote for whatever party deregulates guns furthest, regardless of any other policies it might have, and they are very numerous. This could be a problem, since a more numerous coalition will always beat a less numerous coalition at playing this information coordination game.

The first response is that it might actually be OK if this occurs. Being a Eurosceptic in no way implies a particular position on transhuman issues, so a politician could agree to the demands of the Eurosceptic bloc and the transhuman bloc without issue. The numbers problem only occurs if a position on one issue automatically implies a position on another; it would matter if there were a large single-issue anti-transhuman voting bloc, and there isn’t one. There is a small problem if someone is both a Eurosceptic and a transhuman, since you can only categorically agree to vote the way one bloc tells you, but this is a personal issue where you have to decide which issue is more important, not a problem with the system as it stands.

The second response is that you are underestimating the difficulty of co-ordinating a vote in this way. For example, Eurosceptics – as a rule – will want to vote for the minority UKIP party to signal their affiliation with Eurosceptic issues. No matter what position the candidates agree to on Europe, UKIP will always be more extreme on European issues, since a candidate can only agree to sufficiently mainstream policies that the vote-cost of publicly agreeing to the policy is less than the vote-gain of winning the Eurosceptic bloc. There will therefore be considerable temptation to defect and vote UKIP even after successfully extracting a policy pledge from a candidate, since the voter has a strong preference for UKIP over any other party. Transhumans – it is hypothesised – have a stronger preference for marginal gains in transhuman funding over any policy difference between the two major parties, and so getting them to ‘hold their nose’ and vote for a candidate they would otherwise not want to is easier.

It is not just transhumanism that this vote-bloc scheme might work for, but transhumanism is certainly a good example. In my mind you could co-ordinate any issue where the proposed voting bloc is:

  1. Intelligent enough to understand why voting for a candidate you don’t like might result in outcomes you do like;
  2. Sufficiently politically unaffiliated that voting for a party they disapprove of is a realistic prospect (hence I’m picking issues young people care about, since they typically don’t vote);
  3. Sufficiently internet-savvy that coordinating by email / reddit is a realistic prospect;
  4. Unopposed by any similar-sized or larger group which fits the above three criteria;
  5. Cares more about this particular issue than any other issue which fits the above four criteria.

Some other good examples of this might be opposing homeopathy on the NHS, encouraging Effective Altruism in government foreign aid, spending a small portion of the Defence budget on FAI and so on.

Are there any glaring flaws I’ve missed?

The Extended Living-Forever Strategy-Space

10 Froolow 02 May 2014 02:15PM

I wanted to try and write this like a sequence post, with a little story at the beginning, because the style is hard to beat if you can pull it off. For those who want to skip to the meat of the argument, scroll down to the section titled ‘The Jealous God of Cryonics’.

Bizarro-Pascal

The year was 1600BC and Moses was scrambling down the slopes of Mount Sinai under the blazing Egyptian sun, with two stone tablets tucked under his arms - strangely small for the enduring impact they would have on the world. Pausing a moment to take a sip of water from his waterskin, he decided to double-check that the words on the tablets were the same as those God had dictated to him before reading them to the Israelites – it wouldn’t do to have a typo encouraging adultery! Suddenly, a great shockwave bowled Moses onto the ground. It was simultaneously as loud as the universe tearing itself into two nearly identical copies, but as quiet as the difference between a coin landing on heads rather than tails. Moses - trembling with shock - picked himself up, dusted off the tablets and scratched his beard. He was sure that the Second Commandment looked a bit different, but he couldn’t quite put his finger on it...

More than three thousand years later, Blaise Pascal is about to formulate the Wager that would make him infamous. “You see,” he says, “If God exists then the payoff is infinitely positive for believing in Him and infinitely negative for not, therefore whatever the cost of believing you should do it”.

“Well, I’m sceptical,” says his friend. “It seems to me that the idea of an infinite payoff is incoherent to begin with, plus you have no particular reason to privilege the hypothesis that the Christian God exists and wants to be worshipped, not to mention the fact that if I were God I’d be pretty irritated that people pretended to believe in Me because of some probabilistic argument rather than by observing all of My great works”

“But don’t you see?” Pascal rejoins, “God in His infinite goodness foresaw your objections and wrote the Second Commandment specifically to take that into account; ‘Thou shalt have no other God but me, unless thou feels that thy can maximise thine’s utility by ignoring this Commandment and worshipping multiple Gods. Seriously, I don’t mind, worship as many Gods as you want with whatever degree of ‘true’ faithfulness versus rational utility maximising makes you happiest (although I recommend worshipping only Gods that do not prohibit the worship of other Gods, so as to maximise your chances of getting it right and going to heaven)’ “

“Hmm... Yes, come to think of it there has always been something a little different about that Commandment compared to the rest. I didn’t think much of it because there exist similar laws in every other major religion, which now I reflect on it should probably have tipped me off to the format of your Wager quite a long time before now”

“You see my Wager suggests you should worship the largest subset of non-contradictory Gods you possibly can; although I acknowledge that the probability of selecting the true God out of all of God-space is small (and for that God to both exist and select for heaven based on faithfulness is also unlikely), the payoff is sufficiently wonderful to make it worth the small up-front cost of most religions’ declarations of faith. I can only imagine what sort of a fanatic would seriously propose this argument in a Universe where all Gods demand you sample only once from God-space!”

In the universe Pascal describes, all you need to do to qualify for eternal life (given that a particular religion is true) is to say out loud that you are a true believer (or go through some non-traumatic initiation rite, like a baptism or Shahadah). The probability of a God existing is still low, and the probability of that God caring that you worship Zir is still low, but it is (almost certainly) rational to take the advice of Pascal and find a maximal subset of Gods that you think maximises your chance of eternal happiness.

The Jealous God of Cryonics

Cryonics is not like Pascal’s Wager except superficially, but this little story attempts to drive an intuition which would appeal to Bizarro-Pascal. In this universe, someone who worshipped only one God would be deeply irrational. They might be able to defend their choice with some applause-light soundbites (“I have great faith, so I need no other Gods”) but in a purely utility-maximising sense - the sense where we try to maximise the number of happy years we live – this person is behaving irrationally. But although this seems obvious to us, some (most) cryonics advocates behave as though cryonics is a ‘jealous’ God (like in our universe) rather than modelling it more accurately as a ‘permissive’ God, like in Bizarro-Pascal’s universe. Cryonics doesn’t care at all if you adopt other strategies for maximising your lifespan, except insofar as they conflict with cryonics. So, for example, high religiosity and cryonics are logically compatible as far as I can see; if brain death really is death (that is to say, completely irreversible) then at least you have the back-up possibility that an afterlife exists. Yet it seems to me that supporters of cryonics happily stop looking for alternative life-extension strategies almost as soon as they discover cryonics (I hypothesise the actual mechanism is that someone convinces them cryonics is rational and then they forget about the rest of the strategy-space in their excitement). Certainly, I can’t find any discussions of cryonics on LessWrong promoting any alternative life-maximisation techniques except perhaps brain plasticisation. This is a shame, because it is possible that some additional life-extension techniques might be costlessly employed by those who want to live forever to greatly increase their expected utility.

There is some literature on this topic. Alcor, for example, have an article entitled ‘The Road Less Travelled’ talking about potential alternatives to cryonics, including desiccation and peat preservation. Brain plasticisation and chemical preservation are seriously discussed as alternatives even amongst those who are strongly in favour of freezing; the consensus is that these techniques are likely to offer a higher success rate once they are perfected, but freezing is the way forward now. I can think of a few more outlandish methods of preservation (such as firing yourself into the heart of a black hole and assuming time dilation means you will still be alive when a recovery technique is developed, or standing in a high-radiation environment hoping that your telomerase will re-knit) but these all suffer from the fact that they are less likely to work than cryonics, and obviously so. Why would cryonicists waste time thinking about outlandish preservation techniques when they displace a more likely technique? Indeed, even if these techniques were more likely, there are good reasons to treat cryonics as a Schelling point unless a new technique obviously dominates; we want future society to spend all of its resources targeting one problem, especially if we are part of the generation that is first experimenting with these techniques. While it surprises me that no cryonicists seem interested in this even as an intellectual exercise, it is at least rational to ignore low-probability techniques which displace higher-probability techniques with the same payoff, for all of the above reasons.

The Extended Strategy-Space

But there seems to be no excuse for failing to consider additional strategies which complement cryonics; there exist a very great number of strategies which might result in revivification before cryonics (or instead of cryonics, if cryonics turns out to be impossible) and which cost strictly less than cryonic freezing. I’ve given them short descriptions to enable easy reference in the comments (if anyone is interested), so don’t read too much into the names. I’ve also ordered them roughly in the order in which I find them plausible; up until the boundary between Social and Simulation Preservation I actually find the arguments more plausible than cryonics:

  • Diarist Preservation: Begin recording your phone calls, pay someone to archive your web presence, begin keeping obsessive diaries and blog constantly. Hope that this can be recompiled into a coherent personality at some point in the future, or at the very least be used to plug gaps in the personality of the unfrozen body.
  • Genetic Preservation: Take genetic samples of yourself and preserve them in a platinum-iridium bar in binary. Hope that personality is very largely genetic, and the proportion that isn’t can be reconstructed from statistical analysis of the time period in which you live (perhaps by employing Diarist Preservation in tandem).
  • MRI Preservation: Subject yourself to MRI scans as often as possible (it may be helpful to fake a serious neurological condition). Ask for copies and encode them in microchips that you scatter round the world as you travel. Hope that future societies will find the information useful for constructing an em and will find the chips if they are distributed widely enough.
  • Signal Preservation: Obsessively generate long streams of nonsense binary based on tapping randomly at a keyboard. Assume that these long strings must correspond in some way to brain states, and that future mathematics will be advanced enough to untangle the signal from the noise. Post these long strings of text to as many internet sites as possible to preserve them (VERY VERY IMPORTANT NOTE: If you decide to try this strategy you must absolutely ensure that the first few characters of every message are a code known only to you salted with (for example) the current time and then hashed, or the first word of the next string of binary you produce. Otherwise unkind people could claim to be you, post their own strings and screw up your revival. I don't think it is a serious worry that people who can bring you back from the dead will struggle with SHA)
  • Social Preservation: Form a hypothesis which says (roughly) “The more people who know about me that I can persuade to freeze their brain information with me, the more likely it is that any gap in my own brain-state can be plugged with information from another individual’s brain-state”. Act ruthlessly on this hypothesis; pay for friends and family to get frozen conditional on their memorising a list of facts about you. Offer to discount a friend’s cryo in exchange for them signing up with a different organisation from yours (in case yours has a damaging but not fatal mishap and you need perfectly-stored redundant information to back yourself up). Attend cryonics conferences like a vulture, and socialise as much as you possibly can. An additional note about this strategy (which every pro-cryonicist knows): it is hugely in your interest to take a large 21st Century contingent with you to whatever time you are revived, so that your 21st Century contingent can form a natural political bloc. Even better if the majority of that bloc know and like you!
  • Simulation Preservation: Bury ‘time-capsules’ – lead-lined containers which explain in as many languages as possible who you are and expressing a desire to be resurrected if society has discovered that we live in a simulation and has the power to talk to the simulators. Otherwise ask the society to rebury your letter (after translating the request into all current languages) to await the arrival of a true simulationist society. A stronger version of this is to employ one of the aforementioned Preservation techniques and add in your letter that you would be happy to be resurrected inside a simulation created by this society based on the information preserved by that technique; that insures against the possibility that simulation is logically possible but we have not yet discovered a way to communicate with the simulator.
  • Philosophical Preservation: Discover a completely watertight argument which proves – perhaps probabilistically - that ‘you’ (the bit of you you hope will survive death) is totally identifiable with something permanent like the information on your Y-chromosome (for men) or the unordered atoms in your brain. Do whatever this argument implies to extend your life. This might sound silly, but many people really do profess to believe their ‘soul’ survives forever and they can increase their chances of this occurring by correctly interpreting a very old book, so it is highly likely that there is an argument that would convince you, even if that argument is not actually valid. A clever rationalist might even be able to identify a subset of religious/philosophical activities that maximises their chance of eternal life in heaven (as per the introductory story).
  • Evolutionary Preservation: Blast genetic samples of yourself into space. Hope that eons later one sample will come to rest on a planet suitable for life and evolve into a creature identical to you except whereas you have mostly true beliefs, this creature will have mostly delusional beliefs that correspond in a one-to-one way with your true beliefs. For example while you truthfully think, “I was alive in 2010”, this creature will have a delusional belief, “I was alive in 2010”, plus whatever additional delusional beliefs it needs to make this belief cohere, for example, “I must have been stunned sometime in early 2070 (when my beliefs appear to stop) and taken to this strange planet I don’t recognise”.
  • Time-travel Preservation: Do something so marvellous or heinous that if time travel exists, some time travellers will travel to the moment of your triumph/crime to watch. Overpower a time traveller, and take their time machine. You might have a very low prior probability of being able to do something so brilliant/evil as to compete with the whole rest of history, but bear in mind the first successful hijack of a time machine would itself be an event worth watching by future time travellers, so you may not actually need to do anything marvellous in the first place; just make a binding resolution with yourself to steal the first time machine you come across and look to see if any police phone boxes pop up from nowhere. Making this resolution once or twice a day for the rest of your life is almost costless, although perhaps you would want to attend a combat sport class to increase the chances of a successful overpowering.

Each of these strategies has a number of features which make it attractive: they are (mostly) less expensive than cryonics, they do not strictly lower your chance of cryogenic revival (and in some cases probably increase it), and all have a non-zero chance of preserving your brain-states at least until future society is advanced enough to do something with them. Even better, most of these strategies synergise well with each other; if I decide to get myself frozen I will definitely also pay for fMRIs to record my brain-state as I think about various stimuli, and store copies of those recordings with multiple institutions. I don’t think this list is exhaustive, but I do think it covers a good amount of the possible ‘live forever’ strategy-space. It does not explore strategies which are absurdly expensive or which interfere with cryonics - so it is still only one small corner of the total strategy-space – but I think it expands the area of the strategy-space most people are interested in: the bit in which you and I can act.

The Cryonics Strategy Space

24 Froolow 24 April 2014 04:11PM

In four paragraphs I’m going to claim, “It is highly likely reading this article will increase your chance of living forever”. I’m pretty sure you won’t disagree with me. First, however, I’d like to talk about how much I don’t like Monopoly.

I play a lot of Monopoly, because I am forced into it – against my will – by friends, family, work-related bonding etc. I understand this is a controversial opinion, but I really, really don’t like Monopoly – there is very little scope for creative play. In fact, there is so little scope for creative play that I spotted I could win at Monopoly, in a probabilistic sense, by going online and looking up the optimal allocation of houses to properties and valuation of houses in the ‘bargaining’ mid-game. For a while, the fact that nobody but me played ‘perfect’ Monopoly meant I won nearly every game, and I felt much better about playing because games tended to conclude more quickly when one player was a soulless, utility-hungry robot – it left me more time to concentrate on the stuff I actually enjoyed, which was socialising.

But Monopoly, despite being an almost completely deterministic dice-rolling game, hides unexpected complexity; a salutary lesson for an aspiring rationalist. Winning the game was completely secondary to my actual aim, which was forcing the game to take as little time as possible. I realised a few months ago that it didn’t matter who won, as long as somebody won quickly, and it was very unlikely the strategy optimised for one player was the same as the strategy optimised for all of them. As a consequence, I reran the computer simulations I built and developed an optimal ‘turn reducing’ strategy (it won’t surprise you to know that the basic rule is ‘play with as much variance as you possibly can’; having one maverick player lowers the average number of turns to the first bankruptcy, and bankruptcy is gamebreaking in Monopoly).
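Froolow doesn’t publish the simulation, but the variance claim is easy to gut-check with a toy model (entirely my own construction, not Monopoly itself: each player’s wealth takes a random step each round, and the game ends at the first bankruptcy):

```python
import random

def mean_rounds_to_bankruptcy(step_sds, start=1500, drift=-5,
                              trials=500, seed=0):
    """Average number of rounds until any player's wealth hits zero.
    Each round, player i's wealth moves by gauss(drift, step_sds[i]);
    the small negative drift stands in for rents and taxes."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        wealth = [start] * len(step_sds)
        rounds = 0
        while all(w > 0 for w in wealth):
            rounds += 1
            for i, sd in enumerate(step_sds):
                wealth[i] += rng.gauss(drift, sd)
        total += rounds
    return total / trials

# One high-variance 'maverick' ends games sooner than three cautious players:
cautious = mean_rounds_to_bankruptcy([100, 100, 100])
maverick = mean_rounds_to_bankruptcy([400, 100, 100])
```

In this toy model the maverick’s wealth swings wildly, so the first bankruptcy arrives much earlier on average, which is the qualitative result the post describes.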

I agree that I could lower the number of turns even more by simply flipping over the board and storming out when someone suggests I play, but let’s assume I am also trying to balance a nebulously-defined but nonetheless real value of ‘not losing all my friends’, which is satisfied when I play a risky-but-exciting strategy and not satisfied when I constantly demand to play games I find fun. The point is, I had what Kuhn calls a ‘paradigm shift’ – once I realised that my goal when playing Monopoly was not to win as quickly as possible, but to ensure anyone wins as quickly as possible I was able to greatly, greatly increase my utility with no troublesome side-effects.

I’m relating this story to you because noticing my aims and strategy weren’t perfectly aligned improved my experience of Monopoly without doing anything difficult like hacking my motivation, and I’m sure you have similar stories of paradigm shifts improving your experience of a certain event (I hear people talk about the day they discovered coding was fun once they learned the rules, or maths was awesome once they got past the spadework. That has yet to happen to me, but my experience of thrashing my friends at children’s board games means I can totally relate). What’s striking with these paradigm shifts is how obvious the conclusion seems in retrospect, and how opaque it seemed before the lightbulb moment. With that in mind, let me make a claim you might find concerning: “The aims and strategy of people who want to live forever are highly likely to be out of alignment”. In particular, from what I read on LW and other pro-cryo communities, the strategy-space explored is vastly smaller than the strategy-space of all possible cryonics strategies. Indeed, the strategy space explored by people who want to live forever is – in some ways – smaller than that explored by me while trying to get out of playing tedious board games. I’m going to talk about that strategy space a little in this article, mostly with the aim of triggering a ‘lightbulb moment’ – if there are any to be had – in readers a lot more committed to cryonics than me. To draw an obvious conclusion, if there are such lightbulb moments to be had, it is highly likely reading this article will increase your chance of living forever by increasing the size of the cryonics strategy space you consider.

That the strategy space explored is small is pretty hard to disagree with; there is an option to freeze or not-freeze in the first place, go with Alcor or The Cryonics Institute (or possibly KryoRuss), go for your full body or just your head and – maybe – whether to hang on for plastination or begin investing in cryonics insurance now. As far as I can tell, more ‘fringe’ options are not discussed with very much regularity. A search of the LW archives turned up one thread along similar lines, but it didn’t trigger anything like the discussion I thought it would. This surprises me: when the ‘prize’ for a marginal improvement in your cryonics strategy that doubles your chance of revivification is that you double your chance of living forever, I would expect the cryonics strategy space to be exhaustively searched by now, certainly amongst people who turn rationality into an art form.

For example, there are at least three ways I can think of to raise your chance of being successfully frozen:

·         (Sensible) Redundancy cryonics: Make redundant copies of the information you intend to preserve. For example, MRI scans of your brain and detailed notes on your reactions to certain stimuli. In the event that current technology almost-but-not-quite preserves information in the brain, your notes and images might help future scientists reconstruct your personality. You might even go further and send hippocampal slices to multiple cryonics facilities, gambling on the fact that the increased probability of at least one facility’s survival outweighs the lower probability of revivification from a single hippocampal slice.

·         (Sensible) Diversified cryonics: In addition to cryonics, employ one or more other strategies which might result in you living forever but which are as uncorrelated with the success or failure of cryonics as you can manage, given that ‘the complete destruction of the earth on a molecular level by a malevolent alien race’ correlates with many bad outcomes and few good ones. I actually have a list of about ten of these, which I will happily make available on request (i.e. I’ll write another discussion post about them if people are interested), but I don’t want the whole discussion of this post to be about this one issue, which is what happened when I tried the content of the post out on my friend. This is about the cryonics strategy-space only, not the living-forever strategy space, which is much bigger.

·         (Inadvisable) Suicide cryonics: Calculate the point at which your belief in the utility of cryonics outweighs the expected utility of the rest of your life (this will likely come a few seconds before the average age of death in your demographic). Kill yourself in the most cryonics-friendly way you can imagine, which I suspect will involve injecting yourself with toxic cryoprotectants on top of a platform suspended over a large vat of liquid nitrogen so that when you collapse, you collapse into the nitrogen and freeze yourself (which should limit the amount of time the dead brain is at body-temperature). If you are not concerned about your body, you should also try to decapitate yourself as you fall to raise the surface area to volume ratio of the object you are trying to freeze.
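The redundancy idea above is just independence arithmetic: if you store copies with n facilities that each survive with probability p, and the failures really are independent, at least one copy survives with probability 1 − (1 − p)^n. A minimal sketch, with invented numbers:

```python
# Independence arithmetic behind the redundancy idea: with n facilities
# that each survive with (assumed independent) probability p, at least
# one survives with probability 1 - (1 - p)**n.
# The p = 0.3 below is a made-up illustration, not a real estimate.

def p_at_least_one(p, n):
    """Chance at least one of n independent facilities survives."""
    return 1 - (1 - p) ** n

print(p_at_least_one(0.3, 1))   # one facility
print(p_at_least_one(0.3, 3))   # three facilities: ≈ 0.657
```

Of course, the gain shrinks if facility failures are correlated (e.g. a law banning cryonics hits every lab at once), which is exactly why the ‘diversified cryonics’ bullet asks for uncorrelated strategies.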

Here are three ways that raise your chance of successfully remaining frozen:

·         (Sensible) Positive cryonics: Lobby for laws that ensure the government protects your body. Either lobby for these laws directly (I talked about a ‘right to not-death’ in my last post on this subject) or promise to report to future!USA’s equivalent of the Department of Defence to see if they can weaponise any microbes on you after you’re unfrozen. Remember that we’re talking in terms of expected utility here; the chance that such lobbying is effective is minute, but it might be an effective way to spend your twilight years if you would otherwise be unproductive.

·         (Sensible, but worryingly immoral) Negative cryonics: Sabotage as many cryonics labs as possible before going under, or lobby for laws that make it illegal to freeze yourself which only come into force after you die. This raises the chances that you are the James Bedford of modern cryonics and society has a particular interest in keeping your body safe. Note that though sabotaging an entire lab is difficult and illegal, trashing the field of cryonics itself is pretty easy and socially high-status because people already think it’s pretty weird – you’d predict that at least some detractors of cryonics are actually extremely pro-cryonics and trying to raise their chances of being kept frozen as a cultural curiosity rather than as only one of millions of corpsicles.

·         (Sensible if your name is Lex Luthor, otherwise implausible) Ninja cryonics: Build a cryonics pod yourself, with enough liquid nitrogen to keep you frozen for several thousand years, known only to the highly trusted individual who transfers your cryo-preserved body from Alcor to this location (if you could somehow get yourself into an unprotected far-earth orbit after freezing this would be perfect). Hope that your pod is discovered by friendly future-humans before you run out of coolant. This is insurance against the possibility that society destroys all cryonics labs somehow and then later regrets it (although, now I think about it, someone following this strategy certainly wouldn’t tell anyone about it on a public forum…)

Here are three ways that raise your chance of successfully being revived:

·         (Sensible if legal) Compound-interest cryonics: Devote a small chunk of your resources towards a fund which you expect to grow faster than the rate of inflation, with exponential growth (the simplest example would be a bank account with a variable rate that pays epsilon percent higher than the rate of inflation in perpetuity). Sign a contract saying the person(s) who revive you receive the entire pot. Since after a few thousand years the pot will nominally contain almost all the money in the world, this strategy will eventually incentivise almost the entire world to dedicate itself to seeking your revival. If post-scarcity happens before unfreezing this strategy will not work, but in that case it simply collapses into the conventional cryonics problem, and therefore costs you no more than the opportunity cost of spending the capital in the fund before you die. (Although apparently this is illegal.)

·         (Sensible) Cultural-value cryonics: Freeze yourself with something which is relatively cheap now, but which you predict might be worth a lot of money in the future. Rare earth metals or gold might be a decent guess at something that will increase in value whatever society does, but the real treasure trove will be things like first editions of books you expect might become classics, original paintings by artists who might become very trendy in the 25th Century, or photographs of an important historic event which will become disputed or lionised in the future (my best bet would be anything involving the relationship between China and America if we’re talking a few centuries, and pre-technology parts of Africa if we’re talking millennia). It’s hard to believe even a post-singularity society won’t have some social signalling remaining, so you’ve got a respectable chance of finding a buyer for these artefacts. These fantastically valuable artefacts can be used to pay your way in a society where – thanks to the Flynn effect – you will have an IQ which breaks the curve at the ‘dangerously stupid’ end, and you might not be able to survive otherwise. Be careful nobody knows you’re doing this, otherwise your cryopod will be raided like an Egyptian tomb! Even disregarding the financial angle, it might be a good idea to freeze yourself with e.g. a beloved pet, or the complete works of Shakespeare. This ensures that even if future society is totally different to what you were expecting, you will still have some information-age artefacts to protect you from culture-shock.

·         (Inadvisable and high-risk) Game-theory cryonics: Set up an alarm on your cryonics pod that thaws you after five hundred years. This is insurance against the possibility that society is able to unfreeze you, but chooses not to, since no society would just let you die (you hope). You could go more supervillain-y than this by planting a deadly bomb somewhere, timed to go off in five hundred years unless you enter a 128-digit disarming key. This should incentivise society to develop revivification processes as a matter of urgency. Bear in mind that if it is easier for future society to develop extremely strong counter-cryptography or radiation shielding, your plan may backfire as research that would have gone into cryopreservation is redeployed to stop your diabolical scheme.
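The compound-interest idea above is easy to sketch: a fund that beats inflation by epsilon per year grows exponentially in real terms, so its share of a slower-growing economy keeps rising. The principal and epsilon below are invented for illustration:

```python
# Sketch of compound-interest cryonics (hypothetical numbers).
# A fund beating inflation by `epsilon` per year grows exponentially
# in real (inflation-adjusted) terms.

def real_value(principal, epsilon, years):
    """Real value of the fund after `years` at `epsilon` above inflation."""
    return principal * (1 + epsilon) ** years

fund = real_value(10_000, 0.01, 500)   # £10k, 1% above inflation, 500 years
print(round(fund))                      # ≈ £1.4m in today's money
```

Even a tiny epsilon dominates eventually, which is the whole point of the strategy; the legal obstacle (rules against perpetuities) is the binding constraint, not the arithmetic.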

I think most of these strategies have never been written about before, and those that have were throwaway thought experiments on LW. Given that the space of possible cryonics strategies is much bigger than the one cryonics advocates appear to instinctively gravitate towards, I conclude that it is very unlikely there has been a serious effort to optimise the cryonics process beyond the scientific advances made by Alcor (and hence it is very unlikely we have all hit upon the optimal strategy by chance). This is especially true because the optimal strategy in some cases depends on the probability that the future resembles certain kinds of predictions, and I know people on LW disagree over those predictions. For example, the ratio of culturally-valuable artefacts to sanity-preserving artefacts you should take with you probably depends on the relative likelihood you assign to a post-scarcity versus a post-singularity world being the one to revive you. I’m not in a very good position to make that particular judgement myself, but I am in a good position to say that there is a very real opportunity cost to considering a narrow strategy space when considering life-extending strategies, just as there is an opportunity cost when considering over-narrow Monopoly strategies. In the first case, the impact of your decision might result in you throwing your life away. In the second, it only feels like it does.

How long will Alcor be around?

30 Froolow 17 April 2014 03:28PM

The Drake equation for cryonics is pretty simple: work out all the things that need to happen for cryonics to succeed one day, estimate the probability of each thing occurring independently, then multiply all those numbers together. Here’s one example of the breakdown from Robin Hanson. According to the 2013 LW survey, LW believes the average probability that cryonics will be successful for someone frozen today is 22.8% assuming no major global catastrophe. That seems startlingly high to me – I put the probability at at least two orders of magnitude lower. I decided to unpick some of the assumptions behind that estimate, particularly focussing on assumptions which I could model.
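The multiplication such a breakdown describes can be sketched in a few lines. The step names and probabilities below are placeholders for illustration, not Hanson’s figures or my own estimates:

```python
# Toy version of the cryonics 'Drake equation': multiply independent
# probability estimates for each thing that must go right.
# All numbers are invented for illustration.
from math import prod

steps = {
    "frozen promptly and well":        0.9,
    "company survives until revival":  0.4,
    "no global catastrophe":           0.8,
    "revival tech is ever developed":  0.5,
    "someone chooses to revive you":   0.7,
}

p_success = prod(steps.values())
print(f"{p_success:.3f}")   # 0.9*0.4*0.8*0.5*0.7 ≈ 0.101
```

Note how quickly the product shrinks: five individually plausible-looking terms already land around 10%, which is why the 22.8% survey average implies very high confidence in every single term.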

EDIT: This needs a health warning; here be overconfidence dragons. There are psychological biases that can lead you to estimate these numbers badly depending on the number of terms you're asked to evaluate, statistical biases that lead to correlated events being evaluated independently by these kinds of models, and overall this can lead to suicidal overconfidence if you take the nice neat number these equations spit out as gospel.

Every breakdown includes a component for ‘the probability that the company you freeze with goes bankrupt’, for obvious reasons. In fact, the probabilities of bankruptcy (and global catastrophe) are particularly interesting because they are the only terms which are ‘time dependent’ in the usual Drake equation. What I mean by this is that if you know your body will be frozen intact forever, then it doesn’t matter to you when effective unfreezing technology is developed (except to the extent you might have a preference to live in a particular time period). By contrast, if you know safe unfreezing techniques will definitely be developed one day, it matters very much that this occurs sooner rather than later, because if your body thaws before these techniques are developed then they are totally wasted on you.

The probability of bankruptcy is also very interesting because – I naively assumed last week – we must have excellent historical data on the probability of bankruptcy given the size, age and market penetration of a given company. From this – I foolishly reasoned – we must be able to calculate the actual probability of the ‘bankruptcy’ component in the Cryo-Drake equation and slightly update our beliefs.

I began by searching for the expected lifespan of an average company and got two estimates which I thought would serve as useful upper and lower bounds. Startup companies have an average lifespan of four years. S&P 500 companies have an average lifespan of fifteen years. My logic here was that startups must be the most volatile kind of company, S&P 500 companies must be the least volatile, and cryonics firms must be somewhere in the middle. Since the two sources only report the average lifespan, I modelled the average as a half-life. The results really surprised me; take a look at the following graph:

(http://imgur.com/CPoBN9u.jpg)

Even assuming cryonics firms are as well managed as S&P 500 companies, a 22.8% chance of success depends on every single other factor in the Drake equation being absolutely certain AND unfreezing technology being developed within 37 years.
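The half-life model behind that graph can be sketched as follows. This simple decay formula may not reproduce the post’s exact figures (the original modelling details aren’t given), but it captures the idea of treating an average lifespan as a half-life:

```python
# Half-life model of company survival: if the average lifespan is
# treated as a half-life, survival probability decays exponentially.
# 4 years (startups) and 15 years (S&P 500) are the figures from the post.

def p_survival(years, half_life):
    """Chance a company is still alive after `years` under a half-life model."""
    return 0.5 ** (years / half_life)

print(p_survival(37, 15))   # S&P-500-quality firm surviving 37 years: ≈ 0.18
print(p_survival(40, 4))    # startup-quality firm surviving 40 years: ≈ 0.001
```

The key (and, as it turns out below, wrong) assumption is memorylessness: under this model a hundred-year-old firm is exactly as likely to fail next year as a one-year-old firm.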

But I noticed I was confused; Alcor has been around forty-ish years. Assuming it started life as a small company, my model puts the chance of that happening at about one in ten thousand. That both Alcor AND The Cryonics Institute have been successfully freezing people for forty years seems literally beyond belief. I formed some possible hypotheses to explain this:

  1. Many cryo firms have been set up, and I only know about the successes (a kind of anthropic argument)
  2. Cryonics firms are unusually well-managed
  3. The data from one or both of my sources was wrong
  4. Modelling an average life expectancy as a half-life was wrong
  5. Some extremely unlikely event that is still more likely than the one in billion chance my model predicts – for example the BBC article is an April Fool’s joke that I don’t understand.

I’m pretty sure I can rule out 1; if many cryo firms had been set up I’d expect to see (say) four lasting twenty years and eight lasting ten years, but in fact we see one lasting about five years and two lasting indefinitely. We can also probably rule out 2; if cryo firms were demonstrably better managed than S&P 500 companies, the CEO of Alcor could go and run Microsoft and use the pay differential to support cryo research (if he was feeling altruistic). Since I can’t do anything about 5, I decided to focus my analysis on 3 and 4. In fact, I think 3 and 4 are both correct explanations. My source for the S&P 500 companies counted dropping out of the S&P 500 as a company ‘death’, when in fact a company might drop out because it got taken over, because its industry became less important (but kept existing) or because other companies overtook it – your company can’t do anything about Facebook or Apple displacing it from the S&P 500, but Facebook and Apple don’t make you any more likely to fail. Additionally, modelling the average lifespan as a half-life was flawed; a company that has survived one hundred years and a company that has survived one year are not equally likely to collapse!

Consequently I searched Google Scholar for a proper academic source. I found one, but I should introduce the following caveats:

  1. It is UK data, so may not be comparable to the US (my understanding is that the US is a lot more forgiving of a business going bankrupt, so the UK businesses may liquidate slightly less frequently).
  2. It uses data from 1980. As well as being old data, there are specific reasons to believe that this time period overestimates the true survival of companies. For example, the mid-1980’s was an economic boom in the UK and 1980-1985 misses both major UK financial crashes of modern times (Black Wednesday and the Sub-Prime Crash). If the BBC is to be believed, the trend has been for companies to go bankrupt more and more frequently since the 1920’s.

I found it really shocking that this question was not better studied. Anyway, the key table that informed my model was this one, which unfortunately seems to break the website when I try to embed it. The source is Dunne, Paul, and Alan Hughes. "Age, size, growth and survival: UK companies in the 1980s." The Journal of Industrial Economics (1994): 115-140.

You see on the left the size of the company in 1980 (£1 in 1980 is worth about £2.5 now). On the top is the size of the company in 1985, with additional columns for ‘taken over’, ‘bankrupt’ or ‘other’. Even though a takeover might signal the end of a particular product line within a company, I have only counted bankruptcies as representing a threat to a frozen body; it is unlikely Alcor will be bought out by anyone unless they have an interest in cryonics.

The model is a Discrete Time Markov Chain analysis in five-year increments. What this means is that I start my hypothetical cryonics company at <£1m and then allow it to either grow or go bankrupt at the rate indicated in the article. After the first period I look at the new size of the company and allow it to grow, shrink or go bankrupt in accordance with the new probabilities. The only slightly confusing decision was what to do with takeovers. In the end I decided to ignore takeovers completely, and redistribute the probability mass they represented to all other survival scenarios.
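A minimal sketch of that Markov chain, with invented transition probabilities standing in for the Dunne & Hughes table (takeover probability already redistributed across the surviving states, as described above):

```python
# Discrete Time Markov Chain sketch of company survival in 5-year steps.
# States: small firm, large firm, bankrupt (absorbing).
# These transition probabilities are placeholders, NOT the Dunne & Hughes
# figures; the structure, not the numbers, is the point.

transitions = {
    "small":    {"small": 0.70, "large": 0.15, "bankrupt": 0.15},
    "large":    {"small": 0.05, "large": 0.90, "bankrupt": 0.05},
    "bankrupt": {"bankrupt": 1.0},
}

def p_alive(years, start="small"):
    """Probability the firm is not bankrupt after `years` (5-year steps)."""
    dist = {start: 1.0}
    for _ in range(years // 5):
        new = {}
        for state, p in dist.items():
            for nxt, q in transitions[state].items():
                new[nxt] = new.get(nxt, 0.0) + p * q
        dist = new
    return 1.0 - dist.get("bankrupt", 0.0)

print(p_alive(40))   # survival probability after eight 5-year steps
```

Unlike the half-life model, this chain is not memoryless across sizes: a firm that survives long enough to grow large becomes less likely to fail in each subsequent period, which is why the survival curve flattens out instead of decaying geometrically.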

The results are astonishingly different:

(http://imgur.com/CkQirYD.jpg)

Now your body can remain frozen for 415 years and still have a 22.8% chance of revival (assuming all other probabilities are certain). Perhaps more usefully, if you estimate the year you expect revival to occur you can read across the x axis to find the probability that your cryo company will still exist by then. For example, in the OvercomingBias link above, Hanson estimates that this will occur in 2090, meaning he should probably assign something like a 0.65 chance to the probability that his cryo company is still around.

Remember you don’t actually need to estimate the year YOUR revival will occur, only the year in which the first successful revival proves that cryogenically frozen bodies are ‘alive’ in a meaningful sense and therefore receive protection under the law if your company goes bankrupt. In fact, you could instead estimate the year Congress passes a ‘right to not-death’ law which would protect your body in the event of a bankruptcy even before routine unfreezing, or the year when brain-state scanning becomes advanced enough that it doesn’t matter what happens to your meatspace body because a copy of your brain exists on the internet.

My conclusion is that the survival of your cryonics firm is a lot more likely than the average person in the street thinks, but probably a lot less likely than you think if you are strongly into cryonics. This is probably not news to you; most of you will be aware of over-optimism bias, and will have tried to correct for it. Hopefully these concrete numbers will be useful next time you consider the Cryo-Drake equation and the net present value of investing in cryonics.