In response to Gaming Democracy
Comment author: HopefullyCreative 31 July 2014 08:36:44AM 1 point [-]

I think you're not giving some basic mechanics enough credit here. Yes, many people certainly only vote for the main parties because they feel that their vote may be "wasted" on a minor party. However, this raises the question "How did the main parties become the main parties anyway?" Anyone considering how to succeed as a minor party must inevitably answer that question.

If you look at the behavior and voting patterns of people, they are actually quite unconcerned with empirical data. Instead they are concerned with "virtue". Is the candidate "virtuous" in their eyes? Does the party express and support virtues that they hold dear? Big parties campaign on broad-ranging virtues and therefore gather a great deal of support. You mentioned gun control opponents in the United States, and that is an excellent example of this. These people may actually vary a great deal on other "virtue" issues such as homosexual marriage, but they all personally believe that their legal rights are fundamentally secured by an armed and capable populace. Therefore anyone who campaigns on the perceived virtue of "guns are moral, because they defend people and secure our rights" has a large number of potential supporters.

Therefore, if one is trying to get a smaller party off the ground, the key is to take ownership of a series of virtues and run a persuasive campaign to help the public not only accept but believe in them. In other words, the party must relate to the public and the public must relate to it. There are of course mitigating circumstances: a person who champions a "higher virtue" will win out over someone who champions an equally well-accepted "lower virtue". This is again why opposing gun control in the United States is an effective political stratagem: because people believe their rights are secured by those arms, all other rights and virtues become subservient to the ability to own modern, effective weapons.

Comment author: Froolow 31 July 2014 06:08:34PM 1 point [-]

I certainly don't disagree with your analysis, but I think I might not have been clear enough with the endgame of this potential strategy; I don't think this is a good strategy to succeed as a minor party, because no matter how virtuous you make transhumanism sound, people are always going to care more about the economy or defence. But I think you can probably find enough people who care more about transhumanism than they do about the marginal difference between the economic policy of the two main parties. So the 'transhuman' party will never get off the ground, but it may have enough power to swing a marginal seat for one of the two main parties, in exchange for agreement to vote a certain way on a certain issue.

Whether or not you could parlay that into a successful minor party is a much harder question!

In response to comment by Punoxysm on Gaming Democracy
Comment author: Lumifer 30 July 2014 02:54:27PM 1 point [-]

This is a very long-winded way to re-invent the idea of a lobbying organization.

Actually, I think, it's just re-inventing the idea of a political party.

In response to comment by Lumifer on Gaming Democracy
Comment author: Froolow 30 July 2014 05:21:02PM 2 points [-]

I think both of you are incorrect. This leverages a specific flaw in the FPTP system (it gives a small, tightly coordinated group in a swing seat a disproportionate amount of power) which doesn't exist in a PR system. Insofar as both political parties and lobby groups can exist in a PR system, this cannot be either of those things, since it could not exist in a PR system.

More specifically, it is not a political party because (amongst other things) it has no general platform and does not seek to acquire power. It is also not a lobby group because it doesn't really 'lobby' in any meaningful sense to get the law changed. I think the example of the NRA is a red herring - it is hard to believe the NRA is well-enough coordinated to get a large number of its members to vote for a party they don't like. Do you have any evidence they have ever been successful at swinging a seat in this way?

Gaming Democracy

8 Froolow 30 July 2014 09:45AM

I live in the UK, which has a voting structure very similar to the US's for the purposes of this article. Nevertheless, it may differ on the details, for which I am sorry. I also use a couple of real-life political examples which I hope are uncontroversial enough not to break the unofficial rules here; if they are not, I can change them. This is a discussion of gaming democracy by exploiting swing seats to push rationalist causes.

Cory Doctorow writes in the Guardian about using Kickstarter-like thresholds to encourage voting for minority parties:

http://www.theguardian.com/technology/2014/jul/24/how-the-kickstarter-model-could-transform-uk-elections

He points out that nobody votes for minority parties because nobody else votes for them; a vote "wasted" on Yellow is one fewer vote for the not-quite-so-bad Green candidate who might stop the hated Blue candidate getting in. He argues that you could use the internet to inform people when some pre-set threshold had been reached with respect to voting for a minor party, and thus encourage them to get out and vote. So for example if the margin of victory was 8000 votes and 9000 people agreed with the statement, “If more than 8000 people agree to this statement, then I will go to the polls on election day and vote for the minority Yellow party”, the minority Yellow party would win power even though none of the original 9000 participants would have voted Yellow without the information-coordinating properties of the internet.
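The threshold mechanism described above is essentially an assurance contract. A minimal sketch (the 9000/8000 figures come from the example above; the function is my own illustration):

```python
def votes_cast(pledged, threshold):
    """Pledges convert into actual votes only if enough people signed up;
    below the threshold, nobody changes their behaviour, so no vote is 'wasted'."""
    return pledged if pledged > threshold else 0

print(votes_cast(pledged=9000, threshold=8000))  # 9000: the Yellow pledge fires
print(votes_cast(pledged=7500, threshold=8000))  # 0: pledgers vote as they would have anyway
```

The point of the conditional structure is that pledging is costless when the threshold fails, which removes the coordination risk that normally deters minority-party voters.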

I’m not completely sure of the argument, but I looked into some of the numbers myself. There are 23 UK seats (roughly equivalent to Congressional Districts for US readers) with a margin of 500 votes or fewer. So to hold the balance of power in these seats you need to find either 500 non-voters who would be prepared to vote the way you tell them, or 250 voters with the same caveats (voters are worth twice as much as non-voters to the aspiring seat-swinger, since a vote taken from the Blues lowers the margin by one, and a vote given to the Greens lowers the margin by one, and every voter is entitled to both take a vote away from the party they are currently voting for and award a vote to any party of their choice). I’ll call the number of votes required to swing a seat the ‘effective voter’ count, which allows for the fact that some voters count for two.
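The parenthetical arithmetic above can be sketched in a few lines (a toy model of mine; only the 500-vote margin and the two-for-one weighting come from the text):

```python
def can_swing_seat(margin, switchers=0, new_voters=0):
    """Check whether a bloc can close a seat's margin of victory.

    A switcher (someone currently voting for the leading party) counts
    double: their defection takes one vote from the leader and gives one
    to the challenger. A recruited non-voter counts once.
    """
    return 2 * switchers + new_voters >= margin

# A 500-vote margin falls to 250 switchers, 500 new voters, or any mix:
assert can_swing_seat(500, switchers=250)
assert can_swing_seat(500, new_voters=500)
assert can_swing_seat(500, switchers=100, new_voters=300)
```

So the 'effective voter' count of a bloc is `2 * switchers + new_voters`, which is why a small, coordinated group of existing voters punches above its nominal size.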

It doesn’t sound impossible to me to reach the effective voter count for some swing constituencies, given that even extremely obvious parody parties can often win back their deposit (500 actual votes, not even ‘effective votes’).

Doctorow wants to use the information co-ordination system to help minority parties reach a wider audience. I think it could be used in a much more active way: to force policy promises on uncontroversial but low-status issues from potential future MPs. Let me take as an example ‘Research funding for transhuman causes’. Most people don’t know what transhumanism is, and most people who do know what it is don’t care. Most people who know what it is and care are basically in support of research into transhuman augmentations, but would definitely rank issues like the economy or defence as more important. There is a small constituency of people who oppose transhumanism outright, but they are not single-issue voters either by any means (I imagine opposing transhumanism is strongly correlated with a ‘traditional religious values’ cluster which includes opposing abortion, gay marriage and immigration). Politicians could therefore (almost) costlessly support a small amount of research funding for transhumanism, which would almost certainly be a sensible move when averaged across the whole country (either you discover something cool, in which case your population is made better off and your army more powerful, or you don’t, in which case at worst you get a decent multiplier effect to the economy from employing a load of materials scientists and bioengineers). However, we know that they won’t do this, because while the benefits to the country might be great, the minor cost of supporting a low-status (‘weird’) project is borne entirely by the individual politician. What I mean by this is that the politician will probably not lose any votes by publicly supporting transhumanism, but will lose status among their peers and will want to avoid this. There is also a small risk of losing votes from the ‘traditional values’ cluster by supporting transhuman causes, and no obvious demographic with whom supporting transhuman causes gains votes.

This indicates to me that if enough pro-transhumans successfully co-ordinated their action, they could bargain with the politicians standing for office. Let us say there are unequivocally enough transhumans to meet the effective voter threshold for a particular constituency. One person could go round each transhuman (maybe on that city’s subreddit) and get them to agree in principle to vote for whichever candidate will agree to always vote ‘Yes’ on research funding for transhuman causes, up to a maximum of £1bn. Each transhuman might have a weak preference for Blues vs Greens or vice versa, but the appeal is made to their sense of logic; each Blue vote is cancelled out by each Green vote, but each ‘Transhuman’ vote is a step closer to getting transhumanism properly funded, and transhumanism is more important than any marginal policy difference between the two parties. You then go to each candidate and present the evidence that the ‘transhuman’ block has the power to swing the election and is well co-ordinated enough to vote as a bloc on election day. If both candidates agree that they will vote ‘Yes’ on the bills you decided on, then send round an electronic message saying – essentially – “Vote your conscience”. If one candidate says ‘Yes’ and the other ‘No’ send round a message saying “Vote Blue” (or Green). If both candidates say ‘no’ send a message saying “Vote for the Transhuman Party (which is me)” in the hope that you can demonstrate you really did hold the balance of power, to increase the weight of your negotiation in the future.
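The three-way instruction above amounts to a simple decision rule. A sketch (the candidate names and the function are my own illustration, not anything from the post; it assumes a two-candidate race):

```python
def bloc_instruction(pledges):
    """Given each candidate's pledge (True = will vote 'Yes' on the bloc's
    bills), return the message to send the bloc on election day."""
    committed = [name for name, pledged in pledges.items() if pledged]
    if len(committed) == len(pledges):
        # Everyone pledged: the bloc's votes cancel out, so release them.
        return "Vote your conscience"
    if committed:
        # Only some pledged: swing the seat to the first committed candidate.
        return f"Vote {committed[0]}"
    # Nobody pledged: vote for your own party to prove the bloc's power
    # and strengthen the next negotiation.
    return "Vote for the Transhuman Party"

print(bloc_instruction({"Blue": True, "Green": True}))    # Vote your conscience
print(bloc_instruction({"Blue": True, "Green": False}))   # Vote Blue
print(bloc_instruction({"Blue": False, "Green": False}))  # Vote for the Transhuman Party
```

The "both say no" branch is the interesting one: it sacrifices this election to make the threat credible in future ones.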

If the candidate then goes back on their word, you slash and burn the constituency and make sure that no matter what the next candidate from that party promises, they lose. Also ensure that if that candidate ever stands in a marginal seat again, they lose (effectively ending their political career). This gives a strong incentive for MPs to vote the way they promised, and for parties to allow them to vote the way they promised.

Incidentally my preferred promise to extract from the candidates (and I don’t think this works in America) is to bring a bill with a particular wording if they win a Private Members’ Ballot (a system whereby junior members enter a lottery to see whose idea for a bill gets a ‘reading’ in the House of Commons, and hence a chance of becoming a law). For example, “This house would fund £1bn worth of transhumanism basic research over the next four years”. This is because it forces MPs to take a position on an issue they otherwise would not want to touch (because it is low-status) and one way out of this bind is to pretend the issue was high-status all along, which would be a good outcome for transhumanism as it means people might start funding it without the complicated information-coordination game I describe above.

One issue with this is that some groups – for example, Eurosceptics – are happy to single-issue vote already, and there are far more Eurosceptics than there are rationalists in the UK. A US equivalent – as far as I understand – might be gun rights activists; they will vote for whichever party deregulates guns furthest, regardless of its other policies, and they are very numerous. This could be a problem, since a more numerous coalition will always beat a less numerous coalition at playing this information coordination game.

The first response is that it might actually be OK if this occurs. Being a Eurosceptic in no way implies a particular position on transhuman issues, so a politician could agree to the demands of both the Eurosceptic bloc and the transhuman bloc without issue. The numbers problem only occurs if a position on one issue automatically implies a position on another – which would only matter here if there were a large single-issue anti-transhuman voting bloc, and there isn't one. There is a small problem if someone is both a Eurosceptic and a transhuman, since you can only categorically agree to vote the way one bloc tells you, but this is a personal issue where you have to decide which issue is more important, and not a problem with the system as it stands.

The second response is that you are underestimating the difficulty of co-ordinating a vote in this way. For example, Eurosceptics – as a rule – will want to vote for the minority UKIP party to signal their affiliation with Eurosceptic issues. No matter what position the candidates agree to on Europe, UKIP will always be more extreme on European issues, since a candidate can only agree to sufficiently mainstream policies that the vote-cost of agreeing to the policy publicly is less than the vote-gain of winning the Eurosceptic bloc. Therefore there will be considerable temptation to defect and vote UKIP even after successfully extracting a policy pledge from a candidate, since the voter has a strong preference for UKIP over any other party. Transhumans – it is hypothesised – have a stronger preference for marginal gains in transhuman funding over any policy difference between the two major parties, and so getting them to 'hold their nose' and vote for a candidate they would otherwise not want to is easier.

It is not just transhumanism that this vote-bloc scheme might work for, but transhumanism is certainly a good example. In my mind you could co-ordinate any issue where the proposed voting bloc is:

  1. Intelligent enough to understand why voting for a candidate you don’t like might result in outcomes you do like.
  2. Sufficiently politically unaffiliated that voting for a party they disapprove of is a realistic prospect (hence I’m picking issues young people care about, since they typically don’t vote).
  3. Sufficiently internet-savvy that coordinating by email / reddit is a realistic prospect.
  4. Unopposed by any similar-sized or larger group which fits the above three criteria.
  5. Cares more about this particular issue than any other issue which fits the above four criteria.

Some other good examples of this might be opposing homeopathy on the NHS, encouraging Effective Altruism in government foreign aid, spending a small portion of the Defence budget on FAI and so on.

Are there any glaring flaws I’ve missed?

Comment author: christopherj 04 May 2014 05:14:02AM 1 point [-]

Supplemental data preservation seems like a synergistic match with cryonics. You'd want to collect vast amounts of data with little effort, so no diaries or random typing or asking friends to memorize facts. MRIs and other medical records might help; keeping a video or audio recording of everything you do, and recording everything you do with your computer, should take little time and might preserve something that aids reconstruction after cryonic preservation.

Simulation-based preservation attempts may be more likely to work than people expect, based on the logic that simulated humans likely outnumber physical humans (we could be in a simulation to determine how many simulations per human we will eventually make ourselves). However, it is clear that the simulator(s) either are already communicating with us or do not care to, and to gain any more direct access to their attention we'd have to hack the simulation, in which case there may be more clever things to do than call attention to our hacking. It is also likely that the simulators have highly advanced security technology compared to us. Alternatively, given that we are probably being simulated by other humans, and they might be watching, we may be able to appeal to their empathy.

Evolutionary Preservation and Genetic Preservation depend on a misunderstanding of genetics, Philosophical Preservation on a misunderstanding of the natures of reality vs rationalization, and Time-travel Preservation suggests that making a commitment to something that 10%-50% of humans already made will make you notable to time travelers. This sort of thing detracts from your suggestion since you're grasping at straws to find alternatives.

Granted, it's hard to find alternatives. I suppose EEG data could be collected as well, and would also have research benefits. However, like most of the other data that could be collected, it would probably only suffice as a sanity check on your cryonic reconstruction.

Comment author: Froolow 04 May 2014 07:54:15AM 0 points [-]

I don't disagree I was grasping at straws for some of the more outlandish suggestions, but this was deliberate - to try and explore the full boundaries of the strategy space. So I take most of your criticism in the constructive spirit in which it was intended, but I do think maybe you are a bit confused about 'philosophical preservation' (no doubt I explained it very badly to avoid using the word 'religion'). My point is not that you convince yourself, "I will live forever because all life is meaningless and hence death is the same as life", it is that you find some philosophical argument that indicates a plausible strategy and then do that strategy. A simple example would be that you discover an argument which really genuinely proves Christianity offers salvation and then get baptised, or prove to your satisfaction that the soul is real and then pay a medium to continue contacting you after you die. Again, I agree this is outlandish but there must be something appealing about the approach because it is unquestionably the most popular strategy on the list in a worldwide sense.

Comment author: Nornagest 03 May 2014 08:27:20PM *  2 points [-]

If you can select from any one of thirty keys on your keyboard then every ten letters you type has 10^15 bits of entropy,

A little less than 50 bits of entropy, actually, if you're choosing truly randomly. Total entropy of a sequence scales additively with additional choices, not multiplicatively: four coin tosses generate four bits of entropy. 50 bits is enough to specify an option from a space of around 10^15, but the configuration space of those 10^10–10^11 neurons in the human brain is vastly larger than that.
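A quick check of the arithmetic, assuming truly uniform random choices among thirty keys:

```python
import math

bits_per_key = math.log2(30)      # one uniform choice among 30 keys
bits_per_ten = 10 * bits_per_key  # entropy is additive across independent choices
print(round(bits_per_ten, 2))     # 49.07 bits: "a little less than 50"

# 50 bits indexes a space of roughly 10^15 options, not 10^15 *bits*:
print(f"{2**50:.1e}")             # 1.1e+15
```

The original comment's 10^15 figure is the *size of the option space*, mistaken for a bit count; the two differ by a factor of about 2 * 10^13.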

Comment author: Froolow 04 May 2014 07:46:56AM 0 points [-]

I didn't know that. Fair enough - it seems likely 'signal preservation' is much more costly than I originally realised and not worth pursuing (I think the likelihood of revivification is the same as or better than cryonics, but the cost in terms of hours spent tapping at a keyboard is more than any human could pay in one lifetime).

Comment author: The_Duck 03 May 2014 05:13:12PM *  8 points [-]

something like 'simulationist' preservation seems to me to be well within two orders of magnitude of the probability of cryonics - both rely on society finding your information and deciding to do something with it

I don't know if I agree with your estimate of the relative probabilities, but I admit that I exaggerated slightly to make my point. I agree that this strategy is at least worth thinking about, especially if you think it is at all plausible that we are in a simulation. Something along these lines is the only one of the listed strategies that I thought had any merit.

A priori it seems hugely unlikely that with all of our ingenuity we can only come up with two plausible strategies for living forever (religion and cryonics)

I agree, and I also think we should try to think up other strategies. Here are some that people have already come up with besides cryonics and religion:

  • Figure out how to cure aging before you die.

  • Figure out how to upload brains before you die.

  • Create a powerful AI and delegate the problem to it (complementary to cryonics if the AI will only be created after you die).

Comment author: Froolow 04 May 2014 07:44:36AM 2 points [-]

This is an excellent comment, and it is extremely embarrassing for me that in a post on the plausible 'live forever' strategy space I missed three extremely plausible strategies for living forever, all of which are approximately complementary to cryonics (unless they're successful, in which case why would you bother?). I'd like to take this as evidence that many eyes on the 'live forever' problem genuinely do result in a utility increase, but I think the more plausible explanation is that I'm not very good at visualising the strategy space!

Comment author: Nornagest 02 May 2014 08:09:25PM *  2 points [-]

My take on it wouldn't be so much that it's unlikely to contain meaningful information as that it's unlikely to contain enough meaningful information. Whatever (almost certainly very bad) PRNG function you're implementing when you type out random strings, it's not going to leak more than a bit of brain state per bit of output, and most likely very much less than that. Humans have tens of billions of neurons and up to about 10^15 synapses; even under stupidly optimistic assumptions about neurological information storage and state sampling, getting all of that out would take many lifetimes' worth of typing.
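The "many lifetimes" claim is easy to sanity-check with a back-of-envelope calculation (the 10^15-synapse count is from the comment above; the typing speed and the one-bit-per-keystroke leak rate are my own assumptions, and the leak rate is deliberately the "stupidly optimistic" end):

```python
synapses = 10**15          # upper synapse estimate from the comment
bits_per_keystroke = 1     # optimistic leak rate (real PRNG-like typing leaks far less)
keystrokes_per_sec = 5     # roughly 60 words per minute, nonstop

seconds = synapses / (bits_per_keystroke * keystrokes_per_sec)
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.0f} years")  # millions of years of continuous typing
```

Even granting one full bit of brain state per keystroke, the required output dwarfs a human lifespan by several orders of magnitude, which is the comment's point.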

Comment author: Froolow 03 May 2014 10:22:21AM 1 point [-]

I basically agree with you that the strategy seems pretty unlikely. But I think you are over-harsh on it; you don't need to reconstruct the entire brain, just the stuff that deals with personal identity. If you can select from any one of thirty keys on your keyboard then every ten letters you type has 10^15 bits of entropy, so it seems possible that if somebody knew absolutely everything about the state you were in when typing they could reconstruct you just from this. You are also not restricted to tapping away randomly - I suspect words or sentences would leak way more than pseudorandom tapping. At any rate, this strategy is almost free, so you'd need astonishingly good reasons not to attempt it if you plan on attempting cryonics.

I think those reasons exist (I'm skeptical the information would survive) but I don't think the theory is quite as much in the lunatic fringe as you do.

Comment author: The_Duck 03 May 2014 01:55:14AM 9 points [-]

Personally, I don't find any of the strategies you mention to be plausible enough to be worth thinking about for more than a few seconds. (Most of them seem obviously insufficient to preserve anything I would identify as "me.") I'm worried this may produce the opposite of this post's intended effect, because it may seem to provide evidence that strategies besides cryonics can be easily dismissed.

Comment author: Froolow 03 May 2014 10:09:15AM 2 points [-]

I think the plausibility of the arguments depends in very great part on how plausible you think cryonics is; since the average estimate on this site is about 22%, I can see how other strategies which are low-likelihood/high-payoff might appear almost not worth considering. On the other hand, something like 'simulationist' preservation seems to me to be well within two orders of magnitude of the probability of cryonics - both rely on society finding your information and deciding to do something with it, and both rely on the invention of technology which appears logically possible but well outside the realms of current science (overcoming death vs overcoming computational limits on simulations). But simulation preservation is three orders of magnitude cheaper than cryonics, which suggests to me that it might be worthwhile to consider. That is to say, if you seriously dismissed it in a couple of seconds you must have very, very strong reasons to think the strategy is - say - about four orders of magnitude less likely than cryonics. What reason is that? I wonder if I assumed the simulation argument was more widely accepted here than it actually is. I'm a bit concerned about this line of reasoning, because all of my friends dismiss cryonics as 'obviously not worth considering', and I think they adopt this position because the probabilistic conclusions are uncomfortable to contemplate.

With respect to your second point, that this post could be counter-productive, I am hugely interested by the conclusion. A priori it seems hugely unlikely that with all of our ingenuity we can only come up with two plausible strategies for living forever (religion and cryonics), and that each of those strategies would be anathema to the other group. If the 'plausible strategy-space' is not large, I would take that as evidence that the strategy-space is in fact zero and people are just good at aggregating around plausible-but-flawed strategies. Can you think of any other major human accomplishment for which the strategy-space is so small? I suspect the conclusion is that I am bad at thinking up alternate strategies, rather than that the strategies don't exist, but it is an excellent point you make and well worth considering.

Comment author: ChristianKl 02 May 2014 04:17:58PM 1 point [-]

Signal Preservation: Obsessively generate long streams of nonsense binary based on tapping randomly at a keyboard. Assume that these long strings must correspond in some way to brain states, and that future mathematics will be advanced enough to untangle the signal from the noise.

Garbage in, garbage out. There's no reason to think you would get meaningful information that way.

Genetic Preservation: Take genetic samples of yourself and preserve them in a platinum-iridium bar in binary. Hope that personality is very largely genetic, and the proportion that isn’t can be reconstructed from statistical analysis of the time period in which you live (perhaps by employing Diarist Preservation in tandem).

We know that while twins often have similar personalities, they are still different people.

Comment author: Froolow 02 May 2014 05:25:06PM 1 point [-]

I'm not sure I agree with your analysis of the first - it is reasonable to assume that when a person generates pseudorandom noise they are masking a 'signal' with some amount of true randomness; we don't know enough to say for absolute certain that the input is totally garbage, and we have good reason to believe people are actually very bad at generating random numbers. Contrast that with - for example - the fact that we have pretty good reasons to think that bringing someone back from the dead is a hard project, and I don't think you're applying the same criteria fairly across preservation methods.

Comment author: trist 02 May 2014 03:21:44PM 0 points [-]

Also, avoiding dying in ways that destroy brain state. I'm not sure how probable those are, or how easy they are to avoid, and if that includes dementia (and so on) it gets rather common and tricky.

Comment author: Froolow 02 May 2014 03:30:56PM 2 points [-]

This is very true. I agonised about including a 'Structure your life in such a way that you minimise the probability of a death which destroys your brain' option, but decided in the end that a pedant could argue that such a change to your lifestyle might decrease your total lifetime utility, and so isn't worth it for certain probabilities of cryonics' success.
