
Nassim Taleb on Election Forecasting

7 NatashaRostova 26 November 2016 07:06PM

Nassim Taleb recently posted this mathematical draft of election forecasting refinement to his Twitter.

The math isn’t essential to seeing why it’s so cool. His model seems to be that we should forecast the election outcome, accounting for the uncertainty between now and election day, rather than build a forecast that takes current poll numbers and implicitly assumes nothing changes.
The mechanism of his model focuses on forming an unbiased time series using stochastic methods. The mainstream methods, by contrast, are multilevel Bayesian models that estimate how the election would turn out if it were held today.
 
That seems like it makes more sense. While it’s safe to assume a candidate will always want to have the highest chances of winning, the process by which two candidates interact is highly dynamic and strategic with respect to the election date.

When you stop to think about it, it’s actually remarkable that presidential elections come so incredibly close to 50-50, with a 3-5% margin of victory generally counting as immense. This reflects the underlying dynamics of political game theory.

(At the more local level this isn’t always true, due to issues such as incumbent advantage, local party domination, strategic funding choices, and various other issues. The point though is that when those frictions are ameliorated due to the importance of the presidency, we find ourselves in a scenario where the equilibrium tends to be elections very close to 50-50.)
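As a toy illustration of that equilibrium (my own sketch, not anything from Taleb's draft): in a one-dimensional Hotelling/median-voter model, two purely office-seeking candidates who keep nudging their platforms toward whatever wins more votes converge on the median voter, and the vote splits very close to 50-50.

# Toy median-voter dynamic (illustrative only; all numbers assumed).
# Each voter picks the closer platform; each candidate keeps any small
# platform shift that increases their vote share.
import numpy as np

rng = np.random.default_rng(0)
voters = rng.normal(0, 1, 10_001)   # voter ideal points on one dimension
a, b = -2.0, 2.0                    # initial candidate platforms

def share(x, y):
    # fraction of voters strictly closer to platform x than to y
    return np.mean(np.abs(voters - x) < np.abs(voters - y))

for _ in range(200):
    for step in (0.05, -0.05):
        if share(a + step, b) > share(a, b):
            a += step
        if share(b + step, a) > share(b, a):
            b += step

print(a, b, share(a, b))   # both platforms near the median; share near 0.5

Once both candidates sit at the median, neither can gain by moving, and the outcome is decided by noise around 50-50; the frictions in the parenthetical above are exactly what keeps local races away from this equilibrium.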

So, back to the mechanism of the model: Taleb imposes a no-arbitrage condition (borrowed from options pricing) to impose consistency over time on the forecast, as evaluated by the Brier score. This is a similar concept to financial options, where you can go bankrupt or make money even before the final event. In Taleb's world, if a guy like Nate Silver is creating forecasts that vary wildly over time prior to the election, this suggests he hasn't put any time-dynamic constraints on his model.

The math, though, rests on the assumption that under high uncertainty, far out from the election, the best forecast is close to 50-50. That assumption would have to be empirically tested. Still, stepping aside from the math, it does feel intuitive that an election forecast that swings widely a year away from the event is not worth relying on, and that sticking closer to 50-50 would offer a better full-sample Brier score.
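To make that concrete, here is a minimal simulation sketch (my toy model under assumed parameters, not Taleb's actual derivation): let the vote share follow a Gaussian random walk until election day, and compare the average Brier score of a naive forecaster, who treats today's leader as the near-certain winner, against a 'shaded' forecaster that shrinks its probability toward 50-50 in proportion to the volatility still to come.

# Toy comparison (all parameters assumed): the vote share drifts as a
# Gaussian random walk for T days; the shaded forecaster discounts
# today's lead by the volatility still to come, the naive one doesn't.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
T, sigma, n_sims = 365, 0.004, 50_000   # days left, daily vol, simulations

def avg_brier(shade):
    p0 = 0.5 + rng.normal(0, 0.02, n_sims)          # today's polled vote share
    spread = sigma * np.sqrt(T) if shade else 1e-6  # future uncertainty priced in
    forecast = norm.cdf((p0 - 0.5) / spread)        # P(candidate wins)
    p_final = p0 + rng.normal(0, sigma * np.sqrt(T), n_sims)
    return np.mean((forecast - (p_final > 0.5)) ** 2)

print("naive Brier :", avg_brier(False))   # overconfident, scores worse
print("shaded Brier:", avg_brier(True))    # shrinks to 50-50, scores better

The shaded forecast is just the probability that the random walk ends above 50% given today's polls, which is why it scores better; a realistic model would also have to estimate the future volatility rather than assume it.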


I'm not familiar enough with the practical modelling to say whether this is feasible. Sometimes the ideal models are too hard to estimate.

I'm interested in hearing any thoughts on this from people who are familiar with forecasting or have an interest in the modelling behind it.

I also have a specific question to tie this back to a rationality-based framework: when you read Silver (or your preferred reputable election forecaster; I like Andrew Gelman) post their forecasts prior to the election, do you accept them as equal to or better than any estimate you could come up with? Or do you make a mental adjustment or discount based on some factor you think they've left out, whether that's prediction-market divergence or adjustments based on perceived changes in nationalism or politician-specific skills? (For example, Scott Adams claimed to be able to predict that Trump would persuade everyone to vote for him. While it's tempting to write him off as a pundit charlatan, or to claim he doesn't have sufficient proof, we also can't prove his model was wrong.) I'm interested in learning the reasons we might reasonably disagree with or be skeptical of polls, knowing of course that any such adjustment must be tested to know the true answer.

This is my first LW discussion post -- open to feedback on how it could be improved.

[Link] Polarization is the problem, "normalization" is the answer

3 Jacobian 23 November 2016 05:40PM

[Link] Irrationality is the worst problem in politics

-14 Gleb_Tsipursky 21 November 2016 04:53PM

Group rationality -- bridging the gap in a post-truth world

0 rosyatrandom 18 November 2016 01:44PM

Everyone on this site obviously has an interest in being, on a personal level, more rational. That's, without need for argument, a good thing. (Although, if you do want to argue that, I can't stop you...)

But...

As a society, we're clearly not very rational, and it's becoming a huge problem. Look at any political articles out there, and you'll see the same thing: angry people partitioned into angry groups, yelling at each other and confirming their own biases. The level of discourse is... low, shall we say. 

While the obvious facet of rationality is trying to discern the signal above the noise, there's definitely another side: the art of convincing others. That can swing a little too close to sophistry and an emphasis on personal gain, though. What we really need to do is outreach: promote rationality in the world around us. There's probably no-one reading this who hasn't been in an argument where being more rational and right didn't help at all, and maybe even made things worse. We've probably all been on the other side of that, too. Admit it. But possibly the key word there is 'argument': it frames the discussion as a confrontation, a fight that needs to be won.

Being the calm, rational person in a fight doesn't always work, though. It only takes one party to want a fight to have one, after all. When there are groups involved, the shouty, passionate people tend to dominate, too. And they're currently dominating politics, and so all our lives. That's not a status quo any rationalist would be happy with, I think.

One of the problems with political/economic discussions is that we get polarised into taking absurd blanket positions and being unable to admit limitations or counter-arguments. I'm generally pretty far on the Left of the spectrum, but I will freely admit that the Right has both some very good points and a role to play: what is needed is a good dynamic tension between the two sides to ensure we don't go totally doolally either way. (Thesis, Antithesis, Synthesis etc.) And the tension is there, but it's certainly not good. We need to be able to point out failure modes to ourselves and others, encourage constructive criticism.

I think we need ways of both cooling the flames (both 1-on-1 and in groups), and strategies for promoting useful discussion.

So how can we do this? What can we do?

[Link] Maine passes Ranked Choice Voting

6 morganism 14 November 2016 08:07PM

Yudkowsky vs Trump: the nuclear showdown.

9 MrMind 11 November 2016 11:30AM

Sorry for the slightly clickbait-y title.

Some commenters have expressed, in the last open thread, their disappointment that figureheads from or near the rationality sphere seemed to have lost their cool when it came to this US election: when they were supposed to be calm and level-headed, they instead campaigned as if Trump were going to be the Basilisk incarnate.

I've not followed many commenters, mainly Scott Alexander and Eliezer Yudkowsky, and they both endorsed Clinton. I'll try to explain what their arguments were, briefly but as faithfully as possible. I'd like to know if you consider them mindkilled, and why.

Please notice: I would like this to be a comment on methodology, about whether their arguments were sound given what they knew and believed. I most definitely do not want this to decay into a lamentation about the results, or insults to the obviously stupid side, etc.

Yudkowsky made two arguments against Trump: level B incompetence and high variance. Since the second is also more or less the same as Scott's, I'll just go with those.

Level B incompetence

Eliezer attended a pretty serious and wide-ranging diplomatic simulation game, which made him appreciate how difficult it is just to maintain a global equilibrium between countries and avoid nuclear annihilation. He says that there are three levels in politics:

- level 0, where everything that the media report and the politicians say is taken at face value: every drama is true, every problem is important and every cry of outrage deserves consideration;

- level A, where you understand that politics is as much about theatre and emotions as it is about policies: at this level players operate as in pro wrestling, creating drama and conflict to steer the more gullible viewers in the preferred direction; at this level cynicism is high, and almost every conflict is a farce and probably staged.

But the buck doesn't stop there. As the diplomacy simulation taught him, there's also:

- level B, where everything becomes serious and important again. At this level people work very hard at maintaining the status quo (outside of which lies mankind's extinction): diplomatic relations and subtle international equilibria shield the world from much worse outcomes. Faux pas at this level have, in the past, resulted in wars, genocides, and general widespread badness.

In August, fifty Republican national security advisors signed a letter condemning Trump for his positions on foreign policy: these are, Yudkowsky warned us, exactly those level B players, and they are telling us that Trump is an ill-advised choice.
Trump might be a fantastic level A player, but he is an incompetent level B player, and this might very well turn to disaster.

High variance

The second argument is a more general version of the first: looking at a normal distribution, it's easy to think there are only two possibilities: you can do either worse than the average, or better. But in a multi-dimensional world, things are much more complicated. The status quo is fragile (see the first argument), and it is not surrounded by an equal measure of good and bad outcomes: most substantial deviations from the equilibrium are disasters. If you put a high-variance candidate, someone whose main selling point is subverting the status quo, in charge, then with overwhelming probability you're headed off a cliff.
People who voted for Trump are, on this view, unrealistically optimistic: they think civilization is robust, the current state is bad, and variance can help in escaping a bad equilibrium.
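The fragility argument is essentially Jensen's inequality: if utility is concave around the status quo (a narrow peak), adding variance lowers expected utility even when the mean stays put. A toy illustration (my own sketch, with made-up numbers):

# Fragile status quo modelled as a concave utility peak: same mean,
# more variance, strictly worse expectation (Jensen's inequality).
import numpy as np

rng = np.random.default_rng(1)
utility = lambda x: -x**2               # utility peaks at the status quo x = 0

low_var  = rng.normal(0, 0.1, 100_000)  # status-quo candidate
high_var = rng.normal(0, 1.0, 100_000)  # high-variance candidate

print("E[u] low-variance candidate :", utility(low_var).mean())   # ~ -0.01
print("E[u] high-variance candidate:", utility(high_var).mean())  # ~ -1.0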

[Link] Major Life Course Change: Making Politics Less Irrational

-8 Gleb_Tsipursky 11 November 2016 03:30AM

[Link] Raising the sanity waterline in politics

-15 Gleb_Tsipursky 08 November 2016 04:10PM

[Link] Voting is like donating hundreds of thousands to charity

-6 Gleb_Tsipursky 02 November 2016 09:22PM

[Link] Trying to make politics less irrational by cognitive bias-checking the US presidential debates

-6 Gleb_Tsipursky 22 October 2016 02:32AM

[Link] Politics Is Upstream of AI

4 iceman 28 September 2016 09:47PM

2016 LessWrong Diaspora Survey Analysis: Part Four (Politics, Calibration & Probability, Futurology, Charity & Effective Altruism)

10 ingres 10 September 2016 03:51AM

Politics

The LessWrong survey has a very involved section dedicated to politics. In previous analysis the benefits of this weren't fully realized. In the 2016 analysis we can look at not just the political affiliation of a respondent, but what beliefs are associated with a certain affiliation. The charts below summarize most of the results.

Political Opinions By Political Affiliation

[The charts from the original post are not reproduced here.]
Miscellaneous Politics

There were also some other questions in this section which aren't covered by the above charts.

PoliticalInterest

On a scale from 1 (not interested at all) to 5 (extremely interested), how would you describe your level of interest in politics?

1: 67 (2.182%)

2: 257 (8.371%)

3: 461 (15.016%)

4: 595 (19.381%)

5: 312 (10.163%)

Voting

Did you vote in your country's last major national election? (LW Turnout Versus General Election Turnout By Country)
Group Turnout
LessWrong 68.9%
Australia 91%
Brazil 78.90%
Britain 66.4%
Canada 68.3%
Finland 70.1%
France 79.48%
Germany 71.5%
India 66.3%
Israel 72%
New Zealand 77.90%
Russia 65.25%
United States 54.9%
Numbers taken from Wikipedia, accurate as of the last general election in each country listed at time of writing.

AmericanParties

If you are an American, what party are you registered with?

Democratic Party: 358 (24.5%)

Republican Party: 72 (4.9%)

Libertarian Party: 26 (1.8%)

Other third party: 16 (1.1%)

Not registered for a party: 451 (30.8%)

(option for non-Americans who want an option): 541 (37.0%)

Calibration And Probability Questions

Calibration Questions

I just couldn't analyze these, sorry guys. I put many hours into trying to get them into a decent format I could even read and that sucked up an incredible amount of time. It's why this part of the survey took so long to get out. Thankfully another LessWrong user, Houshalter, has kindly done their own analysis.

All my calibration questions were meant to satisfy a few essential properties:

  1. They should be 'self-contained', i.e., something you can reasonably answer, or at least try to answer, with a 5th-grade science education and normal life experience.
  2. They should, at least to a certain extent, be Fermi Estimable.
  3. They should progressively scale in difficulty, so you can see whether somebody understands basic probability. (e.g., in an 'or' question, do they assign a probability of less than 50% to being right?)
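For what it's worth, here is a minimal sketch of the sort of calibration analysis these properties are meant to support (my own illustration; the data format and all numbers are made up, since the actual responses proved too messy to parse):

# Sketch of a calibration table. Assumed input: a 0/1 correctness flag
# per answer plus the stated confidence in percent; numbers are made up.
import numpy as np

def calibration_table(correct, confidence, bins=(50, 60, 70, 80, 90, 101)):
    # bucket answers by stated confidence and compare to actual accuracy
    correct = np.asarray(correct, dtype=float)
    confidence = np.asarray(confidence, dtype=float)
    rows = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidence >= lo) & (confidence < hi)
        if mask.any():
            rows.append((f"{lo}-{hi - 1}%", int(mask.sum()), correct[mask].mean()))
    return rows

# toy usage: a well-calibrated respondent's accuracy tracks the confidence band
for band, n, acc in calibration_table([1, 0, 1, 1, 0, 1],
                                      [55, 65, 65, 85, 95, 95]):
    print(f"stated {band}: n={n}, actual accuracy={acc:.2f}")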

At least one person requested a workbook, so I might write more in the future. I'll obviously write more for the survey.

Probability Questions

Question Mean Median Mode Stdev
Please give the obvious answer to this question, so I can automatically throw away all surveys that don't follow the rules: What is the probability of a fair coin coming up heads? 49.821 50.0 50.0 3.033
What is the probability that the Many Worlds interpretation of quantum mechanics is more or less correct? 44.599 50.0 50.0 29.193
What is the probability that non-human, non-Earthly intelligent life exists in the observable universe? 75.727 90.0 99.0 31.893
...in the Milky Way galaxy? 45.966 50.0 10.0 38.395
What is the probability that supernatural events (including God, ghosts, magic, etc) have occurred since the beginning of the universe? 13.575 1.0 1.0 27.576
What is the probability that there is a god, defined as a supernatural intelligent entity who created the universe? 15.474 1.0 1.0 27.891
What is the probability that any of humankind's revealed religions is more or less correct? 10.624 0.5 1.0 26.257
What is the probability that an average person cryonically frozen today will be successfully restored to life at some future time, conditional on no global catastrophe destroying civilization before then? 21.225 10.0 5.0 26.782
What is the probability that at least one person living at this moment will reach an age of one thousand years, conditional on no global catastrophe destroying civilization in that time? 25.263 10.0 1.0 30.510
What is the probability that our universe is a simulation? 25.256 10.0 50.0 28.404
What is the probability that significant global warming is occurring or will soon occur, and is primarily caused by human actions? 83.307 90.0 90.0 23.167
What is the probability that the human race will make it to 2100 without any catastrophe that wipes out more than 90% of humanity? 76.310 80.0 80.0 22.933

 

The probability questions are probably the area of the survey I put the least effort into. My plan for next year is to overhaul these sections entirely, try including some Tetlock-esque forecasting questions, add a link to some advice on how to make good predictions, etc.

Futurology

This section got a bit of a facelift this year, with new questions on cryonics, genetic engineering, and technological unemployment in addition to the previous years' questions.

Cryonics

Cryonics

Are you signed up for cryonics?

Yes - signed up or just finishing up paperwork: 48 (2.9%)

No - would like to sign up but unavailable in my area: 104 (6.3%)

No - would like to sign up but haven't gotten around to it: 180 (10.9%)

No - would like to sign up but can't afford it: 229 (13.8%)

No - still considering it: 557 (33.7%)

No - and do not want to sign up for cryonics: 468 (28.3%)

Never thought about it / don't understand: 68 (4.1%)

CryonicsNow

Do you think cryonics, as currently practiced by Alcor/Cryonics Institute will work?

Yes: 106 (6.6%)

Maybe: 1041 (64.4%)

No: 470 (29.1%)

Interestingly enough, of those who think it will work with enough confidence to say 'yes', only 14 are actually signed up for cryonics.

sqlite> select count(*) from data where CryonicsNow="Yes" and Cryonics="Yes - signed up or just finishing up paperwork";

14

sqlite> select count(*) from data where CryonicsNow="Yes" and (Cryonics="Yes - signed up or just finishing up paperwork" OR Cryonics="No - would like to sign up but unavailable in my area" OR Cryonics="No - would like to sign up but haven't gotten around to it" OR Cryonics="No - would like to sign up but can't afford it");

34

CryonicsPossibility

Do you think cryonics works in principle?

Yes: 802 (49.3%)

Maybe: 701 (43.1%)

No: 125 (7.7%)

LessWrongers seem to be very bullish on the underlying physics of cryonics even if they're not as enthusiastic about current methods in use.

The Brain Preservation Foundation also did an analysis of cryonics responses to the LessWrong Survey.

Singularity

SingularityYear

By what year do you think the Singularity will occur? Answer such that you think, conditional on the Singularity occurring, there is an even chance of the Singularity falling before or after this year. If you think a singularity is so unlikely you don't even want to condition on it, leave this question blank.

Mean: 8.110300081581755e+16

Median: 2080.0

Mode: 2100.0

Stdev: 2.847858859055733e+18

I didn't bother to filter out the silly answers for this.

Obviously it's a bit hard to see without filtering out the uber-large answers, but the median doesn't seem to have changed much from the 2014 survey.
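Skipping the filtering is mostly harmless for the median, which, unlike the mean, barely moves in response to a few joke answers. A toy demonstration with made-up numbers:

# The median is robust to a handful of absurd entries; the mean isn't.
import numpy as np

answers = np.array([2030, 2045, 2060, 2080, 2100, 8.1e16])  # one joke answer
print(np.mean(answers))    # dragged into astronomical territory
print(np.median(answers))  # still a sane year (2070)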

Genetic Engineering

ModifyOffspring

Would you ever consider having your child genetically modified for any reason?

Yes: 1552 (95.921%)

No: 66 (4.079%)

Well that's fairly overwhelming.

GeneticTreament

Would you be willing to have your child genetically modified to prevent them from getting an inheritable disease?

Yes: 1387 (85.5%)

Depends on the disease: 207 (12.8%)

No: 28 (1.7%)

I find it amusing how the strict "No" group shrinks considerably after this question.

GeneticImprovement

Would you be willing to have your child genetically modified for improvement purposes? (eg. To heighten their intelligence or reduce their risk of schizophrenia.)

Yes : 0 (0.0%)

Maybe a little: 176 (10.9%)

Depends on the strength of the improvements: 262 (16.2%)

No: 84 (5.2%)

Yes, I know 'yes' is bugged. I don't know what causes this bug, and despite my best efforts I couldn't track it down. There is also an issue here where 'reduce their risk of schizophrenia' is offered as an example, which might confuse people; but the actual science of things cuts closer to that than to a clean separation between disease risk and 'improvement'.

 

This question is too important to just not have an answer to, so I'll compute it manually. Unfortunately I can't easily remove the 'excluded' entries so that we're dealing with the exact same distribution, but only 13 or so responses are filtered out anyway.

sqlite> select count(*) from data where GeneticImprovement="Yes";

1100

>>> 1100 + 176 + 262 + 84
1622
>>> 1100 / 1622
0.6781750924784217

67.8% are willing to genetically engineer their children for improvements.

GeneticCosmetic

Would you be willing to have your child genetically modified for cosmetic reasons? (eg. To make them taller or have a certain eye color.)

Yes: 500 (31.0%)

Maybe a little: 381 (23.6%)

Depends on the strength of the improvements: 277 (17.2%)

No: 455 (28.2%)

These numbers go about how you would expect, with people being progressively less interested the more 'shallow' a genetic change is seen as.


GeneticOpinionD

What's your overall opinion of other people genetically modifying their children for disease prevention purposes?

Positive: 1177 (71.7%)

Mostly Positive: 311 (19.0%)

No strong opinion: 112 (6.8%)

Mostly Negative: 29 (1.8%)

Negative: 12 (0.7%)

GeneticOpinionI

What's your overall opinion of other people genetically modifying their children for improvement purposes?

Positive: 737 (44.9%)

Mostly Positive: 482 (29.4%)

No strong opinion: 273 (16.6%)

Mostly Negative: 111 (6.8%)

Negative: 38 (2.3%)

GeneticOpinionC

What's your overall opinion of other people genetically modifying their children for cosmetic reasons?

Positive: 291 (17.7%)

Mostly Positive: 290 (17.7%)

No strong opinion: 576 (35.1%)

Mostly Negative: 328 (20.0%)

Negative: 157 (9.6%)

All three of these seem largely consistent with people's personal preferences about modification. Were I so inclined, I could do a deeper analysis that takes survey respondents row by row and looks at the correlation between preferences for one's own children and preferences for others'.

Technological Unemployment

LudditeFallacy

Do you think the Luddite's Fallacy is an actual fallacy?

Yes: 443 (30.936%)

No: 989 (69.064%)

We can use this as an overall measure of worry about technological unemployment, which would seem to be high among the LW demographic.

UnemploymentYear

By what year do you think the majority of people in your country will have trouble finding employment for automation related reasons? If you think this is something that will never happen leave this question blank.

Mean: 2102.9713740458014

Median: 2050.0

Mode: 2050.0

Stdev: 1180.2342850727339

Question is flawed because you can't distinguish answers of "never happen" from people who just didn't see it.

An interesting question that would be fun to compare with the estimates for the Singularity.

EndOfWork

Do you think the "end of work" would be a good thing?

Yes: 1238 (81.287%)

No: 285 (18.713%)

Fairly overwhelming consensus, but with a significant minority of people who have a dissenting opinion.

EndOfWorkConcerns

If machines end all or almost all employment, what are your biggest worries? Pick two.

Question Count Percent
People will just idle about in destructive ways 513 16.71%
People need work to be fulfilled and if we eliminate work we'll all feel deep existential angst 543 17.687%
The rich are going to take all the resources for themselves and leave the rest of us to starve or live in poverty 1066 34.723%
The machines won't need us, and we'll starve to death or be otherwise liquidated 416 13.55%
Question is flawed because it demanded the user 'pick two' instead of up to two.

The plurality of worries are about elites who refuse to share their wealth.

Existential Risk

XRiskType

Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100? (The signed percentages indicate the change from the previous survey.)

Nuclear war: +4.800% 326 (20.6%)

Asteroid strike: -0.200% 64 (4.1%)

Unfriendly AI: +1.000% 271 (17.2%)

Nanotech / grey goo: -2.000% 18 (1.1%)

Pandemic (natural): +0.100% 120 (7.6%)

Pandemic (bioengineered): +1.900% 355 (22.5%)

Environmental collapse (including global warming): +1.500% 252 (16.0%)

Economic / political collapse: -1.400% 136 (8.6%)

Other: 35 (2.217%)

Significantly more people worried about Nuclear War than last year. Effect of new respondents, or geopolitical situation? Who knows.

Charity And Effective Altruism

Charitable Giving

Income

What is your approximate annual income in US dollars (non-Americans: convert at www.xe.com)? Obviously you don't need to answer this question if you don't want to. Please don't include commas or dollar signs.

Sum: 66054140.47384

Mean: 64569.052271593355

Median: 40000.0

Mode: 30000.0

Stdev: 107297.53606321265

IncomeCharityPortion

How much money, in number of dollars, have you donated to charity over the past year? (non-Americans: convert to dollars at http://www.xe.com/ ). Please don't include commas or dollar signs in your answer. For example, 4000

Sum: 2389900.6530000004

Mean: 2914.5129914634144

Median: 353.0

Mode: 100.0

Stdev: 9471.962766896671

XriskCharity

How much money have you donated to charities aiming to reduce existential risk (other than MIRI/CFAR) in the past year?

Sum: 169300.89

Mean: 1991.7751764705883

Median: 200.0

Mode: 100.0

Stdev: 9219.941506342007

CharityDonations

How much have you donated in US dollars to the following charities in the past year? (Non-americans: convert to dollars at http://www.xe.com/) Please don't include commas or dollar signs in your answer. Options starting with "any" aren't the name of a charity but a category of charity.

Question Sum Mean Median Mode Stdev
Against Malaria Foundation 483935.027 1905.256 300.0 None 7216.020
Schistosomiasis Control Initiative 47908.0 840.491 200.0 1000.0 1618.785
Deworm the World Initiative 28820.0 565.098 150.0 500.0 1432.712
GiveDirectly 154410.177 1429.723 450.0 50.0 3472.082
Any kind of animal rights charity 83130.47 1093.821 154.235 500.0 2313.493
Any kind of bug rights charity 1083.0 270.75 157.5 None 353.396
Machine Intelligence Research Institute 141792.5 1417.925 100.0 100.0 5370.485
Any charity combating nuclear existential risk 491.0 81.833 75.0 100.0 68.060
Any charity combating global warming 13012.0 245.509 100.0 10.0 365.542
Center For Applied Rationality 127101.0 3177.525 150.0 100.0 12969.096
Strategies for Engineered Negligible Senescence Research Foundation 9429.0 554.647 100.0 20.0 1156.431
Wikipedia 12765.5 53.189 20.0 10.0 126.444
Internet Archive 2975.04 80.406 30.0 50.0 173.791
Any campaign for political office 38443.99 366.133 50.0 50.0 1374.305
Other 564890.46 1661.442 200.0 100.0 4670.805
"Bug Rights" charity was supposed to be a troll fakeout but apparently...

This table is interesting given the recent debates about how much money certain causes are 'taking up' in Effective Altruism.

Effective Altruism

Vegetarian

Do you follow any dietary restrictions related to animal products?

Yes, I am vegan: 54 (3.4%)

Yes, I am vegetarian: 158 (10.0%)

Yes, I restrict meat some other way (pescetarian, flexitarian, try to only eat ethically sourced meat): 375 (23.7%)

No: 996 (62.9%)

EAKnowledge

Do you know what Effective Altruism is?

Yes: 1562 (89.3%)

No but I've heard of it: 114 (6.5%)

No: 74 (4.2%)

EAIdentity

Do you self-identify as an Effective Altruist?

Yes: 665 (39.233%)

No: 1030 (60.767%)

The distribution given by the 2014 survey results does not sum to one, so it's difficult to determine whether Effective Altruism's membership actually went up, but if we take the numbers at face value it experienced an 11.13% increase in membership.

EACommunity

Do you participate in the Effective Altruism community?

Yes: 314 (18.427%)

No: 1390 (81.573%)

Same issue as the last question; taking the numbers at face value, community participation went up by 5.727%.

EADonations

Has Effective Altruism caused you to make donations you otherwise wouldn't?

Yes: 666 (39.269%)

No: 1030 (60.731%)

Wowza!

Effective Altruist Anxiety

EAAnxiety

Have you ever had any kind of moral anxiety over Effective Altruism?

Yes: 501 (29.6%)

Yes but only because I worry about everything: 184 (10.9%)

No: 1008 (59.5%)


There's an ongoing debate in Effective Altruism about what kind of rhetorical strategy is best for getting people on board and whether Effective Altruism is causing people significant moral anxiety.

It certainly appears to be. But is moral anxiety effective? Let's look:

Sample Size: 244
Average amount of money donated by people anxious about EA who aren't EAs: 257.5409836065574

Sample Size: 679
Average amount of money donated by people who aren't anxious about EA who aren't EAs: 479.7501384388807

Sample Size: 249
Average amount of money donated by EAs anxious about EA: 1841.5292369477913

Sample Size: 314
Average amount of money donated by EAs not anxious about EA: 1837.8248407643312

It seems fairly conclusive that anxiety is not a good way to get people to donate more than they already are, but is it a good way to get people to become Effective Altruists?

Sample Size: 1685
P(Effective Altruist): 0.3940652818991098
P(EA Anxiety): 0.29554896142433235
P(Effective Altruist | EA Anxiety): 0.5
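For reference, here is a sketch of how these conditionals could be recomputed from the survey's sqlite dump, in the style of the queries earlier in the post (the database filename is assumed; EAIdentity and EAAnxiety are the survey variables named above):

# Illustrative recomputation sketch; the filename is assumed, and the
# columns are the survey variables EAIdentity and EAAnxiety above.
import sqlite3

con = sqlite3.connect("2016_lw_survey.sqlite")

def count(where):
    return con.execute(f"select count(*) from data where {where}").fetchone()[0]

n = count("EAIdentity != '' and EAAnxiety != ''")
p_ea = count("EAIdentity = 'Yes'") / n
p_anx = count("EAAnxiety = 'Yes'") / n   # the plain 'Yes' option, as above
p_ea_given_anx = (count("EAIdentity = 'Yes' and EAAnxiety = 'Yes'")
                  / count("EAAnxiety = 'Yes'"))
print(p_ea, p_anx, p_ea_given_anx)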

Maybe. There is of course an argument to be made that sufficient good done by causing people anxiety outweighs feeding into people's scrupulosity, but it can be discussed after I get through explaining it on the phone to wealthy PR-conscious donors and telling the local all-kill shelter where I want my shipment of dead kittens.

EAOpinion

What's your overall opinion of Effective Altruism?

Positive: 809 (47.6%)

Mostly Positive: 535 (31.5%)

No strong opinion: 258 (15.2%)

Mostly Negative: 75 (4.4%)

Negative: 24 (1.4%)

EA appears to be doing a pretty good job of getting people to like them.

Interesting Tables

Charity Donations By Political Affilation
Affiliation Income Charity Contributions % Income Donated To Charity Total Survey Charity % Sample Size
Anarchist 1677900.0 72386.0 4.314% 3.004% 50
Communist 298700.0 19190.0 6.425% 0.796% 13
Conservative 1963000.04 62945.04 3.207% 2.612% 38
Futarchist 1497494.1099999999 166254.0 11.102% 6.899% 31
Left-Libertarian 9681635.613839999 416084.0 4.298% 17.266% 245
Libertarian 11698523.0 214101.0 1.83% 8.885% 190
Moderate 3225475.0 90518.0 2.806% 3.756% 67
Neoreactionary 1383976.0 30890.0 2.232% 1.282% 28
Objectivist 399000.0 1310.0 0.328% 0.054% 10
Other 3150618.0 85272.0 2.707% 3.539% 132
Pragmatist 5087007.609999999 266836.0 5.245% 11.073% 131
Progressive 8455500.440000001 368742.78 4.361% 15.302% 217
Social Democrat 8000266.54 218052.5 2.726% 9.049% 237
Socialist 2621693.66 78484.0 2.994% 3.257% 126


Number Of Effective Altruists In The Diaspora Communities
Community Count % In Community Sample Size
LessWrong 136 38.418% 354
LessWrong Meetups 109 50.463% 216
LessWrong Facebook Group 83 48.256% 172
LessWrong Slack 22 39.286% 56
SlateStarCodex 343 40.98% 837
Rationalist Tumblr 175 49.716% 352
Rationalist Facebook 89 58.94% 151
Rationalist Twitter 24 40.0% 60
Effective Altruism Hub 86 86.869% 99
Good Judgement(TM) Open 23 74.194% 31
PredictionBook 31 51.667% 60
Hacker News 91 35.968% 253
#lesswrong on freenode 19 24.675% 77
#slatestarcodex on freenode 9 24.324% 37
#chapelperilous on freenode 2 18.182% 11
/r/rational 117 42.545% 275
/r/HPMOR 110 47.414% 232
/r/SlateStarCodex 93 37.959% 245
One or more private 'rationalist' groups 91 47.15% 193


Effective Altruist Donations By Political Affiliation
Affiliation EA Income EA Charity Sample Size
Anarchist 761000.0 57500.0 18
Futarchist 559850.0 114830.0 15
Left-Libertarian 5332856.0 361975.0 112
Libertarian 2725390.0 114732.0 53
Moderate 583247.0 56495.0 22
Other 1428978.0 69950.0 49
Pragmatist 1442211.0 43780.0 43
Progressive 4004097.0 304337.78 107
Social Democrat 3423487.45 149199.0 93
Socialist 678360.0 34751.0 41

Why we may elect our new AI overlords

2 Deku-shrub 04 September 2016 01:07AM

In which I examine some of the latest developments in automated fact-checking and prediction markets for policies, and propose we get rich voting for robot politicians.

http://pirate.london/2016/09/why-we-may-elect-our-new-ai-overlords/

Paid research assistant position focusing on artificial intelligence and existential risk

7 crmflynn 02 May 2016 06:27PM

Yale Assistant Professor of Political Science Allan Dafoe is seeking Research Assistants for a project on the political dimensions of the existential risks posed by advanced artificial intelligence. The project will involve exploring issues related to grand strategy and international politics, reviewing possibilities for social scientific research in this area, and institution building. Familiarity with international relations, existential risk, Effective Altruism, and/or artificial intelligence are a plus but not necessary. The project is done in collaboration with the Future of Humanity Institute, located in the Faculty of Philosophy at the University of Oxford. There are additional career opportunities in this area, including in the coming academic year and in the future at Yale, Oxford, and elsewhere. If interested in the position, please email allan.dafoe@yale.edu with a copy of your CV, a writing sample, an unofficial copy of your transcript, and a short (200-500 word) statement of interest. Work can be done remotely, though being located in New Haven, CT or Oxford, UK is a plus.

[Link] Salon piece analyzing Donald Trump's appeal using rationality

-13 Gleb_Tsipursky 24 April 2016 04:36AM

I'm curious about your thoughts on my piece in Salon analyzing Trump's emotional appeal using rationality-informed ideas. My primary aim is using the Trump hook to get readers to consider the broader role of Systems 1 and 2 in politics, the backfire effect, wishful thinking, emotional intelligence, etc.

 

 

Suppose HBD is True

-12 OrphanWilde 21 April 2016 01:34PM

Suppose, for the purposes of argument, that HBD (human biodiversity) is true: that distinct populations of humans exist (I will avoid using the word "race" here insofar as possible) with substantial genetic variance that accounts for some difference in average intelligence from population to population; and that its proponents are correct in accusing the politicization of science of burying this information.

I seek to ask the more interesting question: Would it matter?

1. Societal Ramifications of HBD: Eugenics

So, we now have some kind of nice, tidy explanation for different characters among different groups of people.  Okay.  We have a theory.  It has explanatory power.  What can we do with it?

Unless you're willing to commit to eugenics of some kind (be it restricting reproduction or genetic alteration), not much of anything.  And even if you're willing to commit to eugenics, HBD doesn't add anything.  HBD doesn't actually change any of the arguments for eugenics: below-average people exist in every population group, and insofar as we regard below-average people as a problem, the genetic population they happen to belong to doesn't matter.  If the point is to raise the average, the population group doesn't matter.  If the point is to reduce the number of socially dependent individuals, the population group doesn't matter.

Worse, insofar as we use HBD as a determinant in eugenics, our eugenics are less effective.  HBD says your population group has a relationship with intelligence; but if we're interested in intelligence, we have no reason to look at your population group, because we can measure intelligence more directly.  There's no reason to use the proxy of population group if we're interested in intelligence, and indeed, every reason not to; it's significantly less accurate and politically and historically problematic.

Yet still worse for our eugenics advocate, insomuch as population groups do have significant genetic diversity, using population groups instead of direct measurements of intelligence is far more likely to cause disease transmission risks.  (Genetic diversity is very important for population-level disease resistance.  Just look at bananas.)

2. Social Ramifications of HBD: Social Assistance

Let's suppose we're not interested in eugenics.  Let's suppose we're interested in maximizing our societal outcomes.

Well, again, HBD doesn't offer us anything new.  We can already test intelligence, and insofar as HBD is accurate, intelligence tests are more accurate.  So if we aim to streamline society, we don't need HBD to do so.  HBD might offer an argument against affirmative action, in that we have different base expectations for different populations, but affirmative action already takes different base expectations into account (if you live in a city of 50% black people and 50% white people, but 10% of local lawyers are black, your local law firm isn't required to have 50% black lawyers, but 10%).  We might desire to adjust the way we engage in affirmative action, insofar as affirmative action might not lead to the best results, but if you're interested in the best results, you can argue on the basis of best results without needing HBD.

I have yet to encounter someone who argues HBD who also argues we should do something with regard to HELPING PEOPLE on the basis of this, but that might actually be a more significant argument: If there are populations of people who are going to fall behind, that might be a good argument to provide additional resources to these populations of people, particularly if there are geographic correspondences - that is, if HBD is true, and if population groups are geographically segregated, individuals in these population groups will suffer disproportionately relative to their merits, because they don't have the local geographic social capital that equal-advantage people of other population groups would have.  (An average person in a poor region will do worse than an average person in a rich region.)  So HBD provides an argument for desegregation.

Curiously, HBD advocates have a tendency to argue that segregation would lead to the best outcome.  I'd welcome arguments that concentrating an -absence- of social capital is a good idea.

3. Scientific Ramifications of HBD

Well, if HBD were true, it would mean science is politicized.  This might be news to somebody, I guess.

4. Political Ramifications of HBD

We live in a meritocracy.  It's actually not an ideal thing, contrary to the views of some people, because it results in a systematic merit segregation that has completely deprived the lower classes of intellectual resources; talk to older people sometime, who remember, when they worked in the coal mines (or whatever), the one guy who you could trust to be able to answer your questions and provide advice.  Our meritocracy has advanced to the point where we are systematically stripping everybody of value from the lower classes and redistributing them to the middle and upper classes.

HBD might be meaningful here.  Insofar as people take HBD to its absurd extremes, it might actually result in an -improvement- for some lower-class groups, because if we stop taking all the intelligent people out of poor areas, there will still be intelligent people in those poor areas.  But racism as a force of utilitarian good isn't something I care to explore in any great detail, mostly because if I'm wrong it would be a very bad thing, and also because none of its advocates actually suggest anything like this, being more interested in promoting segregation than desegregation.

It doesn't change much else, either.  With HBD we continually run into the same problem - as a theory, it's the product of measuring individual differences, and as a theory, it doesn't add anything to our information that we don't already have with the individual differences.

5. The Big Problem: Individuality

Which is the crucial fault with HBD, iterated multiple times here, in multiple ways: It literally doesn't matter if HBD is true.  All the information it -might- provide us with, we can get with much more accuracy using the same tests we might use to arrive at HBD.  Anything we might want to do with the idea, we can do -better- without it.

HBD might predict we get fewer IQ-115, IQ-130, and IQ-145 people from particular population groups, but it doesn't actually rule them out.  Insofar as this kind of information is useful, it's -more- useful to have more accurate information.  HBD doesn't say "Black people are stupid", instead it says "The average IQ of black people is slightly lower than the average IQ of white people".  But since "black people" isn't a thing that exists, but rather an abstract concept referring to a group of "black persons", and HBD doesn't make any predictions at the individual level we couldn't more accurately obtain through listening to a person speak for five seconds, it doesn't actually make any useful predictions.  It adds literally nothing to our model of the world.

It's not the most important idea of the century.  It's not important at all.

If you think it's true - okay.  What does it -add- to your understanding of the world?  What useful predictions does it make?  How does it permit you to improve society?  I've heard people insist it's this majorly important idea that the scientific and political establishment is suppressing.  I'd like to introduce you to the aether, another idea that had explanatory power but made no useful predictions, and which was abandoned - not because anybody thought it was wrong, but because it didn't even rise to the level of wrong, because it was useless.

And that's what HBD is.  A useless idea.

And even worse, it's a useless idea that's hopelessly politicized.

[Link] Op-Ed on Brussels Attacks

-6 Gleb_Tsipursky 02 April 2016 05:38PM

Trigger warning: politics is hard mode.


"How do you make America safer from terrorists" is the title of my op-ed published in the Sun Sentinel, a very prominent newspaper in Florida, one of the swingiest of the swing states in the US presidential election and the one with the most electoral votes. The maximum length of the op-ed was 450 words, and it was significantly edited by the editor, so it doesn't convey the full message I wanted with all the nuances, but such is life. My primary goal with the piece was to convey methods of thinking more rationally about politics, such as using probabilistic thinking, evaluating the full consequences of our actions, and avoiding attention bias. I used the proposal to heavily police Muslim neighborhoods as a case study. Hope this helps Floridians think more rationally and raises the sanity waterline regarding politics!

 

 

EDIT: To be totally clear, I used guesstimates for the numbers I suggested. Following Yvain/Scott Alexander's advice, I prefer to use guesstimates rather than vague statements.

Is altruistic deception really necessary? Social activism and the free market

3 PhilGoetz 26 February 2016 06:38AM

I've said before that social reform often seems to require lying.  Only one-sided narratives offering simple solutions motivate humans to act, so reformers manufacture one-sided narratives such as we find in Marxism or radical feminism, which inspire action through indignation.  Suppose you tell someone, "Here's an important problem, but it's difficult and complicated.  If we do X and Y, then after five years, I think we'd have a 40% chance of causing a 15% reduction in symptoms."  They'd probably think they had something better to do.

But the examples I used in that previous post were all arguably bad social reforms: Christianity, Russian communism, and Cuban communism.

The argument that people need to be deceived into social reform assumes either that they're stupid, or that there's some game-theoretic reason why social reform that's very worthwhile to society as a whole isn't worthwhile to any individual in society.

Is that true?  Or are people correct and justified in not making sudden changes until there's a clear problem and a clear solution to it?

continue reading »

The Art of Lawfare and Litigation strategy

-4 [deleted] 17 December 2015 02:34PM

Bertrand Russell, well aware of the health risks of smoking, defended his addiction in a videotaped interview. See if you can spot his fallacy!

Today on SBS (a radio channel in Australia) I heard reporters breaking the news that a Nature article reports that cancer is largely due to choices. I was shocked by what appeared to be gross violations of cultural norms around the blaming of victims. I wanted to investigate further, since science reporting is notoriously inaccurate.

The BBC reports:

Earlier this year, researchers sparked a debate after suggesting two-thirds of cancer types were down to luck rather than factors such as smoking.

The new study, in the journal Nature, used four approaches to conclude only 10-30% of cancers were down to the way the body naturally functions or "luck".

"They can't smoke and say it's bad luck if they have cancer."

-Dr Yusuf Hannun, the director of Stony Brook

-http://www.bbc.com/news/health-35111449

The BBC article is roughly concordant with the SBS report. 

I've had a fairly simple relationship with cigarettes. I've smoked others' cigarettes a few times, while drinking. I bought my first cigarette to try soon after I came of age and discarded the rest of the packet. One of my favourite memories is trying a vanilla-flavoured cigar; I still feel tempted to try it again whenever I smell a nice scent, or think about that moment. Though now I regularly reject offers to go to local venues and smoke hookah. Even after my first cigarette, I felt the tug of nicotine and tobacco. Then again, I'm unusually sensitive to even the mildest addictive substances, so that doesn't surprise me in retrospect. What does surprise me is that society is starting to take a ubiquitous but increasingly undeniable health issue seriously, despite its deep entanglement with long-standing ways of doing things, political ideologies, individual addictions, addiction-driven political behaviour, and shareholders' pockets.

Though the truth claim of the article isn't that surprising. The dangers of smoking are publicised everywhere. Emphasis mine:

13 die every day in Victoria as a result of smoking.

Tobacco use (which includes cigarettes, cigars, pipes, snuff, chewing tobacco) is the leading preventable cause of death and illness in our country. It causes more deaths annually than those killed by AIDS, alcohol, automobile accidents, murders, suicides, drugs and fires combined.

So I decided to learn more about the relationship between society and big tobacco, and between government and big tobacco, to see what people interested in influencing public policy and public health can learn (effective altruism policy analytics, take note!) about policy tractability in surprising places.

Here's what might make for tractable public policy for public health interventions

Proof of concept

Governments are great at successfully suing the shit out of tobacco. And, big tobacco takes it like a champ:

It started with US states experimenting with suing big tobacco; eventually only a couple of states hadn't done it. Big Tobacco and all those attorneys general gathered and arranged a huge settlement that resulted in the disestablishment of several shill research institutes supporting big tobacco, and big payouts to sponsor anti-smoking advocacy groups (which seems politically unethical but consequentially good; I suppose that's a different story). What's important to note here is the experimentation within US states culminating in the legitimacy of normative lawfare. It's called 'diffusion theory' and is described here.

Wait wait wait. I know what you're thinking, non-US LessWrongers: another US-centric analysis that isn't too transportable. No. I'm not American in any sense; it's just that the US seems to be a point of diffusion. What's happening regarding marijuana in the US now seems to mirror this in some sense, though it's ironically pro-smoking. That illustrates the cause-neutrality of this phenomenon.

That settlement wasn't the end of the lawfare:

On August 17, 2006, a U.S. district judge issued a landmark opinion in the government's case against Big Tobacco, finding that tobacco companies had violated civil racketeering laws and defrauded consumers by lying about the health risks of smoking.

In a 1,653 page ruling, the judge stated that the tobacco industry had deceived the American public by concealing the addictive nature of nicotine plus had targeted youth in order to get them hooked on cigarettes for life. (Appeals are still pending). 

Victims who ask for help

I also stumbled upon some smokers' attitudes to smoking and their, well, seemingly vexatious attitudes to big tobacco when looking up lawsuits against big tobacco. Here's a copy of the comments section on one website. It's really heartbreaking. It's a small sample size, but note their education too, suggesting a socio-economic effect. Note that these comments were posted publicly and are blatant cries for help. This suggests political will at a grassroots level that is as yet under-catered for by services and/or political action. That's a powerful thing, perhaps: visible need in public forums, addressed to those in the relevant space. Note that they commented on a class-action website.

http://s10.postimg.org/61h7b1rp5/099090.png

 

Note some of the language:

 

"I feel like I'm being tortured"

You don't see that kind of language used in any effective altruism branded publications.

Villains

Somewhat famous documents exposing the tobacco industry's internal motivations and dodginess seem to be quoted everywhere on websites documenting and justifying lawfare against the tobacco industry. Public health and the personal dangers of smoking don't seem to have been the big catalyst, but rather a villainous enemy. I'm reminded of how the Stop the Boats campaign villainised people smugglers instead of speaking of the potential to save the lives of refugees who fall overboard from shoddy vessels. I think of the Open Borders campaigners associated with GiveWell's Open Philanthropy Project, the perception of that project as just about the most intractable policy prospect around (I'd say a moratorium on AI research is up there), and, at the same time, the non-identification of a villain in the picture. That's not entirely unsurprising. I recall the hate I received when I suggested that people should consider prostituting themselves for effective altruism, or soliciting donations from the porn industry, whose donors struggle to donate since many charities, particularly religious ones, refuse to accept their money. Likewise, it's hard to get rid of encultured perceptions of what's good and what's bad, rather than enumerating (or checking, as Eliezer writes in the Sequences) the consequences.

Relative merit

This is something Effective Altruists are already doing.

William Savedoff and Albert Alwang recently identified taxes on tobacco as, “the single most cost-effective way to save lives in developing countries” (2015, p.1).

...

Tobacco control programs often pursue many of these aims at once. However, raising taxes appears to be particularly cost-effective — e.g., raising taxes costs $3 - $70 per DALY avoided(Savedoff and Alwang, p.5; Ranson et al. 2002, p.311) — so I will focus solely on taxes. I will also focus only on low and middle income countries (LMICs) because that is where the problem is worst and where taxes can do the most good most cost-effectively.

..

But current trends need not continue. We can prevent deaths from tobacco use. Tobacco taxation is a well-tested and effective means of decreasing the prevalence of smoking—it gets people to stop and prevents others from starting. The reason is that smokers are responsive to price increases, provided that the real price goes up enough.

...

Even if these numbers are off by a factor of 2 or 3, tobacco taxation appears to be on par with the most effective interventions identified by GiveWell and Giving What We Can. For example, GiveWell estimates that AMF can prevent a death for $3340 by providing bed nets to prevent malaria and estimates the cost of schistosomiasis deworming at $29 - $71 per DALY.

 

There are a few reasons to balk at recommending tobacco tax advocacy to those aiming to do the most good with their donations, time, and careers.

 

  • Tobacco taxes may not be a tractable issue
  • Tobacco taxes may be a “crowded” cause area
  • Unanswered questions about the empirical basis of cost-effectiveness estimates
  • There may not be a charity to donate to
...
Smoking is very harmful and very common.  Globally, 21% of people over 15 smoke (WHO GHO)

 

-https://www.givingwhatwecan.org/post/2015/09/tobacco-control-best-buy-developing-world/

 

Attributing public responsibility AND incentivising independently private interest in a cause


The Single Best Health Policy in the World: Tobacco Taxes

The single most cost-effective way to save lives in developing countries is in the hands of developing countries themselves: raising tobacco taxes. In fact, raising tobacco taxes is better than cost-effective. It saves lives while increasing revenues and saving poor households money when their members quit smoking.

-http://www.cgdev.org/publication/single-best-health-policy-world-tobacco-taxes)

 

Tobacco lawsuits can be hard to win but if you have been injured because of tobacco or smoking or secondary smoke exposure, you should contact an attorney as soon as possible.

  If you have lung cancer and are now, or were formerly, a smoker or used tobacco products, you may have a claim under the product liability laws. You should contact an experienced product liability attorney or a tobacco lawsuit attorney as soon as possible because a statute of limitations could apply. 

-http://smoking-tobacco.whocanisue.com/

There's a whole bunch of legal literature like this: http://heinonline.org/HOL/LandingPage?handle=hein.journals/clqv86&div=45&id=&page= that I don't have the background to search for and interpret. So if I'm missing important things, perhaps it's attributable to that. Point them out, please.

So that's my analysis: plausible modifiable variables that influence the tractability of the public health policy initiative: 

(1) Attributing public responsibility AND incentivising independently private interest in a cause

(2) Relative merit

(3) Villains

(4) Victims that ask for help

(5) Low scale proof of concept

Remember, lawfare isn't just the domain of governments. Here's an example of non-government lawfare for public health. Governments are just better resourced, often, than individuals, who need groups to advocate on their behalf. Perhaps that's a direction the Open Philanthropy Project could take.

I want to finish by soliciting an answer on the following question that is posed to smokers in a recurring survey by a tobacco control body:

Do you support or oppose the government suing tobacco companies to recover health care costs caused by tobacco use?

Now, there may be some 'reverse causation' at play here in why tobacco control has been so politically effective: BECAUSE it's such a good cause, it's a low-hanging fruit that's already being picked.

What's the case for or against this?


The case for its cause selection: Tobacco control


Importance: high


tobacco is the leading preventable cause of death and disease in both the world (see: http://www.who.int/nmh/publications/fact_sheet_tobacco_en.pdf) and Australia (see: http://www.cancer.org.au/policy-and-advocacy/position-statements/smoking-and-tobacco-control/)


‘Tobacco smoking causes 20% of cancer deaths in Australia, making it the highest individual cancer risk factor. Smoking is a known cause of 16 different cancer types and is the main cause of Australia’s deadliest cancer, lung cancer. Smoking is responsible for 88% of lung cancer deaths in men and 75% of lung cancer cases in women in Australia.’


Tractable: high


The World Health Organization’s Framework Convention on Tobacco Control (FCTC) was the first public health treaty ever negotiated.


Based on private information, the balance of healthcare costs against tax revenues according to health advocates, compared to treasury estimates, may have been relevant to Australia’s leadership in tobacco regulation. That submission may or may not be adequate in complexity (i.e. taking into account reduced lifespans' impact on reduced pension payouts, for instance). There is a good article about the behavioural economics of tobacco regulation here (http://baselinescenario.com/2011/03/22/incentives-dont-work/).



Room for advocacy: low


There are many hundreds of consumer support and advocacy groups, and cancer charities across Australia.


Room for employment: low?


Room for consulting: high

 

The rigour of analysis, and the achievements themselves, in the Cancer Council of Australia's annual review is underwhelming, as is the Cancer Council of Victoria's annual report. There is a better-organised body of evidence relating to their impact on their wiki pages about effective interventions and policy priorities. At a glance, there appears to be room for more quantitative, methodologically rigorous, and independent evaluation. I will be looking at GiveWell to see what recommendations can be translated. I will keep records of my findings to formulate draft guidelines for advising organisations in the Cancer Councils' position, which, from my vague memory of GiveWell's claims, I estimate to be the majority of the philanthropic space.

[Link] A rational response to the Paris attacks and ISIS

-1 Gleb_Tsipursky 23 November 2015 01:47AM

Here's my op-ed that uses long-term orientation, probabilistic thinking, numeracy, consider the alternative, reaching our actual goals, avoiding intuitive emotional reactions and attention bias, and other rationality techniques to suggest more rational responses to the Paris attacks and the ISIS threat. It's published in the Sunday edition of The Plain Dealer​, a major newspaper (16th in the US). This is part of my broader project, Intentional Insights, of conveying rational thinking, including about politics, to a broad audience to raise the sanity waterline.

Political Debiasing and the Political Bias Test

8 Stefan_Schubert 11 September 2015 07:04PM

Cross-posted from the EA forum. I asked for questions for this test here on LW about a year ago. Thanks to those who contributed.

Rationally, your political values shouldn't affect your factual beliefs. Nevertheless, that often happens. Many factual issues are politically controversial - typically because the true answer makes a certain political course of action more plausible - and on those issues, many partisans tend to disregard politically uncomfortable evidence.

This sort of political bias has been demonstrated in a large number of psychological studies. For instance, Yale professor Dan Kahan and his collaborators showed in a fascinating experiment that on politically controversial questions, people are quite likely to commit mathematical mistakes that help them retain their beliefs, but much less likely to commit mistakes that would force them to give up those beliefs. Examples like this abound in the literature.

Political bias is likely to be a major cause of misguided policies in democracies (even the main one, according to economist Bryan Caplan). If they don’t have any special reason not to, people without special knowledge defer to the scientific consensus on technical issues. Thus, they do not interfere with the experts, who normally get things right. On politically controversial issues, however, they often let their political bias win over science and evidence, which means they’ll end up with false beliefs. And in a democracy, voters holding systematically false beliefs more often than not translates into misguided policy.

Can we reduce this kind of political bias? I’m fairly hopeful. One reason for optimism is that debiasing generally seems to be possible to at least some extent. This optimism of mine was strengthened by participating in a CFAR workshop last year. Political bias seems not to be fundamentally different from other kinds of biases and should thus be reducible too. But obviously one could argue against this view of mine. I’m happy to discuss this issue further.

Another reason for optimism is that it seems that the level of political bias is actually lower today than it was historically. People are better at judging politically controversial issues in a detached, scientific way today than they were in, say, the 14th century. This shows that progress is possible. There seems to be no reason to believe it couldn’t continue.

A third reason for optimism is that there seems to be a strong norm against political bias. Few people are consciously and intentionally politically biased. Instead most people seem to believe themselves to be politically rational, and hold that as a very important value (or so I believe). They fail to see their own biases due to the bias blind spot.

Thus if you could somehow make it salient to people that they are biased, they would actually want to change. And if others saw how biased they are, the incentives to debias would be even stronger.

There are many ways in which you could make political bias salient. For instance, you could meticulously go through political debaters’ arguments and point out fallacies, like I have done on my blog. I will post more about that later. Here I want to focus on another method, however, namely a political bias test which I have constructed with ClearerThinking, run by EA member Spencer Greenberg. Since learning how the test works might make you answer a bit differently, I will not explain how the test works here, but instead refer either to the explanatory sections of the test, or to Jess Whittlestone’s (also an EA member) Vox.com article.

Our hope is of course that people taking the test might start thinking more both about their own biases, and about the problem of political bias in general. We want this important topic to be discussed more. Our test is produced for the American market, but hopefully, it could work as a generic template for bias tests in other countries (akin to the Political Compass or Voting Advice Applications).

Here is a guide for making new bias tests (where the main criticisms of our test are also discussed). Also, we hope that the test could inspire academic psychologists and political scientists to construct full-blown scientific political bias tests.

This does not mean, however, that we think that such bias tests in themselves will get rid of the problem of political bias. We need to attack the problem of political bias from many other angles as well.

Pro-Con-lists of arguments and onesidedness points

3 Stefan_Schubert 21 August 2015 02:15PM

Follow-up to Reverse Engineering of Belief Structures

Pro-con-lists of arguments such as ProCon.org and BalancedPolitics.org fill a useful purpose. They give an overview of complex debates, and arguably foster nuance. My network for evidence-based policy is currently in the process of constructing a similar site in Swedish.

 

I'm thinking it might be interesting to add more features to such a site. You could let people create a profile on the site. Then you would let them fill in whether they agree or disagree with the theses under discussion (cannabis legalization, GM foods legalization, etc), and also whether they agree or disagree with the different arguments for and against these theses (alternatively, you could let them rate the arguments from 1-5).

Once you have this data, you could use it to give people different kinds of statistics. The most straightforward statistic would be their degree of "onesidedness". If you think that all of the arguments for the theses you believe in are good, and all the arguments against them are bad, then you're defined as onesided. If you, on the other hand, believe that some of your own side's arguments are bad, whereas some of the opponents' arguments are good, you're defined as not being onesided. (The exact mathematical function you would choose could be discussed; one possible choice is sketched below.)
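
To make that concrete, here is a minimal sketch of one such function in Python. It is only an illustration: the 1-5 rating scale, the function name, and the normalization are all assumptions, since the exact function is deliberately left open above.

```python
# A minimal sketch of one possible onesidedness score; the 1-5 scale and
# the normalization are assumptions, since the exact function is left open.

def onesidedness(supports_thesis: bool,
                 pro_ratings: list[float],
                 con_ratings: list[float]) -> float:
    """Score in [-1, 1]: 1 = maximally onesided, 0 = even-handed,
    negative = you rate the other side's arguments above your own side's."""
    own = pro_ratings if supports_thesis else con_ratings
    other = con_ratings if supports_thesis else pro_ratings
    if not own or not other:
        raise ValueError("need at least one rating per side")
    gap = sum(own) / len(own) - sum(other) / len(other)
    return gap / 4.0  # 4 = largest possible gap in means on a 1-5 scale

# A supporter who rates every pro argument 5 and every con argument 1:
print(onesidedness(True, [5, 5, 5], [1, 1, 1]))  # 1.0
# A supporter who concedes that some con arguments are good:
print(onesidedness(True, [4, 2, 5], [4, 3, 2]))  # ~0.17
```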

Once you've told people how one-sided they are, according to the test, you would discuss what might explain onesidedness. My hunch is that the most plausible explanation normally is different kinds of bias. Instead of reviewing new arguments impartially, people treat arguments for their views more leniently than arguments against their views. Hence they end up being onesided, according to the test.

There are other possible explanations, though. One is that all of the arguments against the thesis in question actually are bad. That might happen occasionally, but I don't think that's very common. As Eliezer Yudkowsky says in "Policy Debates Should Not Appear One-sided":

On questions of simple fact (for example, whether Earthly life arose by natural selection) there's a legitimate expectation that the argument should be a one-sided battle; the facts themselves are either one way or another, and the so-called "balance of evidence" should reflect this.  Indeed, under the Bayesian definition of evidence, "strong evidence" is just that sort of evidence which we only expect to find on one side of an argument.

But there is no reason for complex actions with many consequences to exhibit this onesidedness property.  

Instead, the reason why people end up with one-sided beliefs is bias, Yudkowsky argues:

Why do people seem to want their policy debates to be one-sided?

Politics is the mind-killer.  Arguments are soldiers.  Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it's like stabbing your soldiers in the back.  If you abide within that pattern, policy debates will also appear one-sided to you—the costs and drawbacks of your favored policy are enemy soldiers, to be attacked by any means necessary.

Especially if you're consistently one-sided in lots of different debates, it's hard to see that any other hypothesis besides bias is plausible. It depends a bit on what kinds of arguments you include in the list, though. In our lists we haven't really checked the quality of the arguments (our purpose is to summarize the debate, rather than to judge it), but you could also do that, of course.

My hope is that such a test would make people more aware both of their own biases, and of the problem of political bias in general. I'm thinking that is the first step towards debiasing. I've also constructed a political bias test with similar methods and purposes together with ClearerThinking, which should be released soon.

 

You could also add other features to a pro-con-list. For instance, you could classify arguments in different ways: ad hominem arguments, consequentialist arguments, rights-based arguments, etc. (Some arguments might be hard to classify, and then you just wouldn't do that. You wouldn't necessarily have to classify every argument.) Using this info, you could give people a profile: e.g., what kinds of arguments do they find most persuasive? That could make them reflect more on what kinds of arguments really are valid.

You could also combine these two features. For instance, some people might accept ad hominem arguments when they support their views, but not when they contradict them. That would make your use of ad hominem arguments onesided.

 

Yet another feature that could be added is a standard political compass. Since people fill in what theses they believe in (cannabis legalization, GM foods legalization, etc), you could calculate which party is closest to them, based on the parties' stances on these issues (see the sketch below). That could potentially make the test more attractive to take.
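
A minimal sketch of that matching step, with hypothetical parties, issues and scoring rule, since the post doesn't specify a method: represent each stance as +1 (for), -1 (against) or 0 (no position), and pick the party with the highest agreement across issues.

```python
# A minimal sketch of the "closest party" computation; the parties, issues
# and scoring rule are all illustrative assumptions.

PARTIES = {
    "Party A": {"cannabis legalization": +1, "GM foods legalization": +1},
    "Party B": {"cannabis legalization": -1, "GM foods legalization": +1},
}

def closest_party(user: dict[str, int]) -> str:
    """Stances are +1 (for), -1 (against), 0 (no position); the closest
    party is the one whose stances agree most with the user's."""
    def agreement(stances: dict[str, int]) -> int:
        return sum(user.get(issue, 0) * stance
                   for issue, stance in stances.items())
    return max(PARTIES, key=lambda p: agreement(PARTIES[p]))

print(closest_party({"cannabis legalization": +1, "GM foods legalization": -1}))
# -> "Party A" (agreement score 0, versus -2 for Party B)
```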

 

Suggestions of more possible features are welcome, as well as general comments - especially about implementation.

[POLITICS] Jihadism and a new kind of existential threat

-5 MrMind 25 March 2015 09:37AM

Politics is the mind-killer. Politics IS really the mind-killer. Please meditate on this until politics flows over you like butter on hot teflon, and your neurons stop fibrillating and resume their normal operations.

Preface

I've always found it silly that LW, one of the best and most focused groups of rationalists on the web, isn't able to talk evenly about politics. It's true that we are still human, but can't we just make an effort at being calm and level-headed? I think we can. Does gradual exposure work on groups, too? Maybe a little bit of effort combined with a little bit of exposure will work as a vaccine.
And maybe tomorrow a beautiful naked valkyrie will bring me to utopia on her flying unicorn...
Anyway, I want to try. Let's see what happens.

Intro

Two recent events have prompted me to make this post: I'm reading "The Rise of the Islamic State" by Patrick Cockburn, which I think does a good job of presenting fairly the very recent history surrounding ISIS, and the terrorist attack in Tunis by the same group, which resulted in 18 foreigners killed.
I believe that their presence in the region is now definitive: they control an area that is wider than Great Britain, with a population tallying over six million, not counting the territories controlled by affiliate groups like Boko Haram. Their influence is also expanding, and the attack in Tunis shows that this entity is not going to stay confined within the borders of Syria and Iraq.
It may well be the case that in the next ten years or so, this will be an international entity which will bring ideas and mores predating the Middle Ages back to the Mediterranean Sea.

A new kind of existential threat

To a mildly rational person, the conflict fueling the rise of the Islamic State, namely the doctrinal differences between Sunni and Shia Islam, is the worst kind of Blue/Green division: a separation that causes hundreds of billions of dollars (read that again) to be wasted trying to kill each other. But here it is, and the world must deal with it.
In comparison, Democrats and Republicans are so close that they could be mistaken for Aumann agreeing.
I fear that ISIS is bringing a new kind of existential threat: one where it is not the existence of humankind that is at risk, but the existence of the idea of rationality.
The funny thing is that while people can be extremely irrational, they can still work on technology to discover new things. Fundamentalism has never stopped a country from achieving technological progress: think about the wonderful skyscrapers and green patches in the desert of the Arab Emirates, or the nuclear weapons of Pakistan. So it might well be the case that in the future some scientist will start a seed AI believing that Allah will guide it to evolve in the best way. But it also might be that in the future, African, Asian and maybe European (gasp!) rationalists will be hunted down and killed like rats.
It might be the very meme of rationality that is erased from existence.

Questions

I'll close with a bunch of questions, both strictly and loosely related. Mainly, I'm asking you to refrain from proposing a solution. Let's assess the situation first.

  • Do you think that the Islamic State is an entity which will vanish in the future or not?
  • Do you think that their particularly violent brand of jihadism is a worse menace to the sanity waterline than, say, other kinds of religious movements, past or present?
  • Do you buy the idea that fundamentalism can be coupled with technological advancement, so that the future will present us with Islamic AIs?
  • Do you think that the very same idea of rationality can be the subject of existential risk?
  • What do Neoreactionaries think of the Islamic State? After all, it's an exemplar case of the reactionaries in those areas winning big. I know it's only a surface comparison; I'm sincerely curious about what an NR thinks of the situation.

Live long and prosper.

A bit of word-dissolving in political discussion

2 [deleted] 07 December 2014 05:05PM

I found Scott Alexander's steelmanning of the NRx critique to be an interesting, even persuasive critique of modern progressivism, having not been exposed to this movement prior to today. However, I am also equally confused at the jump from "modern liberal democracies are flawed" to "restore the divine right of kings!" I've always hated the quip "democracy is the worst form of government, except for all the others" (that we've yet tried), but I think it applies here.

-- Mark Friedenbach

Of course, with the prompting to state my own thoughts, I simply had to go and start typing them out.  The following contains obvious traces of my own political leanings and philosophy (in short summary: if "Cthulhu only swims left", then I AM CTHULHU... at least until someone explains to me what a Great Old One is doing out of R'lyeh and in West Coast-flavored American politics), but those traces should be taken as evidence of what I believe rather than statements about it.

Because what I was actually trying to talk about is rationality in politics.  Because in fact, while it is hard, while it is spiders, all the normal techniques work on it.  There is only one real Cardinal Sin of Attempting to be Rational in Politics, and it is the following argument, stated in generic form that I might capture it from the ether and bury it: "You only believe what you believe for political reasons!"  It does not matter if those "reasons" are signaling, privilege, hegemony, or having an invisible devil on your shoulder whispering into your bloody ear: to impugn someone else's epistemology entirely at the meta-level without saying a thing against their object-level claims is anti-epistemology.

Now, on to the ranting!  The following are more-or-less a semi-random collection of tips I vomited out for trying to deal with politics rationally.  I hope they help.  This is a Discussion post because Mark said that might be a good idea.

  1. Dissolve "democracy", and not just in the philosophical sense, but in the sense that there have been many different kinds of actually existing democracies.  There are always multiple object-level implementations of any meta-level idea, and most political ideas are sufficiently abstract to count as meta-level.  Even if, for purposes of a thought experiment, you find yourself saying, "I WILL ONLY EVER CONSIDER SYSTEMS THAT COUNT AS DEMOCRACY ACCORDING TO MY INTUITIVE DEMOCRACY-P() PREDICATE!", one can easily debate whether a mixed-member proportional Parliament performs better than a district-based bicameral Congress, or whether a pure Westminster system beats them both, or whether a Presidential system works better, or whatever.  Particular institutional designs yield particular institutional behaviors, and successfully inducing complex generalizations across large categories of institutional designs requires large amounts of evidence -- just as it does in any other form of hierarchical probabilistic reasoning.
  2. Dissolve words like "democracy", "capitalism", "socialism", and "government" in the philosophical sense, and ask: what are the terminal goals democracy serves?  How much do we support those goals, and how much do current democratic systems suffer approximation error by forcing our terminal goals to fit inside the hypothesis space our actual institutions instantiate?  For however much we do support those goals, why do we shape these particular institutions to serve those goals, and not other institutions? For all values of X, mah nishtana ha-X hazeh mikol ha-X-im? ("why is this X different from all other Xs?") is a fundamental question of correct reasoning.  (Asking the question of why we instantiate particular institutions in particular places, when one believes in democratic states, is the core issue of democratic socialism, and I would indeed count myself a democratic socialist.  But you get different answers and inferences if you ask about schools or churches, don't you?)
  3. Learn first to explicitly identify yourself with a political "tribe", and next to consider political ideas individually, as questions of fact and value subject to investigation via epistemology and moral epistemology, rather than treating politics as "tribal".  Tribalism is the mind-killer: keeping your own explicit tribal identification in mind helps you notice when you're being tribalist, and helps you distinguish your own tribe's customs from universal truths -- both aids to your political rationality.  And yes, while politics has always been at least a little tribal, the particular form the tribes take varies through time and space: the division of society into a "blue tribe" and a "red tribe" (as oft-described by Yvain on Slate Star Codex), for example, is peculiar to late-20th-century and early-21st-century USA.  Those colors didn't even come into usage until the 2000 Presidential election, and hadn't firmly solidified as describing seemingly separate nationalities until 2004!  Other countries, and other times, have significantly different arrangements of tribes, so if you don't learn to distinguish between ideas and tribes, you'll not only fail at political rationality, you'll give yourself severe culture shock the first time you go abroad.
    1. General rule: you often think things are general rules of the world not because you have the large amount of evidence necessary to reason that they really are, but because you've seen so few alternatives that your subjective distribution over models contains only one or two models, both coarse-grained.  Unquestioned assumptions always feel like universal truths from the inside!
  4. Learn to check political ideas by looking at the actually-existing implementations, including the ones you currently oppose -- think of yourself as bloody Sauron if you have to!  This works, since most political ideas are not particularly original.  Commons trusts exist, for example, the "movement" supporting them just wants to scale them up to cover all society's important common assets rather than just tracts of land donated by philanthropists.  Universal health care exists in many countries.  Monarchy and dictatorship exist in many countries.  Religious rule exists in many countries.  Free tertiary education exists in some countries, and has previously existed in more.  Non-free but subsidized tertiary education exists in many countries.  Running the state off oil revenue has been tried in many countries.  Centrally-planned economies have been tried in many countries.  And it's damn well easier to compare "Canadian health-care" to "American health-care" to "Chinese health-care", all sampled in 2014, using fact-based policy studies, than to argue about the Visions of Human Life represented by each (the welfare state, the Company Man, and the Lone Fox, let's say) -- which of course assumes consequentialism.  In fact, I should issue a much stronger warning here: argumentation is an utterly unreliable guide to truth compared to data, and all these meta-level political conclusions require vast amounts of object-level data to induce correct causal models of the world that allow for proper planning and policy.
    1. This means that while the Soviet Union is not evidence for the total failure of "socialism" as I use the word, that's because I define socialism as a larger category of possible economies that strictly contains centralized state planning -- centralized state planning really was, by and large, a total fucking failure.  But there's a rationality lesson here: in politics, all opponents of an idea will have their own definition for it, but the supporters will only have one.  Learn to identify political terminology with the definitions advanced by supporters: these definitions might contain applause lights, but at least they pick out one single spot in policy-space or society-space (or, hopefully, a reasonably small subset of that space), while opponents don't generally agree on which precise point in policy-space or society-space they're actually attacking (because they're all opposed for their own reasons and thus not coordinating with each-other).
    2. This also means that if someone wants to talk about monarchies that rule by religious right, or even about absolute monarchies in general, they do have to account for the behavior of the Arab monarchies today, for example.  Or if they want to talk about religious rule in general (which very few do, to my knowledge, but hey, let's go with it), they actually do have to account for the behavior of Da3esh/ISIS.  Of course, they might do so by endorsing such regimes, just as some members of Western Communist Parties endorsed the Soviet Union -- and this can happen by lack of knowledge, by failure of rationality, or by difference of goals.
    3. And then of course, there are the complications of the real world: in the real world, neither perfect steelman-level central planning nor perfect steelman-level markets have ever been implemented, anywhere, with the result that once upon a time, the Soviet economy was allocatively efficient and prices in capitalist West Germany were just as bad at reflecting relative scarcities as those in centrally-planned East Germany.  The real advantage of market systems has ended up being the autonomy of firms, not allocative optimality (and that's being argued, right there, in the single most left-wing magazine I know of!).  Which leads us to repeat the warning: correct conclusions are induced from real-world data, not argued from a priori principles that usually turn out to be wildly mis-emphasized if not entirely wrong.
  5. Learn to notice when otherwise uninformed people are adopting political ideas as attire to gain status by joining a fashionable cause.  Keep in mind that what constitutes "fashionable" depends on the joiner's own place in society, not on your opinions about them.  For some people, things you and I find low-status (certain clothes or haircuts) are, in fact, high-status.  See Yvain's "Republicans are Douchebags" post for an example in a Western context: names that the American Red Tribe considers solid and respectable are viewed by the American Blue Tribe as "douchebag names".
  6. A heuristic that tends to immunize against certain failures of political rationality: if an argument does not base itself at all in facts external to itself or to the listener, but instead concentrates entirely on reinterpreting evidence, then it is probably either an argument about definitions, or sheer nonsense.  This is related to my comments on hierarchical reasoning above, and also to the general sense in which trying to refute an object-level claim by meta-level argumentation is not even wrong, but in fact anti-epistemology.
  7. A further heuristic, usable on actual electioneering campaigns the world over: whenever someone says "values", he is lying, and you should reach for your gun.  The word "values" is the single most overused, drained, meaningless word in politics.  It is a normative pronoun: it directs the listener to fill in warm fuzzy things here without concentrating the speaker and the listener on the same point in policy-space at all.  All over the world, politicians routinely seek power on phrases like "I have values", or "My opponent has no values", or "our values" or "our $TRIBE values", or "$APPLAUSE_LIGHT values".  Just cross those phrases and their entire containing sentences out with a big black marker, and then see what the speaker is actually saying.  Sometimes, if you're lucky (ie: voting for a Democrat), they're saying absolutely nothing.  Often, however, the word "values" means, "Good thing I'm here to tell you that you want this brand new oppressive/exploitative power elite, since you didn't even know!"
  8. As mentioned above, be very, very sure about what ethical framework you're working within before having a political discussion.  A consequentialist and a virtue-ethicist will often take completely different policy positions on, say, healthcare, and have absolutely nothing to talk about with each-other.  The consequentialist can point out the utilitarian gains of universal single-payer care, and the virtue-ethicist can point out the incentive structure of corporate-sponsored group plans for promoting hard work and loyalty to employers, but they are fundamentally talking past each-other.
    1. Often, the core matter of politics is how to trade off between ethical ideals that are otherwise left talking past each-other, because society has finite material resources, human morals are very complex, and real policies have unintended consequences.  For example, if we enact Victorian-style "poor laws" that penalize poverty for virtue-ethical reasons, the proponents of those laws need to be held accountable for accepting the unintended consequences of those laws, including higher crime rates, a less educated workforce, etc.  (This is a broad point in favor of consequentialism: a rational consequentialist always considers consequences, intended and unintended, or he fails at consequentialism.  A deontologist or virtue-ethicist, on the other hand, has license from his own ethics algorithm to not care about unintended consequences at all, provided the rules get followed or the rules or rulers are virtuous.)
  9. Almost all policies can be enacted more effectively with state power, and almost no policies can "take over the world" by sheer superiority of the idea all by themselves.  Demanding that a successful policy should "take over the world" by itself, as everyone naturally turns to the One True Path, is intellectually dishonest, and so is demanding that a policy should be maximally effective in miniature (when tried without the state, or in a small state, or in a weak state) before it is justified for the state to experiment with it.  Remember: the overwhelming majority of journals and conferences in professional science still employ frequentist statistics rather than Bayesianism, and this is 20 years after the PC revolution and the World Wide Web, and 40 years after computers became widespread in universities.  Human beings are utility-satisficing, adaptation-executing creatures with mostly-unknown utility functions: expecting them to adopt more effective policies quickly by mere effectiveness of the policy is downright unrealistic.
  10. The Appeal to Preconceptions is probably the single Darkest form of Dark Arts, and it's used everywhere in politics.  When someone says something to you that "stands to reason" or "sounds right", which genuinely seems quite plausible, actually, but without actually providing evidence, you need to interrogate your own beliefs and find the Equivalent Sample Size of the informative prior generating that subjective plausibility before you let yourself get talked into anything.  This applies triply in philosophy.

 

Is arguing worth it? If so, when and when not? Also, how do I become less arrogant?

9 27chaos 27 November 2014 09:28PM

I've had several political arguments about That Which Must Not Be Named in the past few days with people of a wide variety of... strong opinions. I'm rather doubtful I've changed anyone's mind about anything, but I've spent a lot of time trying to do so. I also seem to have offended one person I know rather severely. Also, even if I have managed to change someone's mind about something through argument, it feels as though someone will end up having to argue with them later down the line when the next controversy happens.

It's very discouraging to feel this way. It is frustrating when making an argument is taken as a reason for personal attack. And it's annoying to me to feel like I'm being forced into something by the disapproval of others. I'm tempted to just retreat from democratic engagement entirely. But there are disadvantages to this, for example it makes it easier to maintain irrational beliefs if you never talk to people who disagree with you.

I think a big part of the problem is that I have an irrational alief that makes me feel like my opinions are uniquely valuable and important to share with others. I do think I'm smarter, more moderate, and more creative than most. But the feeling's magnitude and influence over my behavior is far greater than what's justified by the facts.

How do I destroy this feeling? Indulging it satisfies some competitive urges of mine and boosts my self-esteem. But I think it's bad overall despite this, because it makes evaluating the social consequences of my choices more difficult. It's like a small addiction, and I have no idea how to get over it.

Does anyone else here have an opinion on any of this? Advice from your own lives, perhaps?

Three methods of attaining change

7 Stefan_Schubert 16 August 2014 03:38PM

Say that you want to change some social or political institution: the educational system, the monetary system, research on AGI safety, or what not. When trying to reach this goal, you may use one of the following broad strategies (or some combination of them):

1) You may directly try to lobby (i.e. influence) politicians to implement this change, or try to influence voters to vote for parties that promise to implement these changes. 

2) You may try to build an alternative system and hope that it eventually becomes so popular that it replaces the existing system.

3) You may try to develop tools that a) appeal to users of existing systems and b) whose widespread use is bound to change those existing systems.

Let me give some examples of what I mean. Trying to persuade politicians that we should replace conventional currencies with a private currency or, for that matter, starting a pro-Bitcoin party, falls under 1), whereas starting a private currency and hoping that it spreads falls under 2). (This post was inspired by a great comment by Gunnar Zarncke on precisely this topic. I take it that he was there talking of strategy 2.) Similarly, trying to lobby politicians to reform academia falls under 1), whereas starting new research institutions which use new and hopefully more effective methods falls under 2). I take it that this is what, e.g., Leverage Research is trying to do, in part. Similarly, libertarians who vote for Ron Paul are taking the first course, while at least one possible motivation for the Seasteading Institute is to construct an alternative system that proves to be more efficient than existing governments.

Efficient Voting Advice Applications (VAAs), which advise you how to vote on the basis of your views on different policy matters, can be an example of 3) (they are discussed here). Suppose that voters started to use them on a grand scale. This could potentially force politicians to adhere very closely to the views of the voters on each particular issue, since a politician who failed to do this would stand little chance of winning. This may or may not be a good thing, but the point is that it would be a change that would not be caused by lobbying of politicians or by building an alternative system, but simply by constructing a tool whose widespread use could change the existing system.

Another similar tool is reputation or user review systems. Suppose that you're dissatisfied with the general standards of some institution: say university education, medical care, or what not. You may address this by lobbying politicians to implement new regulations intended to ensure quality (1), or by starting your own, superior, universities or hospitals (2), hoping that others will follow. Another method is, however, to create a reliable reputation/review system which, if it became widely used, would guide students and patients to the best universities and hospitals, thereby incentivizing institutions to improve.

Now of course, when you're trying to get people to use such review systems, you are, in effect, building an evaluation system that competes with existing systems (e.g. the Guardian university ranking), so on one level you are using the second strategy. Your ultimate goal is, however, to create better universities, to which a better evaluation system is just a means (a tool). Hence you're following the third strategy here, in my terms.

Strategy 1) is of course a "statist" one, since what you're doing here is trying to get the government to change the institution in question for you. Strategies 2) and 3) are, in contrast, both "non-statist", since when you use them you're not directly trying to implement the change through the political system. Hence libertarians and other anti-statists should prefer them.

My hunch is that when people are trying to change things, many of them unthinkingly go for 1), even regarding issues where it is unlikely that they are going to succeed that way. (For instance, it seems to me that advocates for direct democracy who try to persuade voters to vote for direct democratic parties are unlikely to succeed, but that widespread use of VAAs might get us considerably closer to their ideal, and that they therefore should opt for the third strategy.) A plausible explanation of this is availability bias; our tendency to focus on what we most often see around us. Attempts to change social institutions through politics get a lot of attention, which makes people think of this strategy first. Even though this strategy is often efficient, I'd guess it is, for this reason, generally overused and that people sometimes instead should go for 2) or 3). (Possibly, Europeans have an even stronger bias in favour of this strategy than Americans.)

I also suspect, though, that people go for 2) a bit too often relative to 3). I think that people find it appealing, for its own sake, to create an entirely alternative structure. If you're a perfectionist, it might be satisfying to see what you consider "the perfect institution", even if it is very small and has little impact on society. Also, sometimes small groups of devotees flock to these alternatives, and a strong group identity is therefore created. Moreover, I think that availability bias may play a role here, also. Even though this sort of strategy gets less attention than lobbying, most people know what it is. It is quite clear what it means to do something like this, and being part of a project like this therefore gives you a clear identity. For these reasons, I think that we might sometimes fool ourselves into believing that these alternative structures are more likely to be successful than they actually are.

Conversely, people might be biased against the third strategy because it's less obvious. Also, it has perhaps something vaguely manipulative about it, which might bias idealistic people against it. What you're typically trying to do is to get people to use a tool (say VAAs), a side-effect of which is the change you wish to attain (in this case, correspondence between voters' views and actual policies). I don't think that this kind of manipulation is necessarily vicious (though it would need to be discussed on a case-by-case basis), but the point is that people tend to think that it is. Also, even those who don't think that it is manipulative in an unethical sense would still think that it is somehow "unheroic". Starting your own environmental party or creating your own artificial libertarian island clearly has something heroic about it, but developing efficient VAAs, which as a side-effect change the political landscape, does not.

I'd thus argue that people should start looking more closely at the third strategy. A group that does use a strategy similar to this is, of course, for-profit companies. They try to analyze what products would appeal to people, and in so doing, carefully consider how existing institutions shape people's preferences. For instance, companies like Uber, AirBnB and LinkedIn have been successful because they realized that, given the structure of the taxi, hotel and recruitment businesses, their products would be appealing.

Of course, these companies' primary goal, profit, is very different from the political goals I'm talking about here. At the same time, I think it is useful to compare the two cases. I think that generally, when we're trying to attain political change, we're not "actually trying" (in CFAR's terminology) as hard as we do when we're trying to maximize profit. It is very easy to fall into a mode where you're focusing on making symbolic gestures (which express your identity) rather than on trying to change things in politics. (This is, in effect, what many traditional charities are doing, if the EA movement is right.)

Instead, we should think as hard as profit-maximizing companies do about what new tools are likely to catch on. Any kind of tool could in principle be used, but the ones that seem most obvious are various kinds of social media and other internet-based tools (such as those mentioned in this post). Technical progress gives us enormous opportunities to construct new tools that could re-shape people's behaviour in a way that would impact existing social and political institutions on a large scale.

Developing such tools is not easy. Even very successful companies again and again fail to predict what new products will appeal to people. Not least, you need a profound understanding of human psychology in order to succeed. That said, political organizations have certain advantages vis-à-vis for-profit companies. More often than not, they can develop ideas publicly, whereas for-profit companies often have to keep theirs secret until the product is launched. This facilitates wisdom-of-the-crowd reasoning, where many different kinds of people come up with solutions together. Such methods can, in my opinion, be very powerful.

 

Any input regarding, e.g., the taxonomy of methods, my speculations about biases, and, in particular, examples of institution-changing tools is welcome. I'm also interested in comments on efficient methods for coming up with useful tools (e.g. tests of them). Finally, if anything's unclear I'd be happy to provide clarifications (it's a very complex topic).

Every Paul needs a Jesus

9 PhilGoetz 10 August 2014 07:13PM

My take on some historical religious/social/political movements:

  • Jesus taught a radical and highly impractical doctrine of love and disregard for one's own welfare. Paul took control of much of the church that Jesus' charisma had built, and reworked this into something that could function in a real community, re-emphasizing the social mores and connections that Jesus had spent so much effort denigrating, and converting Jesus' emphasis on radical social action into an emphasis on theology and salvation.
  • Marx taught a radical and highly impractical theory of how workers could take over the means of production and create a state-free Utopia. Lenin and Stalin took control of the organizations built around those theories, and reworked them into a strong, centrally-controlled state.
  • Che Guevara (I'm ignorant here and relying on Wikipedia; forgive me) joined Castro's rebel group early on, rose to the position of second in command, was largely responsible for the military success of the revolution, and had great motivating influence due to his charisma and his unyielding, idealistic, impractical ideas. It turned out his idealism prevented him from effectively running government institutions, so he had to go looking for other revolutions to fight in while Castro ran Cuba.

The best strategy for complex social movements is not honest rationality, because rational, practical approaches don't generate enthusiasm. A radical social movement needs one charismatic radical who enunciates appealing, impractical ideas, and another figure who can appropriate all of the energy and devotion generated by the first figure's idealism, yet not be held to their impractical ideals. It's a two-step process that is almost necessary, to protect the pretty ideals that generate popular enthusiasm from the grit and grease of institution and government. Someone needs to do a bait-and-switch. Either the original vision must be appropriated and bent to a different purpose by someone practical, or the original visionary must be dishonest or self-deceiving.


Politics is hard mode

28 RobbBB 21 July 2014 10:14PM

Summary: I don't think 'politics is the mind-killer' works well rhetorically. I suggest 'politics is hard mode' instead.


 

Some people in and catawampus to the LessWrong community have objected to "politics is the mind-killer" as a framing (/ slogan / taunt). Miri Mogilevsky explained on Facebook:

My usual first objection is that it seems odd to single politics out as a “mind-killer” when there’s plenty of evidence that tribalism happens everywhere. Recently, there has been a whole kerfuffle within the field of psychology about replication of studies. Of course, some key studies have failed to replicate, leading to accusations of “bullying” and “witch-hunts” and what have you. Some of the people involved have since walked their language back, but it was still a rather concerning demonstration of mind-killing in action. People took “sides,” people became upset at people based on their “sides” rather than their actual opinions or behavior, and so on.

Unless this article refers specifically to electoral politics and Democrats and Republicans and things (not clear from the wording), “politics” is such a frightfully broad category of human experience that writing it off entirely as a mind-killer that cannot be discussed or else all rationality flies out the window effectively prohibits a large number of important issues from being discussed, by the very people who can, in theory, be counted upon to discuss them better than most. Is it “politics” for me to talk about my experience as a woman in gatherings that are predominantly composed of men? Many would say it is. But I’m sure that these groups of men stand to gain from hearing about my experiences, since some of them are concerned that so few women attend their events.

In this article, Eliezer notes, “Politics is an important domain to which we should individually apply our rationality — but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.” But that means that we all have to individually, privately apply rationality to politics without consulting anyone who can help us do this well. After all, there is no such thing as a discussant who is “rational”; there is a reason the website is called “Less Wrong” rather than “Not At All Wrong” or “Always 100% Right.” Assuming that we are all trying to be more rational, there is nobody better to discuss politics with than each other.

The rest of my objection to this meme has little to do with this article, which I think raises lots of great points, and more to do with the response that I’ve seen to it — an eye-rolling, condescending dismissal of politics itself and of anyone who cares about it. Of course, I’m totally fine if a given person isn’t interested in politics and doesn’t want to discuss it, but then they should say, “I’m not interested in this and would rather not discuss it,” or “I don’t think I can be rational in this discussion so I’d rather avoid it,” rather than sneeringly reminding me “You know, politics is the mind-killer,” as though I am an errant child. I’m well-aware of the dangers of politics to good thinking. I am also aware of the benefits of good thinking to politics. So I’ve decided to accept the risk and to try to apply good thinking there. [...]

I’m sure there are also people who disagree with the article itself, but I don’t think I know those people personally. And to add a political dimension (heh), it’s relevant that most non-LW people (like me) initially encounter “politics is the mind-killer” being thrown out in comment threads, not through reading the original article. My opinion of the concept improved a lot once I read the article.

In the same thread, Andrew Mahone added, “Using it in that sneering way, Miri, seems just like a faux-rationalist version of ‘Oh, I don’t bother with politics.’ It’s just another way of looking down on any concerns larger than oneself as somehow dirty, only now, you know, rationalist dirty.” To which Miri replied: “Yeah, and what’s weird is that that really doesn’t seem to be Eliezer’s intent, judging by the eponymous article.”

Eliezer replied briefly, to clarify that he wasn't generally thinking of problems that can be directly addressed in local groups (but happen to be politically charged) as "politics":

Hanson’s “Tug the Rope Sideways” principle, combined with the fact that large communities are hard to personally influence, explains a lot in practice about what I find suspicious about someone who claims that conventional national politics are the top priority to discuss. Obviously local community matters are exempt from that critique! I think if I’d substituted ‘national politics as seen on TV’ in a lot of the cases where I said ‘politics’ it would have more precisely conveyed what I was trying to say.

But that doesn't resolve the issue. Even if local politics is more instrumentally tractable, the worry about polarization and factionalization can still apply, and may still make it a poor epistemic training ground.

A subtler problem with banning “political” discussions on a blog or at a meet-up is that it’s hard to do fairly, because our snap judgments about what counts as “political” may themselves be affected by partisan divides. In many cases the status quo is thought of as apolitical, even though objections to the status quo are ‘political.’ (Shades of Pretending to be Wise.)

Because politics gets personal fast, it’s hard to talk about it successfully. But if you’re trying to build a community, build friendships, or build a movement, you can’t outlaw everything ‘personal.’

And selectively outlawing personal stuff gets even messier. Last year, daenerys shared anonymized stories from women, including several that discussed past experiences where the writer had been attacked or made to feel unsafe. If those discussions are made off-limits because they relate to gender and are therefore ‘political,’ some folks may take away the message that they aren’t allowed to talk about, e.g., some harmful or alienating norm they see at meet-ups. I haven’t seen enough discussions of this failure mode to feel super confident people know how to avoid it.

Since this is one of the LessWrong memes that’s most likely to pop up in cross-subcultural dialogues (along with the even more ripe-for-misinterpretation “policy debates should not appear one-sided“…), as a first (very small) step, my action proposal is to obsolete the ‘mind-killer’ framing. A better phrase for getting the same work done would be ‘politics is hard mode’:

1. ‘Politics is hard mode’ emphasizes that ‘mind-killing’ (= epistemic difficulty) is quantitative, not qualitative. Some things might instead fall under Middlingly Hard Mode, or under Nightmare Mode…

2. ‘Hard’ invites the question ‘hard for whom?’, more so than ‘mind-killer’ does. We’re used to the fact that some people and some contexts change what’s ‘hard’, so it’s a little less likely we’ll universally generalize.

3. ‘Mindkill’ connotes contamination, sickness, failure, weakness. In contrast, ‘Hard Mode’ doesn’t imply that a thing is low-status or unworthy. As a result, it’s less likely to create the impression (or reality) that LessWrongers or Effective Altruists dismiss out-of-hand the idea of hypothetical-political-intervention-that-isn’t-a-terrible-idea. Maybe some people do want to argue for the thesis that politics is always useless or icky, but if so it should be done in those terms, explicitly — not snuck in as a connotation.

4. ‘Hard Mode’ can’t readily be perceived as a personal attack. If you accuse someone of being ‘mindkilled’, with no context provided, that smacks of insult — you appear to be calling them stupid, irrational, deluded, or the like. If you tell someone they’re playing on ‘Hard Mode,’ that’s very nearly a compliment, which makes your advice that they change behaviors a lot likelier to go over well.

5. ‘Hard Mode’ doesn’t risk bringing to mind (e.g., gendered) stereotypes about communities of political activists being dumb, irrational, or overemotional.

6. ‘Hard Mode’ encourages a growth mindset. Maybe some topics are too hard to ever be discussed. Even so, ranking topics by difficulty encourages an approach where you try to do better, rather than merely withdrawing. It may be wise to eschew politics, but we should not fear it. (Fear is the mind-killer.)

7. Edit: One of the larger engines of conflict is that people are so much worse at noticing their own faults and biases than noticing others'. People will be relatively quick to dismiss others as 'mindkilled,' while frequently flinching away from or just-not-thinking 'maybe I'm a bit mindkilled about this.' Framing the problem as a challenge rather than as a failing might make it easier to be reflective and even-handed.

This is not an attempt to get more people to talk about politics. I think this is a better framing whether or not you trust others (or yourself) to have productive political conversations.

When I playtested this post, Ciphergoth raised the worry that 'hard mode' isn't scary-sounding enough. As dire warnings go, it's light-hearted—exciting, even. To which I say: good. Counter-intuitive fears should usually be argued into people (e.g., via Eliezer's politics sequence), not connotation-ninja'd or chanted at them. The cognitive content is more clearly conveyed by 'hard mode,' and if some group (people who love politics) stands to gain the most from internalizing this message, the message shouldn't cast that very group (people who love politics) in an obviously unflattering light. LW seems fairly memetically stable, so the main issue is what would make this meme infect friends and acquaintances who haven't read the sequences. (Or Dune.)

If you just want a scary personal mantra to remind yourself of the risks, I propose 'politics is SPIDERS'. Though 'politics is the mind-killer' is fine there too.

If you and your co-conversationalists haven’t yet built up a lot of trust and rapport, or if tempers are already flaring, conveying the message ‘I’m too rational to discuss politics’ or ‘You’re too irrational to discuss politics’ can make things worse. In that context, ‘politics is the mind-killer’ is the mind-killer. At least, it’s a needlessly mind-killing way of warning people about epistemic hazards.

‘Hard Mode’ lets you speak as the Humble Aspirant rather than the Aloof Superior. Strive to convey: ‘I’m worried I’m too low-level to participate in this discussion; could you have it somewhere else?’ Or: ‘Could we talk about something closer to Easy Mode, so we can level up together?’ More generally: If you’re worried that what you talk about will impact group epistemology, you should be even more worried about how you talk about it.

A Parable of Elites and Takeoffs

23 gwern 30 June 2014 11:04PM

Let me tell you a parable of the future. Let’s say, 70 years from now, in a large Western country we’ll call Nacirema.

One day far from now: scientific development has continued apace, and a large government project (with, unsurprisingly, a lot of military funding) has taken the scattered pieces of cutting-edge research and put them together into a single awesome technology, which could revolutionize (or at least, vastly improve) all sectors of the economy. Leading thinkers had long forecast that this area of science’s mysteries would eventually yield to progress, despite theoretical confusion and perhaps-disappointing initial results and the scorn of more conservative types and the incomprehension (or outright disgust, for ‘playing god’) of the general population, and at last - it had! The future was bright.

Unfortunately, it was hurriedly decided to use an early prototype outside the lab in an impoverished foreign country. Whether out of arrogance, bureaucratic inertia, overconfidence on the part of the involved researchers, condescending racism, the need to justify the billions of grant-dollars that cumulatively went into the project over the years by showing some use of it - whatever, the reasons no longer mattered after the final order was signed. The technology was used, but the consequences turned out to be horrific: over a brief period of what seemed like mere days, entire cities collapsed and scores - hundreds - of thousands of people died. (Modern economies are extremely interdependent and fragile, and small disruptions can have large consequences; more people died in the chaos of the evacuation of the areas around Fukushima than will die of the radiation.)


Democracy and individual liberty; decentralised prediction markets

-1 Chrysophylax 15 March 2014 12:27PM

A pair of links I found recently (via Marginal Revolution) and haven't found on LW:

 

http://www.cato-unbound.org/2014/03/10/mark-s-weiner/paradox-modern-individualism

https://bitcointalk.org/index.php?topic=475054.0;all

 

The former discusses liberty in the context of clannish behaviour, arguing that it is the existence of the institutions of modern democracies that allows people individual liberty, as it precludes the need for clan structures (extended family groups, crime syndicates, patronage networks and such).

The latter is an author's summary of a white paper on decentralised Bitcoin prediction markets, with a link to the paper.

[LINK] Joseph Bottum on Politics as the Mindkiller

2 Salemicus 27 February 2014 07:40PM

One of my favourite Less Wrong articles is Politics is the mindkiller. Part of the reason political discussion is so bad is the poor incentives - if you have little chance to change the outcome, then there is little reason to strive for truth or accuracy - but a large part of the reason is our pre-political attitudes and dispositions. I don't mean to suggest that there is a neat divide; clearly, there is a reflexive relation between the incentives within political discussion and our view of the appropriate purpose and scope of politics. Nevertheless, I think it's a useful distinction to make, and so I applaud the fact that Eliezer doesn't start his essays on the subject by talking about incentives, feedback or rational irrationality - instead he starts with the fact that our approach to politics is instinctively tribal.

This brings me to Joseph Bottum's excellent recent article in The American, The Post-Protestant Ethic and Spirit of America. This charts what he sees as the tribal changes within America that have shaped current attitudes to politics. I think it's best seen in conjunction with Arnold Kling's excellent The Three Languages of Politics; while Kling talks about the political language and rhetoric of modern American political groupings, Bottum's essay is more about the social changes that have led to these kinds of language and rhetoric.

We live in what can only be called a spiritual age, swayed by its metaphysical fears and hungers, when we imagine that our ordinary political opponents are not merely mistaken, but actually evil. When we assume that past ages, and the people who lived in them, are defined by the systematic crimes of history. When we suppose that some vast ethical miasma, racism, radicalism, cultural self-hatred, selfish blindness, determines the beliefs of classes other than our own. When we can make no rhetorical distinction between absolute wickedness and the people with whom we disagree. The Republican Congress is the Taliban. President Obama is a Communist. Wisconsin’s governor is a Nazi.

...

The real question, of course, is how and why this happened. How and why politics became a mode of spiritual redemption for nearly everyone in America, but especially for the college-educated upper-middle class, who are probably best understood not as the elite, but as the elect, people who know themselves as good, as relieved of their spiritual anxieties by their attitudes toward social problems.

Video of a related lecture can also be found here.

Link: Poking the Bear (Podcast)

0 James_Miller 27 February 2014 03:43PM

A Dan Carlin podcast about how the United States is foolishly antagonizing the Russians over Ukraine. Carlin makes an analogy as to how the United States would feel if Russia helped overthrow the government of Mexico to install an anti-American government under conditions that might result in a Mexican civil war. Because of the Russian nuclear arsenal, even a tiny chance of a war between the United States and Russia has a huge negative expected value.

How big of an impact would cleaner political debates have on society?

4 adamzerner 06 February 2014 12:24AM

See this Newsroom clip.

Basically, their news network is trying to change the way political debates work by having the moderator force the candidates to answer the questions that are asked of them, not interrupt each other, justify arguments that are based on obvious falsehoods, etc.

How big of a positive impact do you guys think that this would have on society?

My initial thoughts are that it would be huge. It would lead to better politicians, which would be a high level of action. The positive effects would trickle down into many aspects of our society.

The question then becomes, "can we make this happen?". I don't see a way right now, but the idea has enough upside to me that I keep it in the back of my mind in case I come up with a plausible way of implementing the change.

Thoughts?

Democracy and rationality

8 homunq 30 October 2013 12:07PM

Note: This is a draft; so far, about the first half is complete. I'm posting it to Discussion for now; when it's finished, I'll move it to Main. In the meantime, I'd appreciate comments, including suggestions on style and/or format. In particular, if you think I should(n't) try to post this as a sequence of separate sections, let me know.

Summary: You want to find the truth? You want to win? You're gonna have to learn the right way to vote. Plurality voting sucks; better voting systems are built from the blocks of approval, medians (Bucklin cutoffs), delegation, and pairwise opposition. I'm working to promote these systems and I want your help.

Contents: 1. Overblown¹ rhetorical setup ... 2. Condorcet's ideals and Arrow's problem ... 3. Further issues for politics ... 4. Rating versus ranking; a solution? ... 5. Delegation and SODA ... 6. Criteria and pathologies ... 7. Representation, Proportional representation, and Sortition ... 8. What I'm doing about it and what you can ... 9. Conclusions and future directions ... 10. Appendix: voting systems table ... 11. Footnotes

1.

This is a website focused on becoming more rational. But that can't just mean getting a black belt in individual epistemic rationality. In a situation where you're not the one making the decision, that black belt is just a recipe for frustration.

Of course, there's also plenty of content here about how to interact rationally; how to argue for truth, including both hacking yourself to give in when you're wrong and hacking others to give in when they are. You can learn plenty here about Aumann's Agreement Theorem on how two rational Bayesians should never knowingly disagree.

But "two rational Bayesians" isn't a whole lot better as a model for society than "one rational Bayesian". Aspiring to be rational is well and good, but the Socratic ideal of a world tied together by two-person dialogue alone is as unrealistic as the sociopath's ideal of a world where their own voice rules alone. Society needs structures for more than two people to interact. And just as we need techniques for checking irrationality in one- and two-person contexts, we need them, perhaps all the more, in multi-person contexts.

Most of the basic individual and dialogical rationality techniques carry over. Things like noticing when you are confused, or making your opponent's arguments into a steel man, are still perfectly applicable. But there's also a new set of issues when n>2: the issues of democracy and voting. For a group of aspiring rationalists to come to a working consensus, of course they need to begin by evaluating and discussing the evidence, but eventually it will be time to cut off the discussion and just vote. When they do so, they should understand the strengths and pitfalls of voting in general and of their chosen voting method in particular.

And voting's not just useful for an aspiring rationalist community. As it happens, it's an important part of how governments are run. Discussing politics may be a mind-killer in many contexts, but there are an awful lot of domains where politics is a part of the road to winning.² Understanding voting processes a little bit can help you navigate that road; understanding them deeply opens the possibility of improving that road and thus winning more often.

2. Collective rationality: Condorcet's ideals and Arrow's problem

Imagine it's 1785, and you're a member of the French Academy of Sciences. You're rubbing elbows with most of the giants of science and mathematics of your day: Coulomb, Fourier, Lalande, Lagrange, Laplace, Lavoisier, Monge; even the odd foreign notable like Franklin with his ideas to unify electrostatics and electric flow.

They'll remember your names

One day, they'll put your names in front of lots of cameras (even though that foreign yokel Franklin will be in more pictures)

And this academy, with many of the smartest people in the world, has votes on stuff. Who will be our next president; who should edit and schedule our publications; etc. You're sure that if you all could just find the right way to do the voting, you'd get the right answer. In fact, you can easily prove that, or something like it: if a group is deciding between one right and one wrong option, and each member is independently more than 50% likely to get it right, then as the group size grows the chance of a majority vote choosing the right option goes to 1.
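
To make the jury theorem concrete, here is a minimal sketch (standard library only; the 55%-accuracy figure is an invented assumption) that computes the probability that a strict majority of n independent voters gets a binary question right:

```python
# Condorcet's Jury Theorem, numerically: independent voters, each correct with
# probability p > 0.5, yield a majority that is correct with probability -> 1.
from math import comb

def majority_correct(n, p):
    """Probability that a strict majority of n independent voters is correct."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101, 1001):
    print(n, round(majority_correct(n, 0.55), 4))
# With p = 0.55, majority accuracy climbs from 0.55 toward 1.0 as n grows.
```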

But somehow, there's still annoying politics getting in the way. Some people seem to win the elections simply because everyone expects them to win. So last year, the academy decided on a new election system to use, proposed by your rival, Charles de Borda, in which candidates get different points for being a voter's first, second, or third choice, and the one with the most points wins. But you're convinced that this new system will lead to the opposite problem: people who win the election precisely because nobody expected them to win, by getting the points that voters strategically don't want to give to a strong rival. But when people point that possibility out to Borda, he only huffs that "my system is meant for honest men!"
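
Condorcet's worry about Borda is easy to reproduce in a few lines. This is a hedged sketch with invented ballots and candidate names: a two-voter minority "buries" the majority favorite under a no-hope candidate and flips the result.

```python
# Borda count: with three candidates, a ballot's 1st/2nd/3rd choices earn
# 2/1/0 points; the highest total wins.
def borda(ballots, points=(2, 1, 0)):
    totals = {}
    for b in ballots:
        for cand, pts in zip(b, points):
            totals[cand] = totals.get(cand, 0) + pts
    return totals

honest = [["A", "B", "C"]] * 3 + [["B", "A", "C"]] * 2
print(borda(honest))   # {'A': 8, 'B': 7, 'C': 0}: A, the majority favorite, wins

# B's two supporters bury the strong rival A beneath the no-hope candidate C:
buried = [["A", "B", "C"]] * 3 + [["B", "C", "A"]] * 2
print(borda(buried))   # {'A': 6, 'B': 7, 'C': 2}: now B wins
```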

So with your proof of the above intuitive, useful result about two-way elections, you try to figure out how to reduce an n-way election to the two-candidate case. Clearly, you can show that Borda's system will frequently give the wrong results from that perspective. But frustratingly, you find that there could sometimes be no right answer; that there will be no candidate who would beat all the others in one-on-one races. A crack has opened up; could it be that the collective decisions of intelligent individual rational agents could be irrational?

Of course, the "you" in this story is the Marquis de Condorcet, and the year 1785 is when he published his Essai sur l’application de l’analyse à la probabilité des décisions rendues à la pluralité des voix, a work devoted to the question of how to achieve collective rationality. The theorem referenced above is Condorcet's Jury Theorem, which seems to offer hope that democracy can point the way from individually-imperfect rationality towards an ever-more-perfect collective rationality. Just as Aumann's Agreement Theorem shows that two rational agents should always move towards consensus, the Condorcet Jury Theorem apparently shows that if you have enough rational agents, the resulting consensus will be correct.

But as I said, Condorcet also opened a crack in that hope: the possibility that collective preferences will be cyclical. If the assumptions of the jury theorem don't hold — if each voter doesn't have a >50% chance of being right on a randomly-selected question, OR if the correctness of two randomly-selected voters is correlated — then individually-sensible choices can lead to collectively-ridiculous ones.

What do I mean by "collectively-ridiculous"? Let's imagine that the Rationalist Marching Band is choosing the colors for their summer, winter, and spring uniforms, and that they all agree that the only goal is to have as much as possible of the best possible colors. The summer-style uniforms come in red or blue, and they vote and pick blue; the winter-style ones come in blue or green, and they pick green; and the spring ones come in green or red, and they pick red.

Obviously, this makes us doubt their collective rationality. If, as they all agree they should, they had a consistent favorite color, they should have chosen that color both times that it was available, rather than choosing three different colors in the three cases. Theoretically, the salesperson could use such a fact to pump money out of them; for instance, offering to let them "trade up" their spring uniform from red to blue, then to green, then back to red, charging them a small fee each time; if they voted consistently as above, they would agree to each trade (though of course in reality human voters would probably catch on to the trick pretty soon, so the abstract ideal of an unending circular money pump wouldn't work).
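
The band's predicament can be reproduced directly. The following sketch uses three invented transitive ballots whose pairwise majority decisions nonetheless form a cycle:

```python
# Three individually consistent (transitive) rankings whose pairwise majority
# votes are collectively cyclic. Colors and rankings are illustrative.
from itertools import combinations

ballots = [
    ["blue", "red", "green"],   # voter 1's honest ranking, best to worst
    ["red", "green", "blue"],   # voter 2
    ["green", "blue", "red"],   # voter 3
]

def pairwise_winner(a, b):
    """Return whichever of a, b a majority of ballots ranks higher."""
    a_votes = sum(1 for r in ballots if r.index(a) < r.index(b))
    return a if a_votes > len(ballots) / 2 else b

for a, b in combinations(["blue", "red", "green"], 2):
    print(f"{a} vs {b}: majority picks {pairwise_winner(a, b)}")
# blue beats red, red beats green, and green beats blue: a Condorcet cycle,
# even though every individual ballot is perfectly transitive.
```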

This is the kind of irrationality that Condorcet showed was possible in collective decisionmaking. He also realized that there was a related issue with logical inconsistencies. If you were to take a vote on 3 logically related propositions — say, "Should we have a Minister of Silly Walks, to be appointed by the Chancellor of the Excalibur", "Should we have a Minister of Silly Walks, but not appointed by the Chancellor of the Excalibur", and "Should we in fact have a Minister of Silly Walks at all", where the third cannot be true unless one of the first two is — then you could easily get majority votes for inconsistent results — in this case, no, no, and yes, respectively. Obviously, there are many ways to fix the problem in this simple case — probably many less-wrong'ers would suggest some Bayesian tricks related to logical networks and treating votes as evidence⁸ — but it's a tough problem in general even today, especially when the logical relationships can be complex, and Condorcet was quite right to be worried about its implications for collective rationality.³

And that's not the only tough problem he correctly foresaw. Nearly 200 years later and an ocean away, in the 1960s, Kenneth Arrow showed that it was impossible for a preferential voting system to avoid the problem of a "Condorcet cycle" of preferences. Arrow's theorem shows that any voting system which can consistently give the same winner (or, in ties, winners) for the same voter preferences; which does not make one voter the effective dictator; which is sure to elect a candidate if all voters prefer them; and which will switch the results for two candidates if you switch their names on all the votes... must exhibit, in at least some situation, the pathology that befell the Rationalist Marching Band above, or in other words, must fail "independence of irrelevant alternatives".

Arrow's theorem is far from obvious a priori, but the proof is not hard to understand intuitively using Condorcet's insight. Say that there are three candidates, X, Y, and Z, with roughly equal bases of support; and that they form a Condorcet cycle, because in two-way races, X would beat Y with help from Z supporters, Y would beat Z with help from X supporters, and Z would beat X with help from Y supporters. So whoever wins in the three-way race — say, X — just remove the candidate who would have lost to them — Y in this case — and that "irrelevant" change will make the third candidate — Z in this case — the winner.

Summary of above: Collective rationality is harder than individual or two-way rationality. Condorcet saw the problem and tried to solve it, but Arrow saw that Condorcet had been doomed to fail.

3. Further issues for politics

So Condorcet's ideals of better rationality through voting appear to be in ruins. But at least we can hope that voting is a good way to do politics, right?

Not so fast. Arrow's theorem quickly led to further disturbing results. Alan Gibbard (and also Mark Satterthwaite) extended it to show that there is no voting system which doesn't encourage strategic voting. That is, if you view a voting system as a class of games where the finite players and finite available strategies are fixed, no player is effectively a dictator, and the only thing that varies are the payoffs for each player from each outcome, there is no voting system where you can derive your best strategic vote purely by looking "honestly" at your own preferences; there is always the possibility of situations where you have to second-guess what others will do.

Amartya Sen piled on with another depressing extension of Arrow's logic. He showed that there is no possible way of aggregating individual choices into collective choice that satisfies two simple criteria. First, it shouldn't choose pareto-dominated outcomes; if everyone prefers situation XYZ to ABC, then ABC is not chosen. Second, it is "minimally liberal"; that is, there are at least two people who each get to freely make their own decision on at least one specific issue each, no matter what, so for instance I always get to decide between X and A (in Gibbard's⁴ example, colors for my house), and you always get to decide between Y and B (colors for your own house). The problem is that if you nosily care more about my house's color, the decision that should have been mine, and I nosily care about yours, more than we each care about our own, then the pareto-dominant situation is the one where we don't decide our own houses; and that nosiness could, in theory, be the case for any specific choice that, a priori, someone might have labelled as our Inalienable Right. It's not such a surprising result when you think about it that way, but it does clearly show that unswerving ideals of Democracy and Liberty will never truly be compatible.
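
A toy numerical version of the house-painting paradox makes Sen's point concrete. The utility numbers below are invented for illustration: each person mildly prefers one color for their own house but cares twice as much about the other person's.

```python
# Sen's paradox with made-up utilities. My house is X or A; yours is Y or B.
def my_u(mine, yours):
    return (1 if mine == "X" else 0) + (2 if yours == "B" else 0)

def your_u(mine, yours):
    return (1 if yours == "Y" else 0) + (2 if mine == "A" else 0)

# Minimal liberalism: I freely pick my own color (X is worth +1 to me) and you
# freely pick yours (Y is worth +1 to you), so the "liberal" outcome is (X, Y).
for outcome in [("X", "Y"), ("A", "B")]:
    print(outcome, "me:", my_u(*outcome), "you:", your_u(*outcome))
# (X, Y) gives each of us utility 1, but (A, B) gives each of us utility 2:
# the outcome that respects both our rights is Pareto-dominated.
```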

Meanwhile, "public choice" theorists⁵ like Duncan Black, James Buchanan, etc. were busy undermining the idea of democratic government from another direction: the motivations of the politicians and bureaucrats who are supposed to keep it running. They showed that various incentives, including the strange voting scenarios explored by Condorcet and Arrow, would tend to open a gap between the motives of the people and those of the government, and that strategic voting and agenda-setting within a legislature would tend to extend the impact of that gap. Where Gibbard and Sen had proved general results, these theorists worked from specific examples. And in one aspect, at least, their analysis is devastatingly unanswerable: the near-ubiquitous "democratic" system of plurality voting, also known as first-past-the-post or vote-for-one or biggest-minority-wins, is terrible in both theory and practice.

So, by the 1980s, things looked pretty depressing for the theory of democracy. Politics, the theory went, was doomed forever to be worse than a sausage factory: disgusting on the inside and distasteful even from outside.

Should an ethical rationalist just give up on politics, then? Of course not. As long as the results it produces are important, it's worth trying to optimize. And as soon as you take the engineer's attitude of optimizing, instead of dogmatically searching for perfection or uselessly whining about the problems, the results above don't seem nearly as bad.

From this engineer's perspective, public choice theory serves as an unsurprising warning that tradeoffs are necessary, but more usefully, as a map of where those tradeoffs can go particularly wrong. In particular, its clearest lesson, in all-caps bold with a blink tag, that PLURALITY IS BAD, can be seen as a hopeful suggestion that other voting systems may be better. Meanwhile, the logic of both Sen's and Gibbard's theorems is built on Arrow's earlier result. So if we could find a way around Arrow, it might help resolve the whole issue.

Summary of above: Democracy is the worst political system... (...except for all the others?) But perhaps it doesn't have to be quite so bad as it is today.

4. Rating versus ranking

So finding a way around Arrow's theorem could be key to this whole matter. As a mathematical theorem, of course, the logic is bulletproof. But it does make one crucial assumption: that the only inputs to a voting system are rankings, that is, voters' ordinal preference orders for the candidates. No distinctions can be made using ratings or grades; that is, as long as you prefer X to Y to Z, the strength of those preferences can't matter. Whether you put Y almost up near X or way down next to Z, the result must be the same.

Relax that assumption, and it's easy to create a voting system which meets Arrow's criteria. It's called Score voting⁶, and it just means rating each candidate with a number from some fixed interval (abstractly speaking, a real number; but in practice, usually an integer); the scores are added up and the highest total or average wins. (Unless there are missing values, total and average amount to the same thing.) You've probably used it yourself on Yelp, IMDB, or similar sites. And it clearly passes all of Arrow's criteria. Non-dictatorship? Check. Unanimity? Check. Symmetry over switching candidate names? Check. Independence of irrelevant alternatives? In the mathematical sense — that is, as long as the scores for other candidates are unchanged — check.
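
A score-voting tally is only a few lines; here is a minimal sketch with invented ballots and candidate names, rating on a 0-5 scale:

```python
# Score voting: each ballot rates every candidate; the highest average wins.
from statistics import mean

ballots = [
    {"X": 5, "Y": 4, "Z": 0},
    {"X": 0, "Y": 5, "Z": 1},
    {"X": 2, "Y": 3, "Z": 5},
]

averages = {c: mean(b[c] for b in ballots) for c in ballots[0]}
print(averages, "->", max(averages, key=averages.get))
# Y wins with an average of 4.0. Note that deleting a losing candidate leaves
# the other candidates' averages untouched, which is the (fixed-scores) sense
# in which score voting passes independence of irrelevant alternatives.
```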

So score voting is an ideal system? Well, it's certainly a far sight better than plurality. But let's check it against Sen and against Gibbard.

Sen's theorem was based on a logic similar to Arrow's. However, while Arrow's theorem deals with broad outcomes like which candidate wins, Sen's deals with finely-grained outcomes like (in the example we discussed) how each separate house should be painted. Extending the cardinal numerical logic of score voting to such finely-grained outcomes, we find we've simply reinvented markets. While markets can be great things and often work well in practice, Sen's result still holds in this case; if everything is on the market, then there is no decision which is always yours to make. But since, in practice, as long as you aren't destitute, you tend to be able to make the decisions you care the most about, Sen's theorem seems to have lost its bite in this context.

What about Gibbard's theorem on strategy? Here, things are not so easy. Yes, Gibbard, like Sen, parallels Arrow. But while Arrow deals with what's written on the ballot, Gibbard deals with what's in the voter's head. In particular, if a voter prefers X to Y by even the tiniest margin, Gibbard assumes (not unreasonably) that they may be willing to vote however they need to, if by doing so they can ensure X wins instead of Y. Thus, the internal preferences Gibbard treats are, effectively, just ordinal rankings; and the cardinal trick by which score voting avoided Arrovian problems no longer works.

How does score voting deal with strategic issues in practice? The answer to that has two sides. On the one hand, score never requires voters to be actually dishonest. Unlike the situation in a ranked system such as plurality, where we all know that the strategic vote may be to dishonestly ignore your true favorite and vote for a "lesser evil" among the two frontrunners, in score voting you never need to vote a less-preferred option above a more-preferred option. At worst, all you have to do is exaggerate some distinctions and minimize others, so that you might end up giving equal votes to less- and more-preferred options.

Did I say "at worst"? I meant, "almost always". Voting strategy only matters to the result when, aside from your vote, two or more candidates are within one vote of being tied for first. Except in unrealistic, perfectly-balanced conditions, as the number of voters rises, the probability that anyone but the two a priori frontrunner candidates is in on this tie falls to zero.⁷ Thus, in score voting, the optimal strategy is nearly always to vote your preferred frontrunner and all candidates above them at the maximum, and your less-preferred frontrunner and all candidates below at the minimum. In other words, strategic score voting is basically equivalent to approval voting, where you give each candidate a 1 or 0 and the highest total wins.
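
That strategic collapse is mechanical enough to write down. Here is a hedged sketch, with invented utilities and frontrunner names, of the threshold rule just described:

```python
# Optimal score-voting strategy when only the two frontrunners can plausibly
# tie: max-rate everyone you like at least as much as your preferred
# frontrunner, min-rate everyone else.
def strategic_score_ballot(utilities, frontrunners, lo=0, hi=5):
    threshold = max(utilities[f] for f in frontrunners)
    return {c: (hi if utilities[c] >= threshold else lo) for c in utilities}

print(strategic_score_ballot({"X": 0.9, "Y": 0.5, "Z": 0.1},
                             frontrunners=("Y", "Z")))
# {'X': 5, 'Y': 5, 'Z': 0}: every rating is an extreme, so the score ballot
# has collapsed into an approval (all-or-nothing) ballot.
```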

In one sense, score voting reducing to approval is OK. Approval voting is not a bad system at all. For instance, if there's a known majority Condorcet winner — a candidate who could beat any other by a majority in a one-on-one race — and voters are strategic — they anticipate the unique strong Nash equilibrium, the situation where no group of voters could improve the outcome for all its members by changing their votes, whenever such a unique equilibrium exists — then the Condorcet winner will win under approval. That's a lot of words to say that approval will get the "democratic" results you'd expect in most cases.

But in another sense, it's a problem. If one side of an issue is more inclined to be strategic than the other side, the more-strategic faction could win even if it's a minority. That clashes with many people's ideals of democracy; and worse, it encourages mind-killing political attitudes, where arguments are used as soldiers rather than as ways to seek the truth.

But score and approval voting are not the only systems which escape Arrow's theorem through the trapdoor of ratings. If score voting, using the average of voter ratings, too-strongly encourages voters to strategically seek extreme ratings, then why not use the median rating instead? We know that medians are less sensitive to outliers than averages. And indeed, median-based systems are more resistant to one-sided strategy than average-based ones, giving better hope for reasonable discussion to prosper. That is to say, in a simple model, a minority would need twice as much strategic coordination under median as under average, in order to overcome a majority; and there's good reason to believe that, because of natural factional separation, reality is even more favorable to median systems than that model.
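
The mean-versus-median intuition can be checked on a toy example. This sketch uses invented ratings, in which a three-voter minority min-rates a candidate that four voters rate honestly:

```python
# Why medians resist one-sided strategy better than averages: a strategic
# minority drags the mean much further than the median.
from statistics import mean, median

honest = [6, 6, 6, 5, 5, 4, 4]       # seven voters' 0-10 ratings of a candidate
strategic = [0, 0, 0] + honest[3:]   # a three-voter minority min-rates instead

for tally in (mean, median):
    print(tally.__name__, round(tally(honest), 2), "->", round(tally(strategic), 2))
# The mean drops from 5.14 to 2.57; the median only slips from 5 to 4.
```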

There are several different median systems available. In the US during the 1910-1925 Progressive Era, early versions collectively called "Bucklin voting" were used briefly in over a dozen cities. These reforms, based on counting all top preferences, then adding lower preferences one level at a time until some candidate(s) reach a majority, were all rolled back soon after, principally by party machines upset at upstart challenges or victories. The possibility of multiple, simultaneous majorities is a principal reason for the variety of Bucklin/Median systems. Modern proposals of median systems include Majority Approval Voting, Majority Judgment, and Graduated Majority Judgment, which would probably give the same winners almost all of the time. An important detail is that most median system ballots use verbal or letter grades rather than numeric scores. This is justifiable because the median is preserved under any monotonic transformation, and studies suggest that it would help discourage strategic voting.
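
The Bucklin count described above is simple enough to sketch directly; the ballots and names here are invented. Note that in the second round both X and Y cross the majority line at once, illustrating the multiple-simultaneous-majorities wrinkle that distinguishes the Bucklin variants (this sketch breaks the tie by the higher count):

```python
# Bucklin voting: tally first choices; if no majority, fold in second
# choices, then third, until some candidate reaches a majority.
ballots = [
    ["X", "Y", "Z"], ["X", "Y", "Z"],
    ["Y", "Z", "X"], ["Y", "X", "Z"],
    ["Z", "Y", "X"],
]

def bucklin(ballots):
    majority = len(ballots) // 2 + 1
    counts = dict.fromkeys(ballots[0], 0)
    for rank in range(len(ballots[0])):   # add one preference level per round
        for b in ballots:
            counts[b[rank]] += 1
        leaders = [c for c in counts if counts[c] >= majority]
        if leaders:
            return max(leaders, key=counts.get), rank + 1, dict(counts)
    return None

print(bucklin(ballots))
# ('Y', 2, {'X': 3, 'Y': 5, 'Z': 2}): Y wins in round 2, where X also
# reached a majority (3 of 5): two simultaneous majorities.
```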

Serious attention to rated systems like approval, score, and median systems barely began in the 1980s, and didn't really pick up until 2000. Meanwhile, the increased amateur interest in voting systems in this period — perhaps partially attributable to the anomalous 2000 US presidential election, or to more-recent anomalies in the UK, Canada, and Australia — has led to new discoveries in ranked systems as well. Though such systems are still clearly subject to Arrow's theorem, new "improved Condorcet" methods, which use certain tricks to count a voter's equal preferences between two candidates on either side of the ledger depending on the strategic needs, seem to offer promise that Arrovian pathologies can be kept to a minimum.

With this embarrassment of riches of systems to choose from, how should we evaluate which is best? Well, at least one thing is a clear consensus: plurality is a horrible system. Beyond that, things are more controversial; there are dozens of possible objective criteria one could formulate, and any system's inventor and/or supporters can usually formulate some criterion by which it shines.

Ideally, we'd like to measure the utility of each voting system in the real world. Since that's impossible — it would take not just a statistically-significant sample of large-scale real-world elections for each system, but also some way to measure the true internal utility of a result in situations where voters are inevitably strategically motivated to lie about that utility — we must do the next best thing, and measure it in a computer, with simulated voters whose utilities are assigned measurable values. Unfortunately, that requires assumptions about how those utilities are distributed, how voter turnout is decided, and how and whether voters strategize. At best, those assumptions can be varied, to see if findings are robust.

In 2000, Warren Smith performed such simulations for a number of voting systems. He found that score voting had, very robustly, one of the top expected social utilities (or, as he termed it, lowest Bayesian regret). Close on its heels were a median system and approval voting. Unfortunately, though he explored a wide parameter space in terms of voter utility models and inherent strategic inclination of the voters, his simulations did not include voters who were more inclined to be strategic when strategy was more effective. His strategic assumptions were also unfavorable to ranked systems, and slightly unrealistic in other ways. Still, though certain of his numbers must be taken with a grain of salt, some of his results were large and robust enough to be trusted. For instance, he found that plurality voting and instant runoff voting were clearly inferior to rated systems; and that approval voting, even at its worst, captured over half of the improvement over plurality that any other system offered.
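
The structure of such a simulation is easy to convey in miniature. This is a stripped-down sketch in the spirit of that methodology, not Smith's actual code: uniform random utilities, honest voters who normalize their score ballots, and regret measured against the utility-maximizing candidate.

```python
import random

def election(n_voters=99, n_cands=5):
    u = [[random.random() for _ in range(n_cands)] for _ in range(n_voters)]
    totals = [sum(v[c] for v in u) for c in range(n_cands)]
    best = max(totals)

    # Plurality: each voter votes for their single favorite.
    plur = [0] * n_cands
    for v in u:
        plur[v.index(max(v))] += 1
    plur_win = plur.index(max(plur))

    # Score: each voter linearly rescales their utilities onto [0, 1].
    score = [0.0] * n_cands
    for v in u:
        lo, hi = min(v), max(v)
        for c in range(n_cands):
            score[c] += (v[c] - lo) / (hi - lo)
    score_win = score.index(max(score))

    # "Regret" = social utility of the best candidate minus the winner's.
    return best - totals[plur_win], best - totals[score_win]

trials = [election() for _ in range(2000)]
for name, i in (("plurality", 0), ("score", 1)):
    print(name, "mean regret:", round(sum(t[i] for t in trials) / len(trials), 3))
# Score voting's mean regret comes out far lower than plurality's; real
# studies add strategic voters, turnout models, and many more systems.
```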

Summary of above: Rated systems, such as approval voting, score voting, and Majority Approval Voting, can avoid the problems of Arrow's theorem. Though they are certainly not immune to issues of strategic voting, they are a clear step up from plurality. Starting with this section, the opinions are my own; the two prior sections were based on general expert views on the topic.

5. Delegation and SODA

Rated systems are not the only way to try to beat the problems of Arrow and Gibbard (/Satterthwaite).

Summary of above:

6. Criteria and pathologies

do.

Summary of above:

7. Representation, proportionality, and sortition

do.

Summary of above:

8. What I'm doing about it and what you can

do.

Summary of above:

9. Conclusions and future directions

do.

Summary of above:

10. Appendix: voting systems table

Compliance of selected systems (table)

The following table shows which of the above criteria are met by several single-winner systems. Note: contains some errors; I'll carefully vet this when I'm finished with the writing. Still generally reliable though.

| System | Majority / MMC | Condorcet / Maj. Condorcet | Condorcet loser | Monotone | Consistency / Participation | Reversal symmetry | IIA | Cloneproof | Polytime / Resolvable | Summable | Equal rankings allowed | Later prefs allowed | Later-no-harm / Later-no-help | FBC: no favorite betrayal |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Approval[nb 1] | Ambiguous | No / Strategic yes[nb 2] | No | Yes | Yes[nb 2] | Yes | Ambiguous | Ambiguous[nb 3] | Yes | O(N) | Yes | No | [nb 4] | Yes |
| Borda count | No | No | Yes | Yes | Yes | Yes | No | No (teaming) | Yes | O(N) | No | Yes | No | No |
| Copeland | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | No (crowding) | Yes/No | O(N²) | Yes | Yes | No | No |
| IRV (AV) | Yes | No | Yes | No | No | No | No | Yes | Yes | O(N!)[nb 5] | No | Yes | Yes | No |
| Kemeny-Young | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | No (teaming) | No/Yes | O(N²)[nb 6] | Yes | Yes | No | No |
| Majority Judgment[nb 7] | Yes[nb 8] | No / Strategic yes[nb 2] | No[nb 9] | Yes | No[nb 10] | No[nb 11] | Yes | Yes | Yes | O(N)[nb 12] | Yes | Yes | No[nb 13]/Yes | Yes |
| Minimax | Yes/No | Yes[nb 14] | No | Yes | No | No | No | No (spoilers) | Yes | O(N²) | Some variants | Yes | No[nb 14] | No |
| Plurality | Yes/No | No | No | Yes | Yes | No | No | No (spoilers) | Yes | O(N) | No | No | [nb 4] | No |
| Range voting[nb 1] | No | No / Strategic yes[nb 2] | No | Yes | Yes[nb 2] | Yes | Yes[nb 15] | Ambiguous[nb 3] | Yes | O(N) | Yes | Yes | No | Yes |
| Ranked pairs | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | Yes | Yes | O(N²) | Yes | Yes | No | No |
| Runoff voting | Yes/No | No | Yes | No | No | No | No | No (spoilers) | Yes | O(N)[nb 16] | No | No[nb 17] | Yes[nb 18] | No |
| Schulze | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | Yes | Yes | O(N²) | Yes | Yes | No | No |
| SODA voting[nb 19] | Yes | Strategic yes / Yes | Yes | Ambiguous[nb 20] | Yes / Up to 4 cand.[nb 21] | Yes[nb 22] | Up to 4 cand.[nb 21] | Up to 4 cand. (then crowds)[nb 21] | Yes[nb 23] | O(N) | Yes | Limited[nb 24] | Yes | Yes |
| Random winner / arbitrary winner[nb 25] | No | No | No | NA | No | Yes | Yes | NA | Yes/No | O(1) | No | No | — | Yes |
| Random ballot[nb 26] | No | No | No | Yes | Yes | Yes | Yes | Yes | Yes/No | O(N) | No | No | — | Yes |

"Yes/No", in a column which covers two related criteria, signifies that the given system passes the first criterion and not the second one.

  1. These criteria assume that all voters vote their true preference order. This is problematic for Approval and Range, where various votes are consistent with the same order. See approval voting for compliance under various voter models.
  2. In Approval, Range, and Majority Judgment, if all voters have perfect information about each other's true preferences and use rational strategy, any Majority Condorcet or Majority winner will be strategically forced – that is, win in the unique Strong Nash equilibrium. In particular, if every voter knows that "A or B are the two most-likely to win" and places their "approval threshold" between the two, then the Condorcet winner, if one exists and is in the set {A,B}, will always win. These systems also satisfy the majority criterion in the weaker sense that any majority can force their candidate to win, if it so desires. (However, as the Condorcet criterion is incompatible with the participation criterion and the consistency criterion, these systems cannot satisfy these criteria in this Nash-equilibrium sense. Laslier, J.-F. (2006) "Strategic approval voting in a large electorate," IDEP Working Papers No. 405 (Marseille, France: Institut D'Economie Publique).)
  3. The original independence of clones criterion applied only to ranked voting methods. (T. Nicolaus Tideman, "Independence of clones as a criterion for voting rules", Social Choice and Welfare Vol. 4, No. 3 (1987), pp. 185–206.) There is some disagreement about how to extend it to unranked methods, and this disagreement affects whether approval and range voting are considered independent of clones. If the definition of "clones" is that "every voter scores them within ±ε in the limit ε→0+", then range voting is immune to clones.
  4. Approval and Plurality do not allow later preferences. Technically speaking, this means that they pass the technical definition of the LNH criteria – if later preferences or ratings are impossible, then such preferences cannot help or harm. However, from the perspective of a voter, these systems do not pass these criteria. Approval, in particular, encourages the voter to give the same ballot rating to a candidate who, in another voting system, would get a later rating or ranking. Thus, for approval, the practically meaningful criterion would be not "later-no-harm" but "same-no-harm" – something neither approval nor any other system satisfies.
  5. The number of piles that can be summed from various precincts is floor((e-1) N!) - 1.
  6. Each prospective Kemeny-Young ordering has score equal to the sum of the pairwise entries that agree with it, and so the best ordering can be found using the pairwise matrix.
  7. Bucklin voting, with skipped and equal rankings allowed, meets the same criteria as Majority Judgment; in fact, Majority Judgment may be considered a form of Bucklin voting. Without allowing equal rankings, Bucklin's criteria compliance is worse; in particular, it fails Independence of Irrelevant Alternatives, which for a ranked method like this variant is incompatible with the Majority Criterion.
  8. Majority Judgment passes the rated majority criterion (a candidate rated solo-top by a majority must win). It does not pass the ranked majority criterion, which is incompatible with Independence of Irrelevant Alternatives.
  9. Majority Judgment passes the "majority Condorcet loser" criterion; that is, a candidate who loses to all others by a majority cannot win. However, if some of the losses are not by a majority (including equal rankings), the Condorcet loser can, theoretically, win in MJ, although such scenarios are rare.
  10. Balinski and Laraki, Majority Judgment's inventors, point out that it meets a weaker criterion they call "grade consistency": if two electorates give the same rating for a candidate, then so will the combined electorate. Majority Judgment explicitly requires that ratings be expressed in a "common language", that is, that each rating have an absolute meaning. They claim that this is what makes "grade consistency" significant. Balinski M. and R. Laraki (2007) «A theory of measuring, electing and ranking». Proceedings of the National Academy of Sciences USA, vol. 104, no. 21, 8720-8725.
  11. Majority Judgment can actually pass or fail reversal symmetry depending on the rounding method used to find the median when there are even numbers of voters. For instance, in a two-candidate, two-voter race, if the ratings are converted to numbers and the two central ratings are averaged, then MJ meets reversal symmetry; but if the lower one is taken, it does not, because a candidate with ["fair","fair"] would beat a candidate with ["good","poor"] with or without reversal. However, for rounding methods which do not meet reversal symmetry, the chances of breaking it are on the order of the inverse of the number of voters; this is comparable with the probability of an exact tie in a two-candidate race, and when there's a tie, any method can break reversal symmetry.
  12. Majority Judgment is summable at order KN, where K, the number of ranking categories, is set beforehand.
  13. Majority Judgment meets a related, weaker criterion: ranking an additional candidate below the median grade (rather than your own grade) of your favorite candidate cannot harm your favorite.
  14. A variant of Minimax that counts only pairwise opposition, not opposition minus support, fails the Condorcet criterion and meets later-no-harm.
  15. Range satisfies the mathematical definition of IIA, that is, if each voter scores each candidate independently of which other candidates are in the race. However, since a given range score has no agreed-upon meaning, it is thought that most voters would either "normalize" or exaggerate their vote so as to give at least one candidate each of the top and bottom possible ratings. In this case, Range would not be independent of irrelevant alternatives. Balinski M. and R. Laraki (2007) «A theory of measuring, electing and ranking». Proceedings of the National Academy of Sciences USA, vol. 104, no. 21, 8720-8725.
  16. Once for each round.
  17. Later preferences are only possible between the two candidates who make it to the second round.
  18. That is, second-round votes cannot harm candidates already eliminated.
  19. Unless otherwise noted, for SODA's compliances:
    • Delegated votes are considered to be equivalent to voting the candidate's predeclared preferences.
    • Ballots only are considered (in other words, voters are assumed not to have preferences that cannot be expressed by a delegated or approval vote).
    • Since at the time of assigning approvals on delegated votes there is always enough information to find an optimum strategy, candidates are assumed to use such a strategy.
  20. For up to 4 candidates, SODA is monotonic. For more than 4 candidates, it is monotonic for adding an approval, for changing from an approval to a delegation ballot, and for changes in a candidate's preferences. However, if changes in a voter's preferences are executed as changes from a delegation to an approval ballot, such changes are not necessarily monotonic with more than 4 candidates.
  21. For up to 4 candidates, SODA meets the Participation, IIA, and Cloneproof criteria. It can fail these criteria in certain rare cases with more than 4 candidates. This is considered here as a qualified success for the Consistency and Participation criteria, which do not intrinsically have to do with numerous candidates, and as a qualified failure for the IIA and Cloneproof criteria, which do.
  22. SODA voting passes reversal symmetry for all scenarios that are reversible under SODA; that is, if each delegated ballot has a unique last choice. In other situations, it is not clear what it would mean to reverse the ballots, but there is always some possible interpretation under which SODA would pass the criterion.
  23. SODA voting is always polytime computable. There are some cases where the optimal strategy for a candidate assigning delegated votes may not be polytime computable; however, such cases are entirely implausible for a real-world election.
  24. Later preferences are only possible through delegation, that is, if they agree with the predeclared preferences of the favorite.
  25. Random winner: a uniformly randomly chosen candidate is the winner. Arbitrary winner: some external entity, not a voter, chooses the winner. These systems are not, properly speaking, voting systems at all, but are included to show that even a horrible system can still pass some of the criteria.
  26. Random ballot: a uniformly randomly chosen ballot determines the winner. This and closely related systems are of mathematical interest because they are the only possible systems which are truly strategy-free, that is, your best vote will never depend on anything about the other voters. They also satisfy both consistency and IIA, which is impossible for a deterministic ranked system. However, this system is not generally considered a serious proposal for a practical method.

11. Footnotes

¹ When I call my introduction "overblown", I mean that I reserve the right to make broad generalizations there, without getting distracted by caveats. If you don't like this style, feel free to skip to section 2.


² Of course, the original "politics is a mind killer" sequence was perfectly clear about this: "Politics is an important domain to which we should individually apply our rationality—but it's a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational." The focus here is on the first part of that quote, because I think Less Wrong as a whole has moved too far in the direction of avoiding politics, as if it were not a domain for rationalists.


³ Bayes developed his theorem decades before Condorcet's Essai, but Condorcet probably didn't know of it, as it wasn't popularized by Laplace until about 30 years later, after Condorcet was dead.


⁴ Yes, this happens to be the same Alan Gibbard from the previous paragraph.


⁵ Confusingly, "public choice" refers to a school of thought, while "social choice" is the name for the broader domain of study. Stop reading this footnote now if you don't want to hear mind-killing partisan identification. "Public choice" theorists are generally seen as politically conservative in the solutions they suggest. It seems to me that the broader "social choice" has avoided taking on a partisan connotation in this sense.


⁶ Score voting is also called "range voting" by some. It is not a particularly new idea — for instance, the "loudest cheer wins" rule of ancient Sparta, and even aspects of honeybees' process for choosing new hives, can be seen as score voting — but it was first analyzed theoretically around 2000. Approval voting, which can be seen as a form of score voting where the scores are restricted to 0 and 1, had entered theory only about two decades earlier, though it too has a history of practical use back to antiquity.


⁷ OK, fine, this is a simplification. As a voter, you have imperfect information about the true level of support and propensity to vote in the superpopulation of eligible voters, so in reality the chances of a decisive tie between candidates other than your two expected frontrunners are non-zero. Still, in most cases, they're utterly negligible.


⁸ This article will focus more on the literature on multi-player strategic voting (competing boundedly-instrumentally-rational agents) than on multi-player Aumann (cooperating boundedly-epistemically-rational agents). If you're interested in the latter, here are some starting points: Scott Aaronson's work is, as far as I know, the state of the art on 2-player Aumann, but its framework assumes that the players have a sophisticated ability to empathize and reason about each others' internal knowledge, and the problems with this that Aaronson plausibly handwaves away in the 2-player case are probably less tractable in the multi-player one. Dalkiran et al deal with an Aumann-like problem over a social network; they find that attempts to "jump ahead" to a final consensus value instead of simply dumbly approaching it asymptotically can lead to failure to converge. And Kanoria et al have perhaps the most interesting result from the perspective of this article; they use the convergence of agents using a naive voting-based algorithm to give a nice upper bound on the difficulty of full Bayesian reasoning itself. None of these papers explicitly considers the problem of coming to consensus on more than one logically-related question at once, though Aaronson's work at least would clearly be easy to extend in that direction, and I think such extensions would be unsurprisingly Bayesian.

Less Wrong’s political bias

-6 Sophronius 25 October 2013 04:38PM

(Disclaimer: This post refers to a certain political party as being somewhat crazy, which got some people upset, so sorry about that. That is not what this post is *about*, however. The article is instead about Less Wrong's social norms against pointing certain things out. I have edited it a bit to try and make it less provocative.)

 

A well-known post around these parts is Yudkowsky’s “politics is the mind killer”. This article proffers an important point: People tend to go funny in the head when discussing politics, as politics is largely about signalling tribal affiliation. The conclusion drawn from this by the Less Wrong crowd seems simple: Don’t discuss political issues, or at least keep it as fair and balanced as possible when you do. However, I feel that there is a very real downside to treating political issues in this way, which I shall try to explain here. Since this post is (indirectly) about politics, I will try to bring this as gently as possible so as to avoid mind-kill. As a result this post is a bit lengthier than I would like it to be, so I apologize for that in advance.

I find that a good way to examine the value of a policy is to ask in which of all possible worlds this policy would work, and in which worlds it would not. So let’s start by imagining a perfectly convenient world: In a universe whose politics are entirely reasonable and fair, people start political parties to represent certain interests and preferences. For example, you might have the kitten party for people who like kittens, and the puppy party for people who favour puppies. In this world Less Wrong’s unofficial policy is entirely reasonable: There is no sense in discussing politics, since politics is only about personal preferences, and any discussion of this can only lead to a “Yay kittens, boo dogs!” emotivism contest. At best you can do a poll now and again to see what people currently favour.

Now let’s imagine a less reasonable world, where things don’t have to happen for good reasons and the universe doesn’t give a crap about what’s fair. In this unreasonable world, you can get a “Thrives through Bribes” party or an “Appeal to emotions” party or a “Do stupid things for stupid reasons” party as well as more reasonable parties that actually try to be about something. In this world it makes no sense to pretend that all parties are equal, because there is really no reason to believe that they are.

As you might have guessed, I believe that we live in the second world. As a result, I do not believe that all parties are equally valid/crazy/corrupt, and as such I like to be able to identify which are the most crazy/corrupt/stupid. Now I happen to be fairly happy with the political system where I live. We have a good number of more-or-less reasonable parties here, and only one major crazy party that gives me the creeps. The advantage of this is that whenever I am in a room with intelligent people, I can safely say something like “That crazy racist party sure is crazy and racist”, and everyone will go “Yup, they sure are, now do you want to talk about something of substance?” This seems to me the only reasonable reply.

The problem is that Less Wrong seems primarily US-based, and in the US… things do not go like this. In the US, it seems to me that there are only two significant parties, one of which is flawed and which I do not agree with on many points, while the other is, well… can I just say that some of the things they profess do not so much sound wrong as they sound crazy? And yet, it seems to me that everyone here is being very careful to not point this out, because doing so would necessarily be favouring one party over the other, and why, that’s politics! That’s not what we do here on Less Wrong!

And from what I can tell, based on the discussion I have seen so far and participated in on Less Wrong, this introduces a major bias. Pick any major issue of contention, and chances are that the two major parties will tend to have opposing views on the subject. And naturally, the saner party of the two tends to hold a more reasonable view, because they are less crazy. But you can’t defend the more reasonable point of view now, because then you’re defending the less-crazy party, and that’s politics. Instead, you can get free karma just by saying something trite like “well, both sides have important points on the matter” or “both parties have their own flaws” or “politics in general are messed up”, because that just sounds so reasonable and fair who doesn’t like things to be reasonable and fair? But I don’t think we live in a reasonable and fair world.

It’s hard to prove the existence of such a bias and so this is mostly just an impression I have. But I can give a couple of points in support of this impression. Firstly there are the frequent accusations of groupthink towards Less Wrong, which I am increasingly though reluctantly prone to agree with. I can’t help but notice that posts which remark on for example *retracted* being a thing tend to get quite a few downvotes while posts that take care to express the nuance of the issue get massive upvotes, regardless of whether there really are two sides to the issue. Then there are the community poll results, which show that for example 30% of Less Wrongers favour a particular political allegiance even though only 1% of voters vote for the most closely corresponding party. I sincerely doubt that this skewed representation is the result of honest and reasonable discussion on Less Wrong that has convinced members to follow what is otherwise a minority view, since I have never seen any such discussion. So without necessarily criticizing the position itself, I have to wonder what causes this skewed representation. I fear that this “let’s not criticize political views” stance is causing Less Wrong to shift towards holding more and more eccentric views, since a lack of criticism can be taken as tacit approval. What especially worries me is that giving the impression that all sides are equal automatically lends credibility to the craziest viewpoint, as proponents of that side can now say that sceptics take their views seriously, which benefits them the most. This seems to me literally the worst possible outcome of any politics debate.

I find that the same rule holds for politics as for life in general: You can try to win or you can give up and lose by default, but you can’t choose not to play.

Dark Arts 101: Winning via destruction and dualism

-13 PhilGoetz 21 September 2013 01:53AM

Recalling first that life is a zero-sum game, it is immediately obvious that the quickest and easiest path to success is not to accomplish things yourself—that's a game for heroes and other suckers—but to tear down the accomplishments and reputations of others. Destruction is easy. The difficulty lies in constructing a situation so that the destruction is to your net benefit.


Another way our brains betray us

3 polymathwannabe 17 September 2013 01:56PM

This appeared in the news yesterday.

http://www.alternet.org/media/most-depressing-discovery-about-brain-ever?paging=off

It turns out that in the public realm, a lack of information isn’t the real problem. The hurdle is how our minds work, no matter how smart we think we are. We want to believe we’re rational, but reason turns out to be the ex post facto way we rationalize what our emotions already want to believe.

...

The bleakest finding was that the more advanced that people’s math skills were, the more likely it was that their political views, whether liberal or conservative, made them less able to solve the math problem. [...] what these studies of how our minds work suggest is that the political judgments we’ve already made are impervious to facts that contradict us.

...

Denial is business-as-usual for our brains. More and better facts don’t turn low-information voters into well-equipped citizens. It just makes them more committed to their misperceptions.

...

When there’s a conflict between partisan beliefs and plain evidence, it’s the beliefs that win. The power of emotion over reason isn’t a bug in our human operating systems, it’s a feature.

Consider the Most Important Facts

-9 CarlJ 22 July 2013 08:39PM

Followup to: Choose that which is most important to you

When you have written down what your own fundamental political values are, the next step is to get an understanding of all possible societies so you can see which one is best. And by best I mean that society which comes closest to meeting your criteria of what you find most valuable.

So, to construct a model for thinking about this issue, two things are needed. First, a list of all possible societies. And then some lists of those facts which would seem to rule out the largest number of possible societies as not being best; they would close in on the best society. The important point for this post regards the second list, but I still have a little discussion on the scope of the first list. If it seems obvious to, more or less, look at variants of economic systems, you can skip the next section and go straight to Facts which rule out and point toward certain societies.

A list of all possible societies – How long and exhaustive should it be?
I don't know if anyone has made such an exhaustive list. One might be constructed if one takes the list of economic systems (which regards laws, institutions, and how they are produced, and some culture) from Wikipedia and imagines that each of those systems may vary somewhat by different cultural norms. Not all cultural norms are compatible with every economic system (objectivist virtue ethics with central planning, say), but every system would seem to allow some variation. This means 54 broad economic systems with, let's just say, ten broad cultural variations of each. So there are approximately 500 types of societies that people discuss today to take into account.

There's an obvious limitation to all this, which is that for every type of system, that system may vary in five million ways regarding certain laws. The Nordic model, for example, has changed a lot during the last 25 years. And if you take each law and consider a society of this type to be able to switch it on or off, there are, from that period alone, enough laws that the total number of combinations exceeds five million. Many of the laws are, however, interdependent on one another, but there's still room for enormous configuration to "construct" different societies.

So, maybe there are around a billion to a trillion possible societies. Now, it seems clear that it is wrong to start by discussing which of two quite similar possible societies is better than the other – even if each society can have one million variations.1 That is because each is highly unlikely to be the best society.

If we can make one assumption, this will be much easier. And that is that societies which we today would consider more similar than others would produce more or less the same results relative to other societies. There are some areas where every society would change drastically with just a small change in that area, since it would lead to drastic change in the rest of the society. These areas are of great importance when we come to changing systems, but for now I assume these areas are too few in number to be of any importance.

With this assumption we can return to look at broad systems, because if societies of one category would seem to be better than other societies, we do not need to look more closely at that sort of society. If one type of mercantilistic society looks bad compared to a free-trade economy, any other type of the former are not worth looking at again.

Again, societies have these fundamental attributes: (i) some general rules regarding how their laws are structured, (ii) some definitive rules on how these rules should be changed, and (iii) cultural norms. This model is still somewhat limiting, however. It seems to assume that a society can have only one law and so on. But that problem disappears if we assume they can be different for different times, places and people. In all, this means we're back to some 500 possible societies.

Facts which rule out and point toward certain societies
Before considering any facts that have an impact on how you view a society, all societies should appear equally likely to be the best. This starting point may seem strange to some. It means that one should not dismiss even the policies of Nazi Germany out of hand. That is just the starting point, however. After one accumulates more and more data, some societies will appear less and less likely to be the one that best fulfills your criteria.

But, since you don't have time to read everything, it is necessary to construct a model of how humans (and other beings, for post-singularity issues2) function and interact, that first only considers the most important facts. This could be done in several ways.

One could begin by just following normal science and ask what general facts can explain most of observed behavior and then see what those facts would predict about all societies. That seems wise to do, in and of itself, because it forces the discussion (which will ensue with others who follow the same method) to be very methodical and well grounded in a rich theory. This can be called the general method.

But this path is not the quickest, since these general facts would probably not damn enough societies as unsuitable to your goals. A much faster way, though it paints a sketchier picture, is to list just those facts which rule out the most societies. This is quicker since it cuts straight to the chase. Such facts can be found by asking what assumptions certain systems rely on to work adequately, and then figuring out which facts disprove most of those assumptions. This can be called the specific method.

Then there are statements you are uncertain about but which, if true, would make it really obvious which society is best. Not facts exactly, then, but ideas you believe are worth learning more about. These potential facts should be the ones you are pondering, or those which are the root cause of many debates among people with similar goals. This can be called the search method.

Here's an illustration of all three methods. Except for the last, I write down my own views – though these are not my most important facts, but the 11th through 20th.

The general method:

  1. People tend to conform to popular opinion.
  2. Societies become wealthier with extended markets, more savings, gaining better knowledge, producing more advanced technology, peace, and institutions which support these activities.
  3. Man is not a perfectly rational creature, but he has the capacity to correct his mistakes.
  4. To wield power over others one generally needs superior military strength.
  5. Most people fear being ostracised.
  6. Ideologies are usually formed by the social structure, and the social structure can be changed by those ideologies.
  7. People tend to enjoy the company of those who they are similar to.
  8. On markets with freedom of entry, prices for reproducible goods tend to fall toward their cost of production.
  9. Producers who don't sell what the customers want tend to receive lower earnings.
  10. Most people are adept at spotting others' mistakes, but do quite poorly at noticing their own.

The specific method:

  1. All or almost all states today have tariffs to protect a certain industry or firm from competition.
  2. Generally, to know for sure if one possible society is better than another, one must be able to discuss their respective merits and demerits.
  3. The leaders of large governments tend to have less incentive to produce collective goods, rather than private goods, relative to leaders of smaller states.
  4. Most people in democratic states today give in to pressure to support policies without being able to know whether those policies actually serve their own good.
  5. Children can be indoctrinated to glorify mass-murderers and to want to join them as soldiers, asking nothing about the justice of their cause.
  6. People are disposed to believe that the society they grow up in is good.
  7. Most people are conservative; they dislike change.
  8. All centrally planned economies perform less well than market based economic systems.
  9. Firms tend to invest money in rent-seeking when it is profitable, until its expected return falls to that of normal investments.
  10. Generally, it is difficult for new facts to overturn one's ideology without a contrasting ideology at hand, and it is difficult to come up with a new one by oneself.

The search method:

  1. Political system X will best achieve my goals.
  2. Political system X leads to the best incentives for everyone to produce the most important collective goods.

Now, these facts are not simply facts. They are the tip of a theoretical iceberg; they are interpretations of reality. As such they will not, by themselves, explicate which systems they damn. Their meaning should be clear to oneself, but in a discussion with others it might be necessary to write down the points and their underlying theoretical standpoint explicitly.

In any case, if you've followed my steps you should have one candidate which seems to be best. This step might, of course, take years, but once you're confident you should next estimate how much political action towards that society might cost.

Notes
[1] It might seem that I'm implying that this is what most people do today when they discuss politics – which, by its nature, is usually limited to tweaking the existing system one small way here and there, instead of looking at larger changes. That implication is tempting to make, but most people seem to be engaged in ideological debate instead. I'd guess, anyway – I do not know for sure.

[2] They are too hard to predict, so I'll skip discussing them.

Choose that which is most important to you

-4 CarlJ 21 July 2013 10:38PM

Followup to: The Domain of Politics

To create your own political world view you need to know about societies and your own political goals/values. In this post I'll discuss the latter, and in the next post the former.

What sort of goals? Those which you wish to achieve for their own sake, and not because they simply are a means to an end – that is, those goals you value intrinsically. Or, if you believe that there exists only one ultimate goal or value, then think of those means which are not far removed from being the intrinsic goal. A birthday party, for instance, might be of merely instrumental value, but most would agree that it lies closer to the intrinsic value than, say, good tires. For the rest of the post I will assume that most people value a lot of things intrinsically, and by "values" I will mean intrinsic values.

So, I'd like to draw a line between values and that which achieves those values. The latter – political systems, or parts of them: institutions and laws – is what we're trying to identify, without presupposing what it is. This is not to say that such things cannot be valued for their own sake – one might put value on a system, possibly for aesthetic reasons – but those values should be disentangled from the other benefits a system produces.

With that in mind, you should now list all the things you value, in rank order. Ranking them is necessary because we live in a world of scarce resources: you won't necessarily achieve all your goals, so you will want to achieve those most important to you.

Now, what one values may change over time, so naturally what seems most important may also change. That which was at place #7 may move to #1 and vice versa; values change with new information and with changes in one's condition. That said, one's political values probably don't shift all that much. And even if they do, as long as you can't predict how they will change, you still need them in order to know which political system is good for you.

There are many ways to get a feel for what your most highly valued political goals are: introspection, discussion with friends, thinking through a number of thought experiments, reading the literature on what makes most people happy, listening to which experiences have been most horrible or pleasurable to others, and so on. In any case, here's a thought experiment to help you find your ideological preferences, should you need it:

A genie appears and it says that it will make ten wishes come true and then it will be gone forever. As this genie will make more than three wishes come true it has an added restriction: all wishes need to be political in nature. By luck you get to make the wishes – what do you wish for?

The important thing to remember is that, if you should lose one wish, you will be less sorry to give up your tenth wish than any other. And, should you lose two wishes, less sorry to give up the ninth than the eighth, and so on.

To make it clearer what I mean, I'll write down some of the things I value. Not my most preferred goals, but those in 11th to 20th place:

  1. Those who have trouble excelling in life should receive whatever help can be given so they may become better.
  2. If someone comes up with a previously unknown idea for improving the world, and if three knowledgeable and unrelated individuals believe the idea is very good, it should only take some hours for everyone to be able to know that this matter is of importance.
  3. Everyone should have access to some means of totally private communication.
  4. There should be no infringement on the right to develop one's mind, whatever technology one uses.
  5. All animals should, if the technology ever becomes available, be sufficiently mentally enhanced to be given the choice of whether or not to become as intelligent as humans (or more so).
  6. If it ever seems likely to be possible, we should strive towards creating a technology to resurrect the dead sooner rather than later.
  7. The civilization should be able to co-exist with other peaceful civilizations.
  8. There shouldn't be any ultimate certainty on the nature of existence or in any one reality tunnel; some balkanization of epistemology is good.
  9. Everyone who shares these values should know or learn the art of creating sustainable groups for collective action.
  10. The civilization which embodies these values should continue indefinitely.

EDIT: DanielLC notes that this simple ranking wouldn't give you any information on how valuable a 90% completion of one goal is relative to a 95% completion of another. That information will, however, be important when you have to choose between incremental steps towards several different goals.

To create a ranking which displays that information, imagine that each goal you have written down can be at one of five stages of progress – 0%, 25%, 50%, 75%, 100% – so that it is possible to be, say, 75% of the way to achieving any particular goal. For instance, the goal of private communication for everyone might be 50% completed if half the population has access to secret communication channels but the other half doesn't.

Next, assume each wish (in the scenario) is divided into five parts, one for each stage, and then rank every part again following the same rule. The result will look something like this:

  1. 100% of my first goal.
  2. 100% of my second goal.
  3. 100% of my third goal.
  4. 100% of my fourth goal.
  5. 75% of my first goal.
  6. 100% of my fifth goal.
  7. 50% of my first goal.
  8. 75% of my second goal.

(This was made purely for illustrative purposes. I haven't thought the matter through completely on how much I value these incremental parts.)
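For readers who want to make this procedure mechanical, here is a minimal sketch in Python. The goals and the utility numbers are hypothetical placeholders, not my actual values; in practice you would supply them from your own comparisons:

    # Toy sketch of the goal-stage ranking above. Goals and utility numbers
    # are invented placeholders, to be replaced by your own judgments.
    utilities = {
        ("private communication for everyone", 100): 10,
        ("private communication for everyone", 75): 8,
        ("private communication for everyone", 50): 5,
        ("no infringement on developing one's mind", 100): 9,
        ("no infringement on developing one's mind", 75): 6,
    }

    # Rank (goal, stage) pairs from most to least valued, as in the list above.
    ranking = sorted(utilities.items(), key=lambda kv: kv[1], reverse=True)
    for rank, ((goal, stage), value) in enumerate(ranking, start=1):
        print(f"{rank}. {stage}% of '{goal}' (value {value})")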

Another option is to do these finer-grained rankings on a gut level – just an imprecise feeling that, at some point, getting closer to goal A stops being as important as getting closer to goal B. This should be appropriate in areas where your uncertainty about your preferences is high, or where you don't care much which goal gets satisfied.

Next post: "Consider the Most Important Facts"

The Domain of Politics

0 CarlJ 21 July 2013 06:30PM

Followup to: How To Construct a Political Ideology

Related to: What Do We Mean By "Rationality"?

Politics is the art of the possible.

The word 'politics' is derived from the Greek word 'poly', meaning many, and the English word 'ticks', meaning blood sucking parasites.

Politics can be inspiring; now and in the past, several groups have organized to achieve wonderful ends, such as ending slavery, the subjugation of women, and the censorship of ideas. (None of these have, however, been brought to full completion yet.)

Politics can also be irritating – as when some politician or bureaucrat wastes money or lies in a particularly annoying way, or when the supporters of that politician or bureau praise the wonders of politics while ignoring all its bad parts. (Politics can also be horrible and devastating.)

Predictably, some of us who find politics today to be more irritating than inspiring will define politics somewhat differently. For some, politics is “a relic of a barbaric past” because politics always entails the threat of violence, and if we should ever find ourselves in a better state of affairs, politics will have had nothing to do with it. But many others would contend that wherever there's civic life there's politics – for some that's true even in a stateless society.

So, there's a little disagreement on the definition of politics. For my part, I will use the latter definition, which holds that politics deals with certain areas of life – civic life, elections, war, fund-raising for a cause, influencing cultural norms, establishing alliances, and so on. This is almost the same as the definition used by Wiktionary, but it seems to have a broader focus than the one used by Wikipedia. The goal of political action can then be said to be to act rationally in this domain, just as one would act rationally in any other domain.

That definition isn't too detailed, so let me try and give a fuller definition. I will do that by introducing a hypothetical scenario which explores some fundamental political strategies:

You live in a village by a river, and you are interested in building a bridge across it. But a fisherman also lives in the village, and building the bridge would make it difficult for him to fish while construction is under way. No one else will be directly affected by this project. You bring up the issue with the fisherman and ask what he thinks about all this.

The fisherman could then have two basic attitudes towards your project: it would  either be a concern for him or it wouldn't. If it is the latter, then you are not in any conflict, but have a (weak) harmonious relationship. All that remains is for you to build the bridge, which I'll discuss later.

First, let's assume that the fisherman opposes your plans – that he is even willing to physically prevent you from building the bridge. What can you do then, given that you still want to construct it? It seems only these six general strategies are available:

Persuasion  – You can try to convince the fisherman that it is in his best interest that the bridge be built, or that the construction will not disturb him so much as he believes. That is, convince him that the project will not become problematic for him.

Deceit – You can try to convince the fisherman that the construction won't be problematic for him, knowing this to be false.

Trade – You take his stated preferences, true or not, as given, and you offer him something in return for letting you build the bridge.

Threat – You declare that you will do something to him which he does not want, unless he lets you build the bridge.

Bypass – You ignore the fisherman and try to build the bridge without him knowing about it.

Force – You can try to physically stop him from preventing you from building the bridge. As in, hitting him on the head, poisoning him, or locking him up.

(There might be other strategies I've missed, but for now it's not necessary to know all fundamental strategies.)

Suppose now that the fisherman doesn't mind at all that you build the bridge. Well then, what happens  now?

Well, either you want the help of others in doing this or you don't. If not, there's no more politics. If you do want the help of others, and they are willing to help you, then everything is also settled. But if they do not want to help you right away, then you can use persuasion, deceit, trade, threats and force. Bypassing is not an option here, since that would be pointless.

Each option entails costs, and they could all be so costly that there's no point in going forward with any of them. In that case, it's time to do something else. On the other hand, the cost of each mode of action might be so low that every option is advantageous. In that case the only prudent move is to choose whichever has the lowest cost – the one which lets you pursue and reach the largest number of your most highly valued ends. The point is that an option not only has costs in money and time; it can also affect further actions in at least two ways. First, if the action fails, some or all of the other options might become very unlikely to succeed. Second, even if the action succeeds, it might have negative effects in other, non-political circumstances, making it less likely that you achieve your goals. Thus, the costs worth pondering are the opportunity costs of an action – the loss is what you otherwise could have achieved.

It seems that every political problem can be seen through the lens of this framework – both for, loosely speaking, dealing with conflicts and for producing values. What about upholding laws that support certain property rights? Well, you can persuade those who disagree with the norms to accept them, or force them. You can even bargain with them. What about helping those who are addicted to drugs? Same thing: you can either get their consent or choose to force them. Everything can ultimately be seen as a question of how you interact with others.

What does this tell us about the goal of political action? Well, suppose you need to interact with others regarding the bridge project (either with the fisherman or someone else). You will need to perceive the effects of each path and compare them, to choose whichever is most beneficial to you. After that has been solved, that should be the end of politics. But what does it mean to solve the problem? Well, what goals will be harder to reach if you choose to trick the fisherman into letting you build the bridge? That depends on a lot of circumstances, but in most villages, I'd guess, you lose any chance of being on really good terms with the fisherman, and you'd lose favour with most people in the village (if you weren't already dominant there). And what if you'd traded with others to get their help in constructing the bridge? You'd only lose the money, probably.
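To make the comparison concrete, here is a toy sketch in Python of weighing some of the strategies against each other. Every number is invented for illustration; real estimates would come from your own reading of the village:

    # Toy comparison of strategies by opportunity cost. All numbers here are
    # invented assumptions, not measured quantities.
    bridge_value = 10  # value of the finished bridge, in arbitrary units

    options = {
        #              (direct_cost, p_success, downstream_loss)
        "persuasion": (2, 0.5, 0),  # cheap and honest, but uncertain
        "trade":      (5, 0.9, 0),  # costs money, very likely to work
        "deceit":     (1, 0.7, 6),  # cheap now, costly to reputation later
        "force":      (3, 0.8, 9),  # likely to work, poisons future dealings
    }

    def net_value(cost, p, downstream):
        # Expected gain, minus direct cost and expected downstream losses.
        return p * bridge_value - cost - p * downstream

    best = max(options, key=lambda k: net_value(*options[k]))
    print(best)  # 'trade', with these particular numbers

With these made-up numbers trade comes out ahead – which matches the informal verdict above that trading costs you only the money.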

Now, maybe this doesn't feel like that hard a problem. But suppose that you will face a thousand such scenarios in your life, each intertwined with the others. That is, you may want to build a bridge, but you may also want to be friends with friends of the fisherman, be on good terms with everyone in the village, be secure in your property rights, help fund the building of a local town hall, change the current law on building restrictions, support the abolition of the bakers' guild, do a whole lot of ordinary things, and so on. Your choice in one area will have to fit with every other area – or at least with those you care about the most.

All of this calls for you to create a meta-strategy: a grand plan so that all those small plans are compatible with each other and produce the most benefit to you. How to make that plan and follow it through is the essence of political choice; it's an essential part of your goal in politics.

To know what plan to choose you need to know two things: (1) what your political values/goals are and (2) what sort of political system (society) would be best in promoting your goals.

If you know everything about your preferences, but nothing about societies, then you can't support any complex system without running the risk of supporting something which is totally detrimental to your values. If you, on the other hand, know everything about how societies function but are, somehow, unable to know what you really want, then you cannot decide what society to strive towards.

The next two posts will discuss these two issues - first goals and thereafter means.

Next post: "Choose that which is most important to you"
