Less Wrong is a community blog devoted to refining the art of human rationality.

[Link] Trying to make politics less irrational by cognitive bias-checking the US presidential debates

-4 Gleb_Tsipursky 22 October 2016 02:32AM

[Link] Politics Is Upstream of AI

4 iceman 28 September 2016 09:47PM

2016 LessWrong Diaspora Survey Analysis: Part Four (Politics, Calibration & Probability, Futurology, Charity & Effective Altruism)

10 ingres 10 September 2016 03:51AM


The LessWrong survey has a very involved section dedicated to politics. In previous analyses, the benefits of this weren't fully realized. In the 2016 analysis we can look not just at the political affiliation of a respondent, but at what beliefs are associated with a given affiliation. The charts below summarize most of the results.

Political Opinions By Political Affiliation

Miscellaneous Politics

There were also some other questions in this section which aren't covered by the above charts.


On a scale from 1 (not interested at all) to 5 (extremely interested), how would you describe your level of interest in politics?

1: 67 (2.182%)

2: 257 (8.371%)

3: 461 (15.016%)

4: 595 (19.381%)

5: 312 (10.163%)


Did you vote in your country's last major national election? (LW Turnout Versus General Election Turnout By Country)
Group Turnout
LessWrong 68.9%
Australia 91%
Brazil 78.90%
Britain 66.4%
Canada 68.3%
Finland 70.1%
France 79.48%
Germany 71.5%
India 66.3%
Israel 72%
New Zealand 77.90%
Russia 65.25%
United States 54.9%
Numbers taken from Wikipedia, accurate as of the last general election in each country listed at time of writing.


If you are an American, what party are you registered with?

Democratic Party: 358 (24.5%)

Republican Party: 72 (4.9%)

Libertarian Party: 26 (1.8%)

Other third party: 16 (1.1%)

Not registered for a party: 451 (30.8%)

(option for non-Americans who want an option): 541 (37.0%)

Calibration And Probability Questions

Calibration Questions

I just couldn't analyze these, sorry guys. I put many hours into trying to get them into a decent format I could even read and that sucked up an incredible amount of time. It's why this part of the survey took so long to get out. Thankfully another LessWrong user, Houshalter, has kindly done their own analysis.

All my calibration questions were meant to satisfy a few essential properties:

  1. They should be 'self-contained', i.e., something you can reasonably answer, or at least try to answer, with a 5th grade science education and normal life experience.
  2. They should, at least to a certain extent, be Fermi Estimable.
  3. They should progressively scale in difficulty so you can see whether somebody understands basic probability or not. (e.g., in an 'or' question, do they assign a probability of less than 50% to being right?)
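For anyone who does want to attempt it, the core of a calibration analysis is just binning stated probabilities and comparing each bin's average confidence against its actual hit rate. A minimal sketch in Python (the data here are invented toy pairs, not survey responses):

```python
from collections import defaultdict

def calibration_bins(predictions, bin_width=10):
    """Group (stated probability %, was_correct) pairs into bins and
    compare average confidence to actual accuracy within each bin."""
    bins = defaultdict(list)
    for prob, correct in predictions:
        bins[int(prob // bin_width) * bin_width].append((prob, correct))
    report = {}
    for lo in sorted(bins):
        pairs = bins[lo]
        avg_conf = sum(p for p, _ in pairs) / len(pairs)
        hit_rate = 100 * sum(c for _, c in pairs) / len(pairs)
        report[lo] = (round(avg_conf, 1), round(hit_rate, 1), len(pairs))
    return report

# Toy data: (stated probability of being right, actually right?)
sample = [(90, True), (90, True), (95, False), (60, True), (65, False)]
print(calibration_bins(sample))
```

A well-calibrated respondent's hit rate should roughly track their average confidence in every bin; large gaps indicate over- or under-confidence.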

At least one person requested a workbook, so I might write more in the future. I'll obviously write more for the survey.

Probability Questions

Question Mean Median Mode Stdev
Please give the obvious answer to this question, so I can automatically throw away all surveys that don't follow the rules: What is the probability of a fair coin coming up heads? 49.821 50.0 50.0 3.033
What is the probability that the Many Worlds interpretation of quantum mechanics is more or less correct? 44.599 50.0 50.0 29.193
What is the probability that non-human, non-Earthly intelligent life exists in the observable universe? 75.727 90.0 99.0 31.893
...in the Milky Way galaxy? 45.966 50.0 10.0 38.395
What is the probability that supernatural events (including God, ghosts, magic, etc) have occurred since the beginning of the universe? 13.575 1.0 1.0 27.576
What is the probability that there is a god, defined as a supernatural intelligent entity who created the universe? 15.474 1.0 1.0 27.891
What is the probability that any of humankind's revealed religions is more or less correct? 10.624 0.5 1.0 26.257
What is the probability that an average person cryonically frozen today will be successfully restored to life at some future time, conditional on no global catastrophe destroying civilization before then? 21.225 10.0 5.0 26.782
What is the probability that at least one person living at this moment will reach an age of one thousand years, conditional on no global catastrophe destroying civilization in that time? 25.263 10.0 1.0 30.510
What is the probability that our universe is a simulation? 25.256 10.0 50.0 28.404
What is the probability that significant global warming is occurring or will soon occur, and is primarily caused by human actions? 83.307 90.0 90.0 23.167
What is the probability that the human race will make it to 2100 without any catastrophe that wipes out more than 90% of humanity? 76.310 80.0 80.0 22.933


The probability questions are probably the area of the survey I put the least effort into. My plan for next year is to overhaul these sections entirely and try including some Tetlock-esque forecasting questions, a link to some advice on how to make good predictions, etc.


Futurology

This section got a bit of a facelift this year, with new cryonics, genetic engineering, and technological unemployment questions in addition to the previous years'.



Are you signed up for cryonics?

Yes - signed up or just finishing up paperwork: 48 (2.9%)

No - would like to sign up but unavailable in my area: 104 (6.3%)

No - would like to sign up but haven't gotten around to it: 180 (10.9%)

No - would like to sign up but can't afford it: 229 (13.8%)

No - still considering it: 557 (33.7%)

No - and do not want to sign up for cryonics: 468 (28.3%)

Never thought about it / don't understand: 68 (4.1%)


Do you think cryonics, as currently practiced by Alcor/Cryonics Institute will work?

Yes: 106 (6.6%)

Maybe: 1041 (64.4%)

No: 470 (29.1%)

Interestingly enough, of those who think it will work with enough confidence to say 'yes', only 14 are actually signed up for cryonics.

sqlite> select count(*) from data where CryonicsNow="Yes" and Cryonics="Yes - signed up or just finishing up paperwork";


sqlite> select count(*) from data where CryonicsNow="Yes" and (Cryonics="Yes - signed up or just finishing up paperwork" OR Cryonics="No - would like to sign up but unavailable in my area" OR Cryonics="No - would like to sign up but haven't gotten around to it" OR Cryonics="No - would like to sign up but can't afford it");
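For anyone who wants to replicate counts like these, the same queries can be run from Python's sqlite3 module. A sketch against a toy stand-in for the survey's data table (the table and column names come from the queries above; the rows are invented, not survey data):

```python
import sqlite3

# Toy stand-in for the survey's "data" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (CryonicsNow TEXT, Cryonics TEXT)")
conn.executemany("INSERT INTO data VALUES (?, ?)", [
    ("Yes", "Yes - signed up or just finishing up paperwork"),
    ("Yes", "No - still considering it"),
    ("Maybe", "No - would like to sign up but can't afford it"),
])

# Believers ("CryonicsNow" = Yes) who are actually signed up:
signed_up, = conn.execute(
    'SELECT count(*) FROM data WHERE CryonicsNow="Yes" '
    'AND Cryonics="Yes - signed up or just finishing up paperwork"'
).fetchone()
print(signed_up)
```

On the three toy rows above this prints 1; on the real survey data the first query reportedly returns 14.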



Do you think cryonics works in principle?

Yes: 802 (49.3%)

Maybe: 701 (43.1%)

No: 125 (7.7%)

LessWrongers seem to be very bullish on the underlying physics of cryonics even if they're not as enthusiastic about current methods in use.

The Brain Preservation Foundation also did an analysis of cryonics responses to the LessWrong Survey.



By what year do you think the Singularity will occur? Answer such that you think, conditional on the Singularity occurring, there is an even chance of the Singularity falling before or after this year. If you think a singularity is so unlikely you don't even want to condition on it, leave this question blank.

Mean: 8.110300081581755e+16

Median: 2080.0

Mode: 2100.0

Stdev: 2.847858859055733e+18

I didn't bother to filter out the silly answers for this.

Obviously it's a bit hard to see without filtering out the uber-large answers, but the median doesn't seem to have changed much from the 2014 survey.
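Filtering, had I done it, would only take a couple of lines: restrict answers to a plausible window before computing summary statistics. A sketch (the [2016, 3000] window and the toy answers are arbitrary choices of mine, not the survey's):

```python
from statistics import mean, median

def filtered_stats(years, lo=2016, hi=3000):
    """Drop implausible Singularity-year answers, then summarize.
    The [lo, hi] plausibility window is an arbitrary choice."""
    kept = [y for y in years if lo <= y <= hi]
    return round(mean(kept), 1), median(kept)

answers = [2045, 2080, 2100, 2103, 10 ** 18, -1]  # toy data with junk mixed in
print(filtered_stats(answers))
```

The median barely moves under this kind of filtering, which is why it's the number to trust here rather than the mean.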

Genetic Engineering


Would you ever consider having your child genetically modified for any reason?

Yes: 1552 (95.921%)

No: 66 (4.079%)

Well that's fairly overwhelming.


Would you be willing to have your child genetically modified to prevent them from getting an inheritable disease?

Yes: 1387 (85.5%)

Depends on the disease: 207 (12.8%)

No: 28 (1.7%)

I find it amusing how the strict "No" group shrinks considerably after this question.


Would you be willing to have your child genetically modified for improvement purposes? (eg. To heighten their intelligence or reduce their risk of schizophrenia.)

Yes : 0 (0.0%)

Maybe a little: 176 (10.9%)

Depends on the strength of the improvements: 262 (16.2%)

No: 84 (5.2%)

Yes, I know the 'Yes' option is bugged; I don't know what causes the bug, and despite my best efforts I couldn't track it down. There is also an issue here where 'reduce their risk of schizophrenia' is offered as an example of improvement, which might confuse people, but the actual science cuts closer to that than to a clean separation between disease risk and 'improvement'.


This question is too important to just not have an answer to, so I'll compute it manually. Unfortunately I can't easily remove the 'excluded' entries so that we're dealing with the exact same distribution, but only 13 or so responses are filtered out anyway.

sqlite> select count(*) from data where GeneticImprovement="Yes";
1100

>>> 1100 + 176 + 262 + 84
1622
>>> round(1100 / 1622, 3)
0.678

67.8% are willing to genetically engineer their children for improvements.
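Since this is a point estimate from 1622 responses, the sampling error is small. A sketch that recomputes the share together with a normal-approximation 95% confidence interval (the interval is my addition, not part of the original analysis):

```python
from math import sqrt

def share_with_ci(k, n, z=1.96):
    """Point estimate and normal-approximation 95% CI for a
    proportion, all expressed as percentages."""
    p = k / n
    half = z * sqrt(p * (1 - p) / n)
    return round(100 * p, 1), round(100 * (p - half), 1), round(100 * (p + half), 1)

# 1100 "Yes" out of 1100 + 176 + 262 + 84 = 1622 responses (from above)
print(share_with_ci(1100, 1622))
```

The interval comes out to roughly 65.5% to 70.1%, so the headline 67.8% figure is not an artifact of sample size.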


Would you be willing to have your child genetically modified for cosmetic reasons? (eg. To make them taller or have a certain eye color.)

Yes: 500 (31.0%)

Maybe a little: 381 (23.6%)

Depends on the strength of the improvements: 277 (17.2%)

No: 455 (28.2%)

These numbers go about how you would expect, with people being progressively less interested the more 'shallow' a genetic change is seen as.


What's your overall opinion of other people genetically modifying their children for disease prevention purposes?

Positive: 1177 (71.7%)

Mostly Positive: 311 (19.0%)

No strong opinion: 112 (6.8%)

Mostly Negative: 29 (1.8%)

Negative: 12 (0.7%)


What's your overall opinion of other people genetically modifying their children for improvement purposes?

Positive: 737 (44.9%)

Mostly Positive: 482 (29.4%)

No strong opinion: 273 (16.6%)

Mostly Negative: 111 (6.8%)

Negative: 38 (2.3%)


What's your overall opinion of other people genetically modifying their children for cosmetic reasons?

Positive: 291 (17.7%)

Mostly Positive: 290 (17.7%)

No strong opinion: 576 (35.1%)

Mostly Negative: 328 (20.0%)

Negative: 157 (9.6%)

All three of these seem largely consistent with people's personal preferences about modification. Were I so inclined, I could do a deeper analysis that takes survey respondents row by row and looks at the correlation between preferences for one's own children and preferences for others'.

Technological Unemployment


Do you think the Luddite's Fallacy is an actual fallacy?

Yes: 443 (30.936%)

No: 989 (69.064%)

We can use this as an overall measure of worry about technological unemployment, which would seem to be high among the LW demographic.


By what year do you think the majority of people in your country will have trouble finding employment for automation related reasons? If you think this is something that will never happen leave this question blank.

Mean: 2102.9713740458014

Median: 2050.0

Mode: 2050.0

Stdev: 1180.2342850727339

Question is flawed because you can't distinguish answers of "never happen" from people who just didn't see it.

Interesting question that would be fun to take a look at in comparison to the estimates for the singularity.


Do you think the "end of work" would be a good thing?

Yes: 1238 (81.287%)

No: 285 (18.713%)

Fairly overwhelming consensus, but with a significant minority of people who have a dissenting opinion.


If machines end all or almost all employment, what are your biggest worries? Pick two.

Question Count Percent
People will just idle about in destructive ways 513 16.71%
People need work to be fulfilled and if we eliminate work we'll all feel deep existential angst 543 17.687%
The rich are going to take all the resources for themselves and leave the rest of us to starve or live in poverty 1066 34.723%
The machines won't need us, and we'll starve to death or be otherwise liquidated 416 13.55%
Question is flawed because it demanded the user 'pick two' instead of up to two.

The plurality of worries are about elites who refuse to share their wealth.

Existential Risk


Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?

Nuclear war: +4.800% 326 (20.6%)

Asteroid strike: -0.200% 64 (4.1%)

Unfriendly AI: +1.000% 271 (17.2%)

Nanotech / grey goo: -2.000% 18 (1.1%)

Pandemic (natural): +0.100% 120 (7.6%)

Pandemic (bioengineered): +1.900% 355 (22.5%)

Environmental collapse (including global warming): +1.500% 252 (16.0%)

Economic / political collapse: -1.400% 136 (8.6%)

Other: 35 (2.217%)

Significantly more people worried about Nuclear War than last year. Effect of new respondents, or geopolitical situation? Who knows.

Charity And Effective Altruism

Charitable Giving


What is your approximate annual income in US dollars (non-Americans: convert at www.xe.com)? Obviously you don't need to answer this question if you don't want to. Please don't include commas or dollar signs.

Sum: 66054140.47384

Mean: 64569.052271593355

Median: 40000.0

Mode: 30000.0

Stdev: 107297.53606321265


How much money, in number of dollars, have you donated to charity over the past year? (non-Americans: convert to dollars at http://www.xe.com/ ). Please don't include commas or dollar signs in your answer. For example, 4000

Sum: 2389900.6530000004

Mean: 2914.5129914634144

Median: 353.0

Mode: 100.0

Stdev: 9471.962766896671


How much money have you donated to charities aiming to reduce existential risk (other than MIRI/CFAR) in the past year?

Sum: 169300.89

Mean: 1991.7751764705883

Median: 200.0

Mode: 100.0

Stdev: 9219.941506342007


How much have you donated in US dollars to the following charities in the past year? (Non-americans: convert to dollars at http://www.xe.com/) Please don't include commas or dollar signs in your answer. Options starting with "any" aren't the name of a charity but a category of charity.

Question Sum Mean Median Mode Stdev
Against Malaria Foundation 483935.027 1905.256 300.0 None 7216.020
Schistosomiasis Control Initiative 47908.0 840.491 200.0 1000.0 1618.785
Deworm the World Initiative 28820.0 565.098 150.0 500.0 1432.712
GiveDirectly 154410.177 1429.723 450.0 50.0 3472.082
Any kind of animal rights charity 83130.47 1093.821 154.235 500.0 2313.493
Any kind of bug rights charity 1083.0 270.75 157.5 None 353.396
Machine Intelligence Research Institute 141792.5 1417.925 100.0 100.0 5370.485
Any charity combating nuclear existential risk 491.0 81.833 75.0 100.0 68.060
Any charity combating global warming 13012.0 245.509 100.0 10.0 365.542
Center For Applied Rationality 127101.0 3177.525 150.0 100.0 12969.096
Strategies for Engineered Negligible Senescence Research Foundation 9429.0 554.647 100.0 20.0 1156.431
Wikipedia 12765.5 53.189 20.0 10.0 126.444
Internet Archive 2975.04 80.406 30.0 50.0 173.791
Any campaign for political office 38443.99 366.133 50.0 50.0 1374.305
Other 564890.46 1661.442 200.0 100.0 4670.805
"Bug Rights" charity was supposed to be a troll fakeout but apparently...

This table is interesting given the recent debates about how much money certain causes are 'taking up' in Effective Altruism.

Effective Altruism


Do you follow any dietary restrictions related to animal products?

Yes, I am vegan: 54 (3.4%)

Yes, I am vegetarian: 158 (10.0%)

Yes, I restrict meat some other way (pescetarian, flexitarian, try to only eat ethically sourced meat): 375 (23.7%)

No: 996 (62.9%)


Do you know what Effective Altruism is?

Yes: 1562 (89.3%)

No but I've heard of it: 114 (6.5%)

No: 74 (4.2%)


Do you self-identify as an Effective Altruist?

Yes: 665 (39.233%)

No: 1030 (60.767%)

The distribution given by the 2014 survey results does not sum to one, so it's difficult to determine whether Effective Altruism's membership actually went up, but taking the numbers at face value it experienced an 11.13% increase in membership.


Do you participate in the Effective Altruism community?

Yes: 314 (18.427%)

No: 1390 (81.573%)

Same issue as the last question; taking the numbers at face value, community participation went up by 5.727%.


Has Effective Altruism caused you to make donations you otherwise wouldn't?

Yes: 666 (39.269%)

No: 1030 (60.731%)


Effective Altruist Anxiety


Have you ever had any kind of moral anxiety over Effective Altruism?

Yes: 501 (29.6%)

Yes but only because I worry about everything: 184 (10.9%)

No: 1008 (59.5%)

There's an ongoing debate in Effective Altruism about what kind of rhetorical strategy is best for getting people on board and whether Effective Altruism is causing people significant moral anxiety.

It certainly appears to be. But is moral anxiety effective? Let's look:

Sample Size: 244
Average amount of money donated by people anxious about EA who aren't EAs: 257.5409836065574

Sample Size: 679
Average amount of money donated by people who aren't anxious about EA who aren't EAs: 479.7501384388807

Sample Size: 249
Average amount of money donated by EAs anxious about EA: 1841.5292369477913

Sample Size: 314
Average amount of money donated by EAs not anxious about EA: 1837.8248407643312

It seems fairly conclusive that anxiety is not a good way to get people to donate more than they already are, but is it a good way to get people to become Effective Altruists?

Sample Size: 1685
P(Effective Altruist): 0.3940652818991098
P(EA Anxiety): 0.29554896142433235
P(Effective Altruist | EA Anxiety): 0.5
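These three numbers also pin down the reverse conditional via Bayes' rule, i.e. how common EA anxiety is among Effective Altruists:

```python
# Figures from the survey sample above (n = 1685)
p_ea = 0.3940652818991098        # P(Effective Altruist)
p_anx = 0.29554896142433235      # P(EA Anxiety)
p_ea_given_anx = 0.5             # P(Effective Altruist | EA Anxiety)

# Bayes' rule: P(anxiety | EA) = P(EA | anxiety) * P(anxiety) / P(EA)
p_anx_given_ea = p_ea_given_anx * p_anx / p_ea
print(round(p_anx_given_ea, 3))  # ≈ 0.375: about 37.5% of EAs report EA anxiety
```

So anxious people are somewhat overrepresented among Effective Altruists (37.5% versus 29.6% in the sample overall), consistent with the "maybe" below.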

Maybe. There is of course an argument to be made that sufficient good done by causing people anxiety outweighs feeding into people's scrupulosity, but it can be discussed after I get through explaining it on the phone to wealthy PR-conscious donors and telling the local all-kill shelter where I want my shipment of dead kittens.


What's your overall opinion of Effective Altruism?

Positive: 809 (47.6%)

Mostly Positive: 535 (31.5%)

No strong opinion: 258 (15.2%)

Mostly Negative: 75 (4.4%)

Negative: 24 (1.4%)

EA appears to be doing a pretty good job of getting people to like them.

Interesting Tables

Charity Donations By Political Affiliation
Affiliation Income Charity Contributions % Income Donated To Charity Total Survey Charity % Sample Size
Anarchist 1677900.0 72386.0 4.314% 3.004% 50
Communist 298700.0 19190.0 6.425% 0.796% 13
Conservative 1963000.04 62945.04 3.207% 2.612% 38
Futarchist 1497494.1099999999 166254.0 11.102% 6.899% 31
Left-Libertarian 9681635.613839999 416084.0 4.298% 17.266% 245
Libertarian 11698523.0 214101.0 1.83% 8.885% 190
Moderate 3225475.0 90518.0 2.806% 3.756% 67
Neoreactionary 1383976.0 30890.0 2.232% 1.282% 28
Objectivist 399000.0 1310.0 0.328% 0.054% 10
Other 3150618.0 85272.0 2.707% 3.539% 132
Pragmatist 5087007.609999999 266836.0 5.245% 11.073% 131
Progressive 8455500.440000001 368742.78 4.361% 15.302% 217
Social Democrat 8000266.54 218052.5 2.726% 9.049% 237
Socialist 2621693.66 78484.0 2.994% 3.257% 126
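As a sanity check, the "% Income Donated To Charity" column is just contributions divided by income. Recomputing a couple of rows (figures copied from the table above):

```python
def pct_donated(income, donations):
    """Recompute the '% Income Donated To Charity' column."""
    return round(100 * donations / income, 3)

# (income, charity contributions) for two rows of the table:
print(pct_donated(298700.0, 19190.0))  # Communist row
print(pct_donated(399000.0, 1310.0))   # Objectivist row
```

Both match the table (6.425% and 0.328% respectively), so the column is computed the obvious way.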

Number Of Effective Altruists In The Diaspora Communities
Community Count % In Community Sample Size
LessWrong 136 38.418% 354
LessWrong Meetups 109 50.463% 216
LessWrong Facebook Group 83 48.256% 172
LessWrong Slack 22 39.286% 56
SlateStarCodex 343 40.98% 837
Rationalist Tumblr 175 49.716% 352
Rationalist Facebook 89 58.94% 151
Rationalist Twitter 24 40.0% 60
Effective Altruism Hub 86 86.869% 99
Good Judgement(TM) Open 23 74.194% 31
PredictionBook 31 51.667% 60
Hacker News 91 35.968% 253
#lesswrong on freenode 19 24.675% 77
#slatestarcodex on freenode 9 24.324% 37
#chapelperilous on freenode 2 18.182% 11
/r/rational 117 42.545% 275
/r/HPMOR 110 47.414% 232
/r/SlateStarCodex 93 37.959% 245
One or more private 'rationalist' groups 91 47.15% 193

Effective Altruist Donations By Political Affiliation
Affiliation EA Income EA Charity Sample Size
Anarchist 761000.0 57500.0 18
Futarchist 559850.0 114830.0 15
Left-Libertarian 5332856.0 361975.0 112
Libertarian 2725390.0 114732.0 53
Moderate 583247.0 56495.0 22
Other 1428978.0 69950.0 49
Pragmatist 1442211.0 43780.0 43
Progressive 4004097.0 304337.78 107
Social Democrat 3423487.45 149199.0 93
Socialist 678360.0 34751.0 41

Why we may elect our new AI overlords

2 Deku-shrub 04 September 2016 01:07AM

In which I examine some of the latest developments in automated fact checking and prediction markets for policies, and propose we get rich voting for robot politicians.


Paid research assistant position focusing on artificial intelligence and existential risk

7 crmflynn 02 May 2016 06:27PM

Yale Assistant Professor of Political Science Allan Dafoe is seeking Research Assistants for a project on the political dimensions of the existential risks posed by advanced artificial intelligence. The project will involve exploring issues related to grand strategy and international politics, reviewing possibilities for social scientific research in this area, and institution building. Familiarity with international relations, existential risk, Effective Altruism, and/or artificial intelligence are a plus but not necessary. The project is done in collaboration with the Future of Humanity Institute, located in the Faculty of Philosophy at the University of Oxford. There are additional career opportunities in this area, including in the coming academic year and in the future at Yale, Oxford, and elsewhere. If interested in the position, please email allan.dafoe@yale.edu with a copy of your CV, a writing sample, an unofficial copy of your transcript, and a short (200-500 word) statement of interest. Work can be done remotely, though being located in New Haven, CT or Oxford, UK is a plus.

[Link] Salon piece analyzing Donald Trump's appeal using rationality

-13 Gleb_Tsipursky 24 April 2016 04:36AM

I'm curious about your thoughts on my piece in Salon analyzing Trump's emotional appeal using rationality-informed ideas. My primary aim is using the Trump hook to get readers to consider the broader role of Systems 1 and 2 in politics, the backfire effect, wishful thinking, emotional intelligence, etc.



Suppose HBD is True

-12 OrphanWilde 21 April 2016 01:34PM

Suppose, for the purposes of argument, that HBD is true. (HBD, human biodiversity, is the claim that distinct populations of humans exist and have substantial genetic variance which accounts for some difference in average intelligence from population to population; I will be avoiding the word "race" here insomuch as possible.) Suppose also that all its proponents are correct in blaming the politicization of science for burying this information.

I seek to ask the more interesting question: Would it matter?

1. Societal Ramifications of HBD: Eugenics

So, we now have some kind of nice, tidy explanation for different characters among different groups of people.  Okay.  We have a theory.  It has explanatory power.  What can we do with it?

Unless you're willing to commit to eugenics of some kind (be it restricting reproduction or genetic alteration), not much of anything.  And even if you're willing to commit to eugenics, HBD doesn't add anything: it doesn't actually change any of the arguments for eugenics.  Below-average people exist in every population group, and insofar as we regard below-average people as a problem, the genetic population they happen to belong to doesn't matter.  If the point is to raise the average, the population group doesn't matter.  If the point is to reduce the number of socially dependent individuals, the population group doesn't matter.

Worse, insofar as we use HBD as a determinant in eugenics, our eugenics are less effective.  HBD says your population group has a relationship with intelligence; but if we're interested in intelligence, we have no reason to look at your population group, because we can measure intelligence more directly.  There's no reason to use the proxy of population group if we're interested in intelligence, and indeed, every reason not to; it's significantly less accurate and politically and historically problematic.

Yet still worse for our eugenics advocate, insomuch as population groups do have significant genetic diversity, using population groups instead of direct measurements of intelligence is far more likely to cause disease transmission risks.  (Genetic diversity is very important for population-level disease resistance.  Just look at bananas.)

2. Social Ramifications of HBD: Social Assistance

Let's suppose we're not interested in eugenics.  Let's suppose we're interested in maximizing our societal outcomes.

Well, again, HBD doesn't offer us anything new.  We can already test intelligence, and insofar as HBD is accurate, intelligence tests are more accurate.  So if we aim to streamline society, we don't need HBD to do so.  HBD might offer an argument against affirmative action, in that we have different base expectations for different populations, but affirmative action already takes different base expectations into account (if you live in a city of 50% black people and 50% white people, but 10% of local lawyers are black, your local law firm isn't required to have 50% black lawyers, but 10%).  We might desire to adjust the way we engage in affirmative action, insofar as affirmative action might not lead to the best results, but if you're interested in the best results, you can argue on the basis of best results without needing HBD.

I have yet to encounter someone who argues HBD who also argues we should do something with regard to HELPING PEOPLE on the basis of this, but that might actually be a more significant argument: If there are populations of people who are going to fall behind, that might be a good argument to provide additional resources to these populations of people, particularly if there are geographic correspondences - that is, if HBD is true, and if population groups are geographically segregated, individuals in these population groups will suffer disproportionately relative to their merits, because they don't have the local geographic social capital that equal-advantage people of other population groups would have.  (An average person in a poor region will do worse than an average person in a rich region.)  So HBD provides an argument for desegregation.

Curiously, HBD advocates have a tendency to argue that segregation would lead to the best outcome.  I'd welcome arguments that concentrating an -absence- of social capital is a good idea.

3. Scientific Ramifications of HBD

Well, if HBD were true, it would mean science is politicized.  This might be news to somebody, I guess.

4. Political Ramifications of HBD

We live in a meritocracy.  It's actually not an ideal thing, contrary to the views of some people, because it results in a systematic merit segregation that has completely deprived the lower classes of intellectual resources; talk to older people sometime, who remember, when they worked in the coal mines (or whatever), the one guy who you could trust to be able to answer your questions and provide advice.  Our meritocracy has advanced to the point where we are systematically stripping everybody of value from the lower classes and redistributing them to the middle and upper classes.

HBD might be meaningful here.  Insofar as people take HBD to its absurd extremes, it might actually result in an -improvement- for some lower-class groups, because if we stop taking all the intelligent people out of poor areas, there will still be intelligent people in those poor areas.  But racism as a force of utilitarian good isn't something I care to explore in any great detail, mostly because if I'm wrong it would be a very bad thing, and also because none of its advocates actually suggest anything like this, being more interested in promoting segregation than desegregation.

It doesn't change much else, either.  With HBD we continually run into the same problem - as a theory, it's the product of measuring individual differences, and as a theory, it doesn't add anything to our information that we don't already have with the individual differences.

5. The Big Problem: Individuality

Which is the crucial fault with HBD, iterated multiple times here, in multiple ways: It literally doesn't matter if HBD is true.  All the information it -might- provide us with, we can get with much more accuracy using the same tests we might use to arrive at HBD.  Anything we might want to do with the idea, we can do -better- without it.

HBD might predict we get fewer IQ-115, IQ-130, and IQ-145 people from particular population groups, but it doesn't actually rule them out.  Insofar as this kind of information is useful, it's -more- useful to have more accurate information.  HBD doesn't say "Black people are stupid", instead it says "The average IQ of black people is slightly lower than the average IQ of white people".  But since "black people" isn't a thing that exists, but rather an abstract concept referring to a group of "black persons", and HBD doesn't make any predictions at the individual level we couldn't more accurately obtain through listening to a person speak for five seconds, it doesn't actually make any useful predictions.  It adds literally nothing to our model of the world.

It's not the most important idea of the century.  It's not important at all.

If you think it's true - okay.  What does it -add- to your understanding of the world?  What useful predictions does it make?  How does it permit you to improve society?  I've heard people insist it's this majorly important idea that the scientific and political establishment is suppressing.  I'd like to introduce you to the aether, another idea that had explanatory power but made no useful predictions, and which was abandoned - not because anybody thought it was wrong, but because it didn't even rise to the level of wrong, because it was useless.

And that's what HBD is.  A useless idea.

And even worse, it's a useless idea that's hopelessly politicized.

[Link] Op-Ed on Brussels Attacks

-6 Gleb_Tsipursky 02 April 2016 05:38PM

Trigger warning: politics is hard mode.

"How do you make America safer from terrorists" is the title of my op-ed published in the Sun Sentinel, a very prominent newspaper in Florida, one of the swingiest of the swing states in the US presidential election, and the swing state with the most votes. The maximum length of the op-ed was 450 words, and it was significantly edited by the editor, so it doesn't convey the full message I wanted with all the nuances, but such is life. My primary goal with the piece was to convey methods of thinking more rationally about politics, such as using probabilistic thinking, evaluating the full consequences of our actions, and avoiding attention bias. I used the proposal to heavily police Muslim neighborhoods as a case study. Hope this helps Floridians think more rationally and raises the sanity waterline regarding politics!



EDIT: To be totally clear, I used guesstimates for the numbers I suggested. Following Yvain/Scott Alexander's advice, I prefer to use guesstimates rather than vague statements.

Is altruistic deception really necessary? Social activism and the free market

3 PhilGoetz 26 February 2016 06:38AM

I've said before that social reform often seems to require lying.  Only one-sided narratives offering simple solutions motivate humans to act, so reformers manufacture one-sided narratives such as we find in Marxism or radical feminism, which inspire action through indignation.  Suppose you tell someone, "Here's an important problem, but it's difficult and complicated.  If we do X and Y, then after five years, I think we'd have a 40% chance of causing a 15% reduction in symptoms."  They'd probably think they had something better to do.

But the examples I used in that previous post were all arguably bad social reforms: Christianity, Russian communism, and Cuban communism.

The argument that people need to be deceived into social reform assumes either that they're stupid, or that there's some game-theoretic reason why social reform that's very worthwhile to society as a whole isn't worthwhile to any individual in society.

Is that true?  Or are people correct and justified in not making sudden changes until there's a clear problem and a clear solution to it?

continue reading »

The Art of Lawfare and Litigation strategy

-4 Clarity 17 December 2015 02:34PM

Bertrand Russell, well aware of the health risks of smoking, defended his addiction in a videotaped interview. See if you can spot his fallacy!

Today on SBS (a radio channel in Australia) I heard reporters breaking the news that a Nature article reports that cancer is largely due to choices. I was shocked by what appeared to be a gross violation of cultural norms around blaming victims. I wanted to investigate further, since science reporting is notoriously inaccurate.

The BBC reports:

Earlier this year, researchers sparked a debate after suggesting two-thirds of cancer types were down to luck rather than factors such as smoking.

The new study, in the journal Nature, used four approaches to conclude only 10-30% of cancers were down to the way the body naturally functions or "luck".

"They can't smoke and say it's bad luck if they have cancer."

-Dr Yusuf Hannun, the director of Stony Brook


The BBC article is roughly concordant with the SBS report. 

I've had a fairly simple relationship with cigarettes. I've smoked others' cigarettes a few times, while drinking. I bought my first cigarette to try soon after I came of age and discarded the rest of the packet. One of my favourite memories is trying a vanilla-flavoured cigar. I still feel tempted to try it again whenever I smell a nice scent, or think about that moment. Though now, I regularly reject offers to go to local venues and smoke hookah. Even after my first cigarette, I felt the tug of nicotine and tobacco. Still, I'm unusually sensitive to even the mildest addictive substances, so that doesn't surprise me in retrospect. What does surprise me is that society is starting to take a ubiquitous but increasingly undeniable health issue seriously, despite its deep entanglement with long-standing ways of doing things, political ideologues, individual addictions, addiction-driven political behaviour, and shareholders' pockets.

Though the truth claim of the article isn't that surprising. The dangers of smoking are publicised everywhere. Emphasis mine:

13 die every day in Victoria as a result of smoking.

Tobacco use (which includes cigarettes, cigars, pipes, snuff, chewing tobacco) is the leading preventable cause of death and illness in our country. It causes more deaths annually than those killed by AIDS, alcohol, automobile accidents, murders, suicides, drugs and fires combined.

So I decided to learn more about the relationship between society and Big Tobacco, and between government and Big Tobacco, to see what other people interested in influencing public policy and public health can learn (effective altruism policy analytics, take note!) about policy tractability in surprising places.

Here's what might make for tractable public policy for public health interventions

Proof of concept

Governments are great at successfully suing the shit out of tobacco. And, big tobacco takes it like a champ:

It started with individual US states experimenting with suing Big Tobacco. Eventually only a couple of states hadn't done it. Big Tobacco and all those attorneys general gathered and arranged a huge settlement that resulted in the disestablishment of several shill research institutes supporting Big Tobacco, and big payouts to sponsor anti-smoking advocacy groups (which seems politically unethical, but consequentially good; I suppose that's a different story). However, what's important to note here is the experimentation within US states culminating in the legitimacy of normative lawfare. It's called 'diffusion theory' and is described here.

Wait wait wait. I know what you're thinking, non-US LessWrongers: another US-centric analysis that isn't too transportable. No. I'm not American in any sense; it's just that the US seems to be a point of diffusion. What's happening regarding marijuana in the US now seems to mirror this in some sense, though it's ironically pro-smoking. That illustrates the cause-neutrality of this phenomenon.

That settlement wasn't the end of the lawfare:

On August 17, 2006, a U.S. district judge issued a landmark opinion in the government's case against Big Tobacco, finding that tobacco companies had violated civil racketeering laws and defrauded consumers by lying about the health risks of smoking.

In a 1,653-page ruling, the judge stated that the tobacco industry had deceived the American public by concealing the addictive nature of nicotine and had targeted youth in order to get them hooked on cigarettes for life. (Appeals are still pending.)

Victims who ask for help

I also stumbled upon some smokers' attitudes to smoking and their, well, seemingly vexatious attitudes to Big Tobacco when looking up tobacco lawsuits. Here's a copy of the comments section on one website. It's really heartbreaking. It's a small sample size, but note their education too, suggesting a socio-economic effect. Note that these comments were posted publicly and are blatant cries for help. This suggests political will at a grassroots level that is as yet under-catered for by services and/or political action. That's a powerful thing, perhaps: visible need in public forums, addressed to those in the relevant space. Note that they commented on a class-action website.



Note some of the language:


"I feel like I'm being tortured"

You don't see that kind of language used in any effective altruism branded publications.


Somewhat famous documents exposing the tobacco industry's internal motivations and dodginess seem to be quoted everywhere on websites documenting and justifying lawfare against the tobacco industry. Public health and the personal dangers of smoking don't seem to have been the big catalyst; a villainous enemy was. I'm reminded of how the Stop the Boats campaign villainised people smugglers instead of speaking of the potential to save the lives of refugees who fall overboard from shoddy vessels. I think of the Open Borders campaigners associated with GiveWell's Open Philanthropy Project: the project is perceived as just about the most intractable policy prospect around (I'd say a moratorium on AI research is up there), yet at the same time it identifies no villain in the picture. That's not entirely surprising. I recall the hate I received when I suggested that people should consider prostituting themselves for effective altruism, or soliciting donations from the porn industry, whose donors struggle to donate since many charities, particularly religious ones, refuse to accept their donations. Likewise, it's hard to get rid of encultured perceptions of what's good and what's bad, rather than enumerating (or 'checking', as Eliezer writes in the Sequences) the consequences.

Relative merit

This is something effective altruists are already doing.

William Savedoff and Albert Alwang recently identified taxes on tobacco as, “the single most cost-effective way to save lives in developing countries” (2015, p.1).


Tobacco control programs often pursue many of these aims at once. However, raising taxes appears to be particularly cost-effective — e.g., raising taxes costs $3 - $70 per DALY avoided (Savedoff and Alwang, p.5; Ranson et al. 2002, p.311) — so I will focus solely on taxes. I will also focus only on low and middle income countries (LMICs) because that is where the problem is worst and where taxes can do the most good most cost-effectively.


But current trends need not continue. We can prevent deaths from tobacco use. Tobacco taxation is a well-tested and effective means of decreasing the prevalence of smoking—it gets people to stop and prevents others from starting. The reason is that smokers are responsive to price increases, provided that the real price goes up enough.


Even if these numbers are off by a factor of 2 or 3, tobacco taxation appears to be on par with the most effective interventions identified by GiveWell and Giving What We Can. For example, GiveWell estimates that AMF can prevent a death for $3340 by providing bed nets to prevent malaria and estimates the cost of schistosomiasis deworming at $29 - $71 per DALY.


There are a few reasons to balk at recommending tobacco tax advocacy to those aiming to do the most good with their donations, time, and careers.


  • Tobacco taxes may not be a tractable issue
  • Tobacco taxes may be a “crowded” cause area
  • Unanswered questions about the empirical basis of cost-effectiveness estimates
  • There may not be a charity to donate to
Smoking is very harmful and very common.  Globally, 21% of people over 15 smoke (WHO GHO)




Attributing public responsibility AND incentivising independently private interest in a cause

The Single Best Health Policy in the World: Tobacco Taxes

The single most cost-effective way to save lives in developing countries is in the hands of developing countries themselves: raising tobacco taxes. In fact, raising tobacco taxes is better than cost-effective. It saves lives while increasing revenues and saving poor households money when their members quit smoking.



Tobacco lawsuits can be hard to win but if you have been injured because of tobacco or smoking or secondary smoke exposure, you should contact an attorney as soon as possible.

  If you have lung cancer and are now, or were formerly, a smoker or used tobacco products, you may have a claim under the product liability laws. You should contact an experienced product liability attorney or a tobacco lawsuit attorney as soon as possible because a statute of limitations could apply. 


There's a whole bunch of legal literature like this: http://heinonline.org/HOL/LandingPage?handle=hein.journals/clqv86&div=45&id=&page=

that I don't have the background to search for and interpret. So, if I'm missing important things, perhaps it's attributable to that. Point them out please.

So that's my analysis: plausible modifiable variables that influence the tractability of the public health policy initiative: 

(1) Attributing public responsibility AND incentivising independently private interest in a cause

(2) Relative merit

(3) Villains

(4) Victims that ask for help

(5) Low scale proof of concept

Remember, lawfare isn't just the domain of governments. Here's an example of non-government lawfare for public health. Governments are often just better resourced than individuals; individuals need groups to advocate on their behalf. Perhaps that's a direction the Open Philanthropy Project could take.

I want to finish by soliciting answers to the following question, which is posed to smokers in a recurring survey by a tobacco control body:

Do you support or oppose the government suing tobacco companies to recover health care costs caused by tobacco use?

Now, there may be some 'reverse causation' at play in why tobacco control has been so politically effective: BECAUSE it's such a good cause, it's a low-hanging fruit that's already being picked.

What's the case for or against this?

The case for its cause selection: tobacco control

Importance: high

tobacco is the leading preventable cause of death and disease in both the world (see: http://www.who.int/nmh/publications/fact_sheet_tobacco_en.pdf) and Australia (see: http://www.cancer.org.au/policy-and-advocacy/position-statements/smoking-and-tobacco-control/)

‘Tobacco smoking causes 20% of cancer deaths in Australia, making it the highest individual cancer risk factor. Smoking is a known cause of 16 different cancer types and is the main cause of Australia’s deadliest cancer, lung cancer. Smoking is responsible for 88% of lung cancer deaths in men and 75% of lung cancer cases in women in Australia.’

Tractable: high

The World Health Organization’s Framework Convention on Tobacco Control (FCTC) was the first public health treaty ever negotiated.

Based on private information, the balance of healthcare costs against tax revenues according to health advocates, compared to treasury estimates in Australia, may have been relevant to Australia’s leadership in tobacco regulation. That submission may or may not be adequate in complexity (i.e. taking into account, for instance, the impact of reduced lifespans on reduced pension payouts). There is a good article about the behavioural economics of tobacco regulation here (http://baselinescenario.com/2011/03/22/incentives-dont-work/)

Room for advocacy: low

There are many hundreds of consumer support and advocacy groups, and cancer charities across Australia.

Room for employment: low?

Room for consulting: high


The rigour of analysis and the achievements themselves in the Cancer Council of Australia annual review are underwhelming, as is the Cancer Council of Victoria’s annual report. There is a better organised body of evidence relating to their impact on their wiki pages about effective interventions and policy priorities. At a glance, there appears to be room for more quantitative, methodologically rigorous and independent evaluation. I will be looking at GiveWell to see what recommendations can be translated. I will keep records of my findings to formulate draft guidelines for advising organisations in the Cancer Councils’ position, which I estimate, by vague memory of GiveWell’s claims, are in the majority in the philanthropic space.

[Link] A rational response to the Paris attacks and ISIS

-1 Gleb_Tsipursky 23 November 2015 01:47AM

Here's my op-ed that uses long-term orientation, probabilistic thinking, numeracy, consider the alternative, reaching our actual goals, avoiding intuitive emotional reactions and attention bias, and other rationality techniques to suggest more rational responses to the Paris attacks and the ISIS threat. It's published in the Sunday edition of The Plain Dealer​, a major newspaper (16th in the US). This is part of my broader project, Intentional Insights, of conveying rational thinking, including about politics, to a broad audience to raise the sanity waterline.

Political Debiasing and the Political Bias Test

8 Stefan_Schubert 11 September 2015 07:04PM

Cross-posted from the EA forum. I asked for questions for this test here on LW about a year ago. Thanks to those who contributed.

Rationally, your political values shouldn't affect your factual beliefs. Nevertheless, that often happens. Many factual issues are politically controversial - typically because the true answer makes a certain political course of action more plausible - and on those issues, many partisans tend to disregard politically uncomfortable evidence.

This sort of political bias has been demonstrated in a large number of psychological studies. For instance, Yale professor Dan Kahan and his collaborators showed in a fascinating experiment that on politically controversial questions, people are quite likely to commit mathematical mistakes that help them retain their beliefs, but much less likely to commit mistakes that would force them to give up those beliefs. Examples like this abound in the literature.

Political bias is likely to be a major cause of misguided policies in democracies (even the main one, according to economist Bryan Caplan). If they don’t have any special reason not to, people without special knowledge defer to the scientific consensus on technical issues. Thus, they do not interfere with the experts, who normally get things right. On politically controversial issues, however, they often let their political bias win over science and evidence, which means they’ll end up with false beliefs. And in a democracy, voters holding systematically false beliefs more often than not translates into misguided policy.

Can we reduce this kind of political bias? I’m fairly hopeful. One reason for optimism is that debiasing generally seems to be possible to at least some extent. This optimism of mine was strengthened by participating in a CFAR workshop last year. Political bias seems not to be fundamentally different from other kinds of biases and should thus be reducible too. But obviously one could argue against this view of mine. I’m happy to discuss this issue further.

Another reason for optimism is that it seems that the level of political bias is actually lower today than it was historically. People are better at judging politically controversial issues in a detached, scientific way today than they were in, say, the 14th century. This shows that progress is possible. There seems to be no reason to believe it couldn’t continue.

A third reason for optimism is that there seems to be a strong norm against political bias. Few people are consciously and intentionally politically biased. Instead most people seem to believe themselves to be politically rational, and hold that as a very important value (or so I believe). They fail to see their own biases due to the bias blind spot (which disables us from seeing our own biases).

Thus if you could somehow make it salient to people that they are biased, they would actually want to change. And if others saw how biased they are, the incentives to debias would be even stronger.

There are many ways in which you could make political bias salient. For instance, you could meticulously go through political debaters’ arguments and point out fallacies, like I have done on my blog. I will post more about that later. Here I want to focus on another method, however, namely a political bias test which I have constructed with ClearerThinking, run by EA-member Spencer Greenberg. Since learning how the test works might make you answer a bit differently, I will not explain how the test works here, but instead refer either to the explanatory sections of the test, or to Jess Whittlestone’s (also an EA member) Vox.com-article.

Our hope is of course that people taking the test might start thinking more both about their own biases, and about the problem of political bias in general. We want this important topic to be discussed more. Our test is produced for the American market, but hopefully, it could work as a generic template for bias tests in other countries (akin to the Political Compass or Voting Advice Applications).

Here is a guide for making new bias tests (where the main criticisms of our test are also discussed). Also, we hope that the test could inspire academic psychologists and political scientists to construct full-blown scientific political bias tests.

This does not mean, however, that we think that such bias tests in themselves will get rid of the problem of political bias. We need to attack the problem of political bias from many other angles as well.

Pro-Con-lists of arguments and onesidedness points

3 Stefan_Schubert 21 August 2015 02:15PM

Follow-up to Reverse Engineering of Belief Structures

Pro-con-lists of arguments such as ProCon.org and BalancedPolitics.org fill a useful purpose. They give an overview over complex debates, and arguably foster nuance. My network for evidence-based policy is currently in the process of constructing a similar site in Swedish.


I'm thinking it might be interesting to add more features to such a site. You could let people create a profile on the site. Then you would let them fill in whether they agree or disagree with the theses under discussion (cannabis legalization, GM food legalization, etc.), and also whether they agree or disagree with the different arguments for and against these theses (alternatively, you could let them rate the arguments from 1-5).

Once you have this data, you could use them to give people different kinds of statistics. The most straightforward statistic would be their degree of "onesidedness". If you think that all of the arguments for the theses you believe in are good, and all the arguments against them are bad, then you're defined as onesided. If you, on the other hand, believe that some of your own side's arguments are bad, whereas some of the opponents' arguments are good, you're defined as not being onesided. (The exact mathematical function you would choose could be discussed.)
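As a concrete illustration of one such mathematical function, here is a minimal sketch in Python. The function name, the 1-5 rating scale, and the exact normalization are all my own assumptions for illustration, not a design the post commits to:

```python
# Hypothetical "onesidedness" score, assuming each user rates every
# argument from 1 (bad) to 5 (good). Names and the exact formula are
# illustrative only.

def onesidedness(pro_ratings, con_ratings, supports_thesis):
    """Return a score in [0, 1]: 1 = fully one-sided, 0 = balanced.

    pro_ratings / con_ratings: lists of 1-5 ratings the user gave to
    arguments for and against the thesis.
    supports_thesis: True if the user agrees with the thesis.
    """
    own, other = ((pro_ratings, con_ratings) if supports_thesis
                  else (con_ratings, pro_ratings))
    mean = lambda xs: sum(xs) / len(xs)
    # Gap between "own side" and "other side" ratings ranges over [-4, 4];
    # normalize to [0, 1], treating negative gaps as fully balanced.
    gap = mean(own) - mean(other)
    return max(0.0, gap / 4.0)

# A supporter who loves every pro argument and dismisses every con one:
print(onesidedness([5, 5, 4], [1, 1, 2], supports_thesis=True))
# A supporter who rates both sides identically scores 0.0:
print(onesidedness([3, 4], [3, 4], supports_thesis=True))
```

Many alternative functions would serve (e.g. counting only extreme ratings, or weighting by argument quality); the point is just that the score should rise as someone's ratings track sides rather than arguments.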

Once you've told people how one-sided they are, according to the test, you would discuss what might explain onesidedness. My hunch is that the most plausible explanation normally is different kinds of bias. Instead of reviewing new arguments impartially, people treat arguments for their views more leniently than arguments against their views. Hence they end up being onesided, according to the test.

There are other possible explanations, though. One is that all of the arguments against the thesis in question actually are bad. That might happen occasionally, but I don't think it's very common. As Eliezer Yudkowsky says in "Policy Debates Should Not Appear One-Sided":

On questions of simple fact (for example, whether Earthly life arose by natural selection) there's a legitimate expectation that the argument should be a one-sided battle; the facts themselves are either one way or another, and the so-called "balance of evidence" should reflect this.  Indeed, under the Bayesian definition of evidence, "strong evidence" is just that sort of evidence which we only expect to find on one side of an argument.

But there is no reason for complex actions with many consequences to exhibit this onesidedness property.  

Instead, the reason why people end up with one-sided beliefs is bias, Yudkowsky argues:

Why do people seem to want their policy debates to be one-sided?

Politics is the mind-killer.  Arguments are soldiers.  Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it's like stabbing your soldiers in the back.  If you abide within that pattern, policy debates will also appear one-sided to you—the costs and drawbacks of your favored policy are enemy soldiers, to be attacked by any means necessary.

Especially if you're consistently one-sided in lots of different debates, it's hard to see that any other hypothesis besides bias is plausible. It depends a bit on what kinds of arguments you include in the list, though. In our lists we haven't really checked the quality of the arguments (our purpose is to summarize the debate, rather than to judge it), but you could also do that, of course.

My hope is that such a test would make people more aware both of their own biases, and of the problem of political bias in general. I'm thinking that is the first step towards debiasing. I've also constructed a political bias test with similar methods and purposes together with ClearerThinking, which should be released soon.


You could also add other features to a pro-con-list. For instance, you could classify arguments in different ways: ad hominem-arguments, consequentialist arguments, rights-based arguments, etc. (Some arguments might be hard to classify, and then you just wouldn't do that. You wouldn't necessarily have to classify every argument.) Using this info, you could give people a profile: e.g., what kinds of arguments do they find most persuasive? That could make them reflect more on what kinds of arguments really are valid.

You could also combine these two features. For instance, some people might accept ad hominem-arguments when they support their views, but not when they contradict them. That would make your use of ad hominem-arguments onesided.


Yet another feature that could be added is a standard political compass. Since people fill in which theses they believe in (cannabis legalization, GM food legalization, etc.), you could calculate which party is closest to them, based on the parties' stances on these issues. That could potentially make the test more attractive to take.
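The party-matching step above could be sketched as follows. This is a minimal illustration under my own assumptions (stances encoded as +1/-1, agreement counted per thesis, made-up party names), not the Political Compass's actual method:

```python
# Hypothetical "closest party" matcher. Stances are +1 (for) / -1 (against)
# on each thesis; party names and stance data are invented for illustration.

def closest_party(user_stances, party_stances):
    """Return the party agreeing with the user on the most theses."""
    def agreement(party):
        return sum(1 for thesis, stance in user_stances.items()
                   if party_stances[party].get(thesis) == stance)
    return max(party_stances, key=agreement)

parties = {
    "Party A": {"cannabis legalization": +1, "GM food legalization": -1},
    "Party B": {"cannabis legalization": -1, "GM food legalization": -1},
}
user = {"cannabis legalization": +1, "GM food legalization": -1}
print(closest_party(user, parties))  # Party A agrees on both theses
```

A real implementation would probably weight theses by how much the user cares, and handle ties and parties with no recorded stance; the simple agreement count is just the clearest starting point.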


Suggestions of more possible features are welcome, as well as general comments - especially about implementation.

[POLITICS] Jihadism and a new kind of existential threat

-5 MrMind 25 March 2015 09:37AM

Politics is the mind-killer. Politics IS really the mind-killer. Please meditate on this until politics flows over you like butter on hot teflon, and your neurons stops fibrillating and resume their normal operations.


I've always found it silly that LW, one of the best and most focused groups of rationalists on the web, isn't able to talk evenly about politics. It's true that we are still human, but can't we just make an effort to be calm and level-headed? I think we can. Does gradual exposure work on groups, too? Maybe a little bit of effort combined with a little bit of exposure will work as a vaccine.
And maybe tomorrow a beautiful naked valkyrie will bring me to utopia on her flying unicorn...
Anyway, I want to try. Let's see what happens.


Two recent events have prompted me to make this post: I'm reading "The Rise of the Islamic State" by Patrick Cockburn, which I think does a good job of presenting fairly the very recent history surrounding ISIS, and the terrorist attack in Tunis by the same group, which resulted in 18 foreigners killed.
I believe that their presence in the region is now definitive: they control an area wider than Great Britain, with a population of over six million, not counting the territories controlled by affiliate groups like Boko Haram. Their influence is also expanding, and the attack in Tunis shows that this entity is not going to stay confined within the borders of Syria and Iraq.
It may well be the case that in the next ten years or so, this will be an international entity bringing ideas and mores predating the Middle Ages back to the Mediterranean Sea.

A new kind of existential threat

To a mildly rational person, the conflict fueling the rise of the Islamic State, namely the doctrinal differences between Sunni and Shia Islam, is the worst kind of Blue/Green division: a separation that causes hundreds of billions of dollars (read that again) to be wasted trying to kill each other. But here it is, and the world must deal with it.
In comparison, Democrats and Republicans are so close that they could be mistaken for Aumann agreeing.
I fear that ISIS is bringing a new kind of existential threat: one where it is not the existence of humankind that is at risk, but the existence of the idea of rationality.
The funny thing is that while people can be extremely irrational, they can still work on technology and discover new things. Fundamentalism has never stopped a country from achieving technological progress: think of the wonderful skyscrapers and green patches in the desert of the Arab Emirates, or the nuclear weapons of Pakistan. So it might well be the case that in the future some scientist will start a seed AI believing that Allah will guide it to evolve in the best way. But it also might be that in the future, African, Asian and maybe European (gasp!) rationalists will be hunted down and killed like rats.
It might be the very meme of rationality that is erased from existence.


I'll close with a bunch of questions, both strictly and loosely related. Mainly, I'm asking you to refrain from proposing a solution. Let's assess the situation first.

  • Do you think that the Islamic State is an entity which will vanish in the future or not?
  • Do you think that their particularly violent brand of jihadism is a worse menace to the sanity waterline than, say, other kinds of religious movements, past or present?
  • Do you buy the idea that fundamentalism can be coupled with technological advancement, so that the future will present us with Islamic AIs?
  • Do you think that the very same idea of rationality can be the subject of existential risk?
  • What do neoreactionaries think of the Islamic State? After all, it's an exemplary case of the reactionaries in those areas winning big. I know it's only a surface comparison; I'm sincerely curious what a neoreactionary thinks of the situation.

Live long and prosper.

A bit of word-dissolving in political discussion

2 [deleted] 07 December 2014 05:05PM

I found Scott Alexander's steelmanning of the NRx critique to be an interesting, even persuasive critique of modern progressivism, having not been exposed to this movement prior to today. However, I am also equally confused at the jump from "modern liberal democracies are flawed" to "restore the divine right of kings!" I've always hated the quip "democracy is the worst form of government, except for all the others" (we've yet tried), but I think it applies here.

-- Mark Friedenbach

Of course, with the prompting to state my own thoughts, I simply had to go and start typing them out.  The following contains obvious traces of my own political leanings and philosophy (in short summary: if "Cthulhu only swims left", then I AM CTHULHU... at least until someone explains to me what a Great Old One is doing out of R'lyeh and in West Coast-flavored American politics), but those traces should be taken as evidence of what I believe rather than statements about it.

Because what I was actually trying to talk about, is rationality in politics.  Because in fact, while it is hard, while it is spiders, all the normal techniques work on it.  There is only one real Cardinal Sin of Attempting to be Rational in Politics, and it is the following argument, stated in generic form that I might capture it from the ether and bury it: "You only believe what you believe for political reasons!"  It does not matter if those "reasons" are signaling, privilege, hegemony, or having an invisible devil on your shoulder whispering into your bloody ear: to impugn someone else's epistemology entirely at the meta-level without saying a thing against their object-level claims is anti-epistemology.

Now, on to the ranting!  The following are more-or-less a semi-random collection of tips I vomited out for trying to deal with politics rationally.  I hope they help.  This is a Discussion post because Mark said that might be a good idea.

  1. Dissolve "democracy", and not just in the philosophical sense, but in the sense that there have been many different kinds of actually existing democracies.  There are always multiple object-level implementations of any meta-level idea, and most political ideas are sufficiently abstract to count as meta-level.  Even if, for purposes of a thought experiment, you find yourself saying, "I WILL ONLY EVER CONSIDER SYSTEMS THAT COUNT AS DEMOCRACY ACCORDING TO MY INTUITIVE DEMOCRACY-P() PREDICATE!", one can easily debate whether a mixed-member proportional Parliament performs better than a district-based bicameral Congress, or whether a pure Westminster system beats them both, or whether a Presidential system works better, or whatever.  Particular institutional designs yield particular institutional behaviors, and successfully inducing complex generalizations across large categories of institutional designs requires large amounts of evidence -- just as it does in any other form of hierarchical probabilistic reasoning.
  2. Dissolve words like "democracy", "capitalism", "socialism", and "government" in the philosophical sense, and ask: what are the terminal goals democracy serves?  How much do we support those goals, and how much do current democratic systems suffer approximation error by forcing our terminal goals to fit inside the hypothesis space our actual institutions instantiate?  For however much we do support those goals, why do we shape these particular institutions to serve those goals, and not other institutions? For all values of X, mah nishtana ha-X hazeh mikol ha-X-im? is a fundamental question of correct reasoning.  (Asking the question of why we instantiate particular institutions in particular places, when one believes in democratic states, is the core issue of democratic socialism, and I would indeed count myself a democratic socialist.  But you get different answers and inferences if you ask about schools or churches, don't you?)
  3. Learn first to explicitly identify yourself with a political "tribe", and next to consider political ideas individually, as questions of fact and value subject to investigation via epistemology and moral epistemology, rather than treating politics as "tribal".  Tribalism is the mind-killer: keeping your own explicit tribal identification in mind helps you notice when you're being tribalist, and helps you distinguish your own tribe's customs from universal truths -- both aids to your political rationality.  And yes, while politics has always been at least a little tribal, the particular form the tribes take varies through time and space: the division of society into a "blue tribe" and a "red tribe" (as oft-described by Yvain on Slate Star Codex), for example, is peculiar to late-20th-century and early-21st-century USA.  Those colors didn't even come into usage until the 2000 Presidential election, and hadn't firmly solidified as describing seemingly separate nationalities until 2004!  Other countries, and other times, have significantly different arrangements of tribes, so if you don't learn to distinguish between ideas and tribes, you'll not only fail at political rationality, you'll give yourself severe culture shock the first time you go abroad.
    1. General rule: you often think things are general rules of the world not because you have the large amount of evidence necessary to reason that they really are, but because you've seen so few alternatives that your subjective distribution over models contains only one or two models, both coarse-grained.  Unquestioned assumptions always feel like universal truths from the inside!
  4. Learn to check political ideas by looking at the actually-existing implementations, including the ones you currently oppose -- think of yourself as bloody Sauron if you have to!  This works, since most political ideas are not particularly original.  Commons trusts exist, for example, the "movement" supporting them just wants to scale them up to cover all society's important common assets rather than just tracts of land donated by philanthropists.  Universal health care exists in many countries.  Monarchy and dictatorship exist in many countries.  Religious rule exists in many countries.  Free tertiary education exists in some countries, and has previously existed in more.  Non-free but subsidized tertiary education exists in many countries.  Running the state off oil revenue has been tried in many countries.  Centrally-planned economies have been tried in many countries.  And it's damn well easier to compare "Canadian health-care" to "American health-care" to "Chinese health-care", all sampled in 2014, using fact-based policy studies, than to argue about the Visions of Human Life represented by each (the welfare state, the Company Man, and the Lone Fox, let's say) -- which of course assumes consequentialism.  In fact, I should issue a much stronger warning here: argumentation is an utterly unreliable guide to truth compared to data, and all these meta-level political conclusions require vast amounts of object-level data to induce correct causal models of the world that allow for proper planning and policy.
    1. This means that while the Soviet Union is not evidence for the total failure of "socialism" as I use the word, that's because I define socialism as a larger category of possible economies that strictly contains centralized state planning -- centralized state planning really was, by and large, a total fucking failure.  But there's a rationality lesson here: in politics, all opponents of an idea will have their own definition for it, but the supporters will only have one.  Learn to identify political terminology with the definitions advanced by supporters: these definitions might contain applause lights, but at least they pick out one single spot in policy-space or society-space (or, hopefully, a reasonably small subset of that space), while opponents don't generally agree on which precise point in policy-space or society-space they're actually attacking (because they're all opposed for their own reasons and thus not coordinating with each-other).
    2. This also means that if someone wants to talk about monarchies that rule by religious right, or even about absolute monarchies in general, they do have to account for the behavior of the Arab monarchies today, for example.  Or if they want to talk about religious rule in general (which very few do, to my knowledge, but hey, let's go with it), they actually do have to account for the behavior of Da3esh/ISIS.  Of course, they might do so by endorsing such regimes, just as some members of Western Communist Parties endorsed the Soviet Union -- and this can happen by lack of knowledge, by failure of rationality, or by difference of goals.
    3. And then of course, there are the complications of the real world: in the real world, neither perfect steelman-level central planning nor perfect steelman-level markets have ever been implemented, anywhere, with the result that once upon a time, the Soviet economy was allocatively efficient and prices in capitalist West Germany were just as bad at reflecting relative scarcities as those in centrally-planned East Germany.  The real advantage of market systems has ended up being the autonomy of firms, not allocative optimality (and that's being argued, right there, in the single most left-wing magazine I know of!).  Which leads us to repeat the warning: correct conclusions are induced from real-world data, not argued from a priori principles that usually turn out to be wildly mis-emphasized if not entirely wrong.
  5. Learn to notice when otherwise uninformed people are adopting political ideas as attire to gain status by joining a fashionable cause.  Keep in mind that what constitutes "fashionable" depends on the joiner's own place in society, not on your opinions about them.  For some people, things you and I find low-status (certain clothes or haircuts) are, in fact, high-status.  See Yvain's "Republicans are Douchebags" post for an example in a Western context: names that the American Red Tribe considers solid and respectable are viewed by the American Blue Tribe as "douchebag names".
  6. A heuristic that tends to immunize against certain failures of political rationality: if an argument does not base itself at all in facts external to itself or to the listener, but instead concentrates entirely on reinterpreting evidence, then it is probably either an argument about definitions, or sheer nonsense.  This is related to my comments on hierarchical reasoning above, and also to the general sense in which trying to refute an object-level claim by meta-level argumentation is not even wrong, but in fact anti-epistemology.
  7. A further heuristic, usable on actual electioneering campaigns the world over: whenever someone says "values", he is lying, and you should reach for your gun.  The word "values" is the single most overused, drained, meaningless word in politics.  It is a normative pronoun: it directs the listener to fill in warm fuzzy things here without concentrating the speaker and the listener on the same point in policy-space at all.  All over the world, politicians routinely seek power on phrases like "I have values", or "My opponent has no values", or "our values" or "our $TRIBE values", or "$APPLAUSE_LIGHT values".  Just cross those phrases and their entire containing sentences out with a big black marker, and then see what the speaker is actually saying.  Sometimes, if you're lucky (ie: voting for a Democrat), they're saying absolutely nothing.  Often, however, the word "values" means, "Good thing I'm here to tell you that you want this brand new oppressive/exploitative power elite, since you didn't even know!"
  8. As mentioned above, be very, very sure about what ethical framework you're working within before having a political discussion.  A consequentialist and a virtue-ethicist will often take completely different policy positions on, say, healthcare, and have absolutely nothing to talk about with each-other.  The consequentialist can point out the utilitarian gains of universal single-payer care, and the virtue-ethicist can point out the incentive structure of corporate-sponsored group plans for promoting hard work and loyalty to employers, but they are fundamentally talking past each-other.
    1. Often, the core matter of politics is how to trade off between ethical ideals that are otherwise left talking past each-other, because society has finite material resources, human morals are very complex, and real policies have unintended consequences.  For example, if we enact Victorian-style "poor laws" that penalize poverty for virtue-ethical reasons, the proponents of those laws need to be held accountable for accepting the unintended consequences of those laws, including higher crime rates, a less educated workforce, etc.  (This is a broad point in favor of consequentialism: a rational consequentialist always considers consequences, intended and unintended, or he fails at consequentialism.  A deontologist or virtue-ethicist, on the other hand, has license from his own ethics algorithm to not care about unintended consequences at all, provided the rules get followed or the rules or rulers are virtuous.)
  9. Almost all policies can be enacted more effectively with state power, and almost no policies can "take over the world" by sheer superiority of the idea all by themselves.  Demanding that a successful policy should "take over the world" by itself, as everyone naturally turns to the One True Path, is intellectually dishonest, and so is demanding that a policy should be maximally effective in miniature (when tried without the state, or in a small state, or in a weak state) before it is justified for the state to experiment with it.  Remember: the overwhelming majority of journals and conferences in professional science still employ frequentist statistics rather than Bayesianism, and this is 20 years after the PC revolution and the World Wide Web, and 40 years after computers became widespread in universities.  Human beings are utility-satisficing, adaptation-executing creatures with mostly-unknown utility functions: expecting them to adopt more effective policies quickly by mere effectiveness of the policy is downright unrealistic.
  10. The Appeal to Preconceptions is probably the single Darkest form of Dark Arts, and it's used everywhere in politics.  When someone says something to you that "stands to reason" or "sounds right", which genuinely seems quite plausible, actually, but without actually providing evidence, you need to interrogate your own beliefs and find the Equivalent Sample Size of the informative prior generating that subjective plausibility before you let yourself get talked into anything.  This applies triply in philosophy.
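The "Equivalent Sample Size" mentioned in the last point can be made concrete with a toy conjugate-Bayesian sketch: a Beta(a, b) prior over a proportion behaves as if you had already observed a + b pseudo-data points, so asking "how many observations' worth of evidence is my sense of plausibility actually worth?" is one way to audit a claim that merely "sounds right". All the numbers below are hypothetical, chosen purely for illustration.

```python
# Toy illustration of "Equivalent Sample Size" (ESS): a Beta(a, b) prior
# behaves as if you had already seen a + b pseudo-observations.
# All numbers here are made up, for illustration only.

def beta_posterior(prior_a, prior_b, successes, failures):
    """Conjugate update: Beta prior + binomial data -> Beta posterior."""
    return prior_a + successes, prior_b + failures

def posterior_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# A claim that merely "sounds right" might correspond to a weak prior,
# while a claim backed by real experience corresponds to a strong one.
weak_a, weak_b = 2, 2          # ESS = 4 pseudo-observations
strong_a, strong_b = 50, 50    # ESS = 100 pseudo-observations

# The same evidence (7 successes, 3 failures) moves the two priors
# very differently.
wa, wb = beta_posterior(weak_a, weak_b, 7, 3)
sa, sb = beta_posterior(strong_a, strong_b, 7, 3)

print(posterior_mean(wa, wb))   # weak prior is dragged toward the data
print(posterior_mean(sa, sb))   # strong prior barely moves
```

The point of the exercise: if a speaker's plausible-sounding argument would only survive as a weak prior, a modest amount of real-world data should be allowed to move you a long way.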


Is arguing worth it? If so, when and when not? Also, how do I become less arrogant?

9 27chaos 27 November 2014 09:28PM

I've had several political arguments about That Which Must Not Be Named in the past few days with people of a wide variety of... strong opinions. I'm rather doubtful I've changed anyone's mind about anything, but I've spent a lot of time trying to do so. I also seem to have offended one person I know rather severely. Also, even if I have managed to change someone's mind about something through argument, it feels as though someone will end up having to argue with them later down the line when the next controversy happens.

It's very discouraging to feel this way. It is frustrating when making an argument is taken as a reason for personal attack. And it's annoying to me to feel like I'm being forced into something by the disapproval of others. I'm tempted to just retreat from democratic engagement entirely. But there are disadvantages to this; for example, it makes it easier to maintain irrational beliefs if you never talk to people who disagree with you.

I think a big part of the problem is that I have an irrational alief that makes me feel like my opinions are uniquely valuable and important to share with others. I do think I'm smarter, more moderate, and more creative than most. But the feeling's magnitude and influence over my behavior is far greater than what's justified by the facts.

How do I destroy this feeling? Indulging it satisfies some competitive urges of mine and boosts my self-esteem. But I think it's bad overall despite this, because it makes evaluating the social consequences of my choices more difficult. It's like a small addiction, and I have no idea how to get over it.

Does anyone else here have an opinion on any of this? Advice from your own lives, perhaps?

Three methods of attaining change

7 Stefan_Schubert 16 August 2014 03:38PM

Say that you want to change some social or political institution: the educational system, the monetary system, research on AGI safety, or what not. When trying to reach this goal, you may use one of the following broad strategies (or some combination of them):

1) You may directly try to lobby (i.e. influence) politicians to implement this change, or try to influence voters to vote for parties that promise to implement these changes. 

2) You may try to build an alternative system and hope that it eventually becomes so popular that it replaces the existing system.

3) You may try to develop tools that a) appeal to users of existing systems and b) whose widespread use is bound to change those existing systems.

Let me give some examples of what I mean. Trying to persuade politicians that we should replace conventional currencies with a private currency or, for that matter, starting a pro-Bitcoin party, falls under 1), whereas starting a private currency and hoping that it spreads falls under 2). (This post was inspired by a great comment by Gunnar Zarncke on precisely this topic. I take it that he was there talking of strategy 2.) Similarly, trying to lobby politicians to reform academia falls under 1) whereas starting new research institutions which use new and hopefully more effective methods falls under 2). I take it that this is what, e.g. Leverage Research is trying to do, in part. Similarly, libertarians who vote for Ron Paul are taking the first course, while at least one possible motivation for the Seasteading Institute is to construct an alternative system that proves to be more efficient than existing governments.

Efficient Voting Advice Applications (VAA's), which advise you on how to vote on the basis of your views on different policy matters, can be an example of 3) (they are discussed here). Suppose that voters started to use them on a grand scale. This could potentially force politicians to adhere very closely to the views of the voters on each particular issue, since a politician who failed to do so would stand little chance of winning. This may or may not be a good thing, but the point is that it would be a change that would not be caused by lobbying of politicians or by building an alternative system, but simply by constructing a tool whose widespread use could change the existing system.
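As a hedged sketch of the kind of tool being described (not the algorithm of any actual VAA), the core of such an application is just an agreement score between a voter's stated positions and each party's. All parties, issues, and positions below are invented for illustration.

```python
# Hypothetical sketch of a Voting Advice Application's core scoring step:
# rank parties by how closely their stated positions match the voter's,
# issue by issue. Positions are on a -2 (strongly against) .. +2 (strongly
# for) scale; the parties, issues, and numbers are all made up.

def agreement(voter, party):
    """Mean closeness across shared issues, scaled to 0..1."""
    issues = voter.keys() & party.keys()
    if not issues:
        return 0.0
    max_gap = 4  # largest possible distance on a -2..2 scale
    gaps = [abs(voter[i] - party[i]) for i in issues]
    return 1 - sum(gaps) / (max_gap * len(gaps))

voter = {"healthcare": 2, "taxes": -1, "education": 1}
parties = {
    "Party A": {"healthcare": 2, "taxes": -2, "education": 2},
    "Party B": {"healthcare": -1, "taxes": 1, "education": 0},
}

ranked = sorted(parties, key=lambda p: agreement(voter, parties[p]),
                reverse=True)
print(ranked[0])  # the party whose positions sit closest to the voter's
```

If tools like this were widely used, the pressure on politicians would come from the scoring step itself, issue by issue, rather than from any lobbying campaign.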

Another similar tool is a reputation or user review system. Suppose that you're dissatisfied with the general standards of some institution: say university education, medical care, or what not. You may address this by lobbying politicians to implement new regulations intended to ensure quality (1), or by starting your own, superior, universities or hospitals (2), hoping that others will follow. Another method is, however, to create a reliable reputation/review system which, if it became widely used, would guide students and patients to the best universities and hospitals, thereby incentivizing them to improve.

Now of course, when you're trying to get people to use such review systems, you are, in effect, building an evaluation system that competes with existing systems (e.g. the Guardian university ranking), so on one level you are using the second strategy. Your ultimate goal is, however, to create better universities, to which a better evaluation system is just a means (a tool). Hence you're following the third strategy here, in my terms.

Strategy 1) is of course a "statist" one, since what you're doing here is trying to get the government to change the institution in question for you. Strategies 2) and 3) are, in contrast, both "non-statist", since when you use them you're not directly trying to implement the change through the political system. Hence libertarians and other anti-statists should prefer them.

My hunch is that when people are trying to change things, many of them unthinkingly go for 1), even regarding issues where it is unlikely that they are going to succeed that way. (For instance, it seems to me that advocates for direct democracy who try to persuade voters to vote for direct democratic parties are unlikely to succeed, but that widespread use of VAA's might get us considerably closer to their ideal, and that they therefore should opt for the third strategy.) A plausible explanation of this is availability bias: our tendency to focus on what we most often see around us. Attempts to change social institutions through politics get a lot of attention, which makes people think of this strategy first. Even though this strategy is often efficient, I'd guess it is, for this reason, generally overused and that people sometimes instead should go for 2) or 3). (Possibly, Europeans have an even stronger bias in favour of this strategy than Americans.)

I also suspect, though, that people go for 2) a bit too often relative to 3). I think that people find it appealing, for its own sake, to create an entirely alternative structure. If you're a perfectionist, it might be satisfying to build what you consider "the perfect institution", even if it is very small and has little impact on society. Also, sometimes small groups of devotees flock to these alternatives, and a strong group identity is therefore created. Moreover, I think that availability bias may also play a role here. Even though this sort of strategy gets less attention than lobbying, most people know what it is. It is quite clear what it means to do something like this, and being part of a project like this therefore gives you a clear identity. For these reasons, I think that we might sometimes fool ourselves into believing that these alternative structures are more likely to be successful than they actually are.

Conversely, people might be biased against the third strategy because it's less obvious. Also, it has perhaps something vaguely manipulative about it which might bias idealistic people against it. What you're typically trying to do is to get people to use a tool (say VAA's), a side-effect of which is the change you wish to attain (in this case, correspondence between voters' views and actual policies). I don't think that this kind of manipulation is necessarily vicious (but it would need to be discussed on a case-by-case basis) but the point is that people tend to think that it is. Also, even those who don't think that it is manipulative in an unethical sense would still think that it is somehow "unheroic". Starting your own environmental party or creating your own artificial libertarian island clearly has something heroic about it, but developing efficient VAA's, which as a side-effect change the political landscape, does not.

I'd thus argue that people should start looking more closely at the third strategy. One group that does use a strategy similar to this is, of course, for-profit companies. They try to analyze what products would appeal to people, and in so doing, carefully consider how existing institutions shape people's preferences. For instance, companies like Uber, AirBnB and LinkedIn have been successful because they realized that, given the structure of the taxi, the hotel and the recruitment businesses, their products would be appealing.

Of course, these companies' primary goal, profit, is very different from the political goals I'm talking about here. At the same time, I think it is useful to compare the two cases. I think that generally, when we're trying to attain political change, we're not "actually trying" (in CFAR's terminology) as hard as we do when we're trying to maximize profit. It is very easy to fall into a mode where you're focusing on making symbolic gestures (which express your identity) rather than on trying to change things in politics. (This is, in effect, what many traditional charities are doing, if the EA movement is right.)

Instead, we should think as hard as profit-maximizing companies do about what new tools are likely to catch on. Any kind of tool could in principle be used, but the ones that seem most obvious are various kinds of social media and other internet-based tools (such as those mentioned in this post). Technical progress gives us enormous opportunities to construct new tools that could re-shape people's behaviour in a way that would impact existing social and political institutions on a large scale.

Developing such tools is not easy. Even very successful companies again and again fail to predict what new products will appeal to people. Not least, you need a profound understanding of human psychology in order to succeed. That said, political organizations have certain advantages vis-à-vis for-profit companies. They can often develop ideas publicly, whereas for-profit companies often have to keep them secret until their product is launched. This facilitates wisdom-of-the-crowd reasoning, where many different kinds of people come up with solutions together. Such methods can, in my opinion, be very powerful.


Any input regarding, e.g. the taxonomy of methods, my speculations about biases, and, in particular, examples of institution changing tools are welcome. I'm also interested in comments on efficient methods for coming up with useful tools (e.g. tests of them). Finally, if anything's unclear I'd be happy to provide clarifications (it's a very complex topic).

Every Paul needs a Jesus

9 PhilGoetz 10 August 2014 07:13PM

My take on some historical religious/social/political movements:

  • Jesus taught a radical and highly impractical doctrine of love and disregard for one's own welfare. Paul took control of much of the church that Jesus' charisma had built, and reworked this into something that could function in a real community, re-emphasizing the social mores and connections that Jesus had spent so much effort denigrating, and converting Jesus' emphasis on radical social action into an emphasis on theology and salvation.
  • Marx taught a radical and highly impractical theory of how workers could take over the means of production and create a state-free Utopia. Lenin and Stalin took control of the organizations built around those theories, and reworked them into a strong, centrally-controlled state.
  • Che Guevara (I'm ignorant here and relying on Wikipedia; forgive me) joined Castro's rebel group early on, rose to the position of second in command, was largely responsible for the military success of the revolution, and had great motivating influence due to his charisma and his unyielding, idealistic, impractical ideas. It turned out his idealism prevented him from effectively running government institutions, so he had to go looking for other revolutions to fight in while Castro ran Cuba.

The best strategy for complex social movements is not honest rationality, because rational, practical approaches don't generate enthusiasm. A radical social movement needs one charismatic radical who enunciates appealing, impractical ideas, and another figure who can appropriate all of the energy and devotion generated by the first figure's idealism, yet not be held to their impractical ideals. It's a two-step process that is almost necessary, to protect the pretty ideals that generate popular enthusiasm from the grit and grease of institution and government. Someone needs to do a bait-and-switch. Either the original vision must be appropriated and bent to a different purpose by someone practical, or the original visionary must be dishonest or self-deceiving.

continue reading »

Politics is hard mode

27 RobbBB 21 July 2014 10:14PM

Summary: I don't think 'politics is the mind-killer' works well rhetorically. I suggest 'politics is hard mode' instead.


Some people in and catawampus to the LessWrong community have objected to "politics is the mind-killer" as a framing (/ slogan / taunt). Miri Mogilevsky explained on Facebook:

My usual first objection is that it seems odd to single politics out as a “mind-killer” when there’s plenty of evidence that tribalism happens everywhere. Recently, there has been a whole kerfuffle within the field of psychology about replication of studies. Of course, some key studies have failed to replicate, leading to accusations of “bullying” and “witch-hunts” and what have you. Some of the people involved have since walked their language back, but it was still a rather concerning demonstration of mind-killing in action. People took “sides,” people became upset at people based on their “sides” rather than their actual opinions or behavior, and so on.

Unless this article refers specifically to electoral politics and Democrats and Republicans and things (not clear from the wording), “politics” is such a frightfully broad category of human experience that writing it off entirely as a mind-killer that cannot be discussed or else all rationality flies out the window effectively prohibits a large number of important issues from being discussed, by the very people who can, in theory, be counted upon to discuss them better than most. Is it “politics” for me to talk about my experience as a woman in gatherings that are predominantly composed of men? Many would say it is. But I’m sure that these groups of men stand to gain from hearing about my experiences, since some of them are concerned that so few women attend their events.

In this article, Eliezer notes, “Politics is an important domain to which we should individually apply our rationality — but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.” But that means that we all have to individually, privately apply rationality to politics without consulting anyone who can help us do this well. After all, there is no such thing as a discussant who is “rational”; there is a reason the website is called “Less Wrong” rather than “Not At All Wrong” or “Always 100% Right.” Assuming that we are all trying to be more rational, there is nobody better to discuss politics with than each other.

The rest of my objection to this meme has little to do with this article, which I think raises lots of great points, and more to do with the response that I’ve seen to it — an eye-rolling, condescending dismissal of politics itself and of anyone who cares about it. Of course, I’m totally fine if a given person isn’t interested in politics and doesn’t want to discuss it, but then they should say, “I’m not interested in this and would rather not discuss it,” or “I don’t think I can be rational in this discussion so I’d rather avoid it,” rather than sneeringly reminding me “You know, politics is the mind-killer,” as though I am an errant child. I’m well-aware of the dangers of politics to good thinking. I am also aware of the benefits of good thinking to politics. So I’ve decided to accept the risk and to try to apply good thinking there. [...]

I’m sure there are also people who disagree with the article itself, but I don’t think I know those people personally. And to add a political dimension (heh), it’s relevant that most non-LW people (like me) initially encounter “politics is the mind-killer” being thrown out in comment threads, not through reading the original article. My opinion of the concept improved a lot once I read the article.

In the same thread, Andrew Mahone added, “Using it in that sneering way, Miri, seems just like a faux-rationalist version of ‘Oh, I don’t bother with politics.’ It’s just another way of looking down on any concerns larger than oneself as somehow dirty, only now, you know, rationalist dirty.” To which Miri replied: “Yeah, and what’s weird is that that really doesn’t seem to be Eliezer’s intent, judging by the eponymous article.”

Eliezer replied briefly, to clarify that he wasn't generally thinking of problems that can be directly addressed in local groups (but happen to be politically charged) as "politics":

Hanson’s “Tug the Rope Sideways” principle, combined with the fact that large communities are hard to personally influence, explains a lot in practice about what I find suspicious about someone who claims that conventional national politics are the top priority to discuss. Obviously local community matters are exempt from that critique! I think if I’d substituted ‘national politics as seen on TV’ in a lot of the cases where I said ‘politics’ it would have more precisely conveyed what I was trying to say.

But that doesn't resolve the issue. Even if local politics is more instrumentally tractable, the worry about polarization and factionalization can still apply, and may still make it a poor epistemic training ground.

A subtler problem with banning “political” discussions on a blog or at a meet-up is that it’s hard to do fairly, because our snap judgments about what counts as “political” may themselves be affected by partisan divides. In many cases the status quo is thought of as apolitical, even though objections to the status quo are ‘political.’ (Shades of Pretending to be Wise.)

Because politics gets personal fast, it’s hard to talk about it successfully. But if you’re trying to build a community, build friendships, or build a movement, you can’t outlaw everything ‘personal.’

And selectively outlawing personal stuff gets even messier. Last year, daenerys shared anonymized stories from women, including several that discussed past experiences where the writer had been attacked or made to feel unsafe. If those discussions are made off-limits because they relate to gender and are therefore ‘political,’ some folks may take away the message that they aren’t allowed to talk about, e.g., some harmful or alienating norm they see at meet-ups. I haven’t seen enough discussions of this failure mode to feel super confident people know how to avoid it.

Since this is one of the LessWrong memes that’s most likely to pop up in cross-subcultural dialogues (along with the even more ripe-for-misinterpretation “policy debates should not appear one-sided“…), as a first (very small) step, my action proposal is to obsolete the ‘mind-killer’ framing. A better phrase for getting the same work done would be ‘politics is hard mode’:

1. ‘Politics is hard mode’ emphasizes that ‘mind-killing’ (= epistemic difficulty) is quantitative, not qualitative. Some things might instead fall under Middlingly Hard Mode, or under Nightmare Mode…

2. ‘Hard’ invites the question ‘hard for whom?’, more so than ‘mind-killer’ does. We’re used to the fact that some people and some contexts change what’s ‘hard’, so it’s a little less likely we’ll universally generalize.

3. ‘Mindkill’ connotes contamination, sickness, failure, weakness. In contrast, ‘Hard Mode’ doesn’t imply that a thing is low-status or unworthy. As a result, it’s less likely to create the impression (or reality) that LessWrongers or Effective Altruists dismiss out-of-hand the idea of hypothetical-political-intervention-that-isn’t-a-terrible-idea. Maybe some people do want to argue for the thesis that politics is always useless or icky, but if so it should be done in those terms, explicitly — not snuck in as a connotation.

4. ‘Hard Mode’ can’t readily be perceived as a personal attack. If you accuse someone of being ‘mindkilled’, with no context provided, that smacks of insult — you appear to be calling them stupid, irrational, deluded, or the like. If you tell someone they’re playing on ‘Hard Mode,’ that’s very nearly a compliment, which makes your advice that they change behaviors a lot likelier to go over well.

5. ‘Hard Mode’ doesn’t risk bringing to mind (e.g., gendered) stereotypes about communities of political activists being dumb, irrational, or overemotional.

6. ‘Hard Mode’ encourages a growth mindset. Maybe some topics are too hard to ever be discussed. Even so, ranking topics by difficulty encourages an approach where you try to do better, rather than merely withdrawing. It may be wise to eschew politics, but we should not fear it. (Fear is the mind-killer.)

7. Edit: One of the larger engines of conflict is that people are so much worse at noticing their own faults and biases than noticing others'. People will be relatively quick to dismiss others as 'mindkilled,' while frequently flinching away from or just-not-thinking 'maybe I'm a bit mindkilled about this.' Framing the problem as a challenge rather than as a failing might make it easier to be reflective and even-handed.

This is not an attempt to get more people to talk about politics. I think this is a better framing whether or not you trust others (or yourself) to have productive political conversations.

When I playtested this post, Ciphergoth raised the worry that 'hard mode' isn't scary-sounding enough. As dire warnings go, it's light-hearted—exciting, even. To which I say: good. Counter-intuitive fears should usually be argued into people (e.g., via Eliezer's politics sequence), not connotation-ninja'd or chanted at them. The cognitive content is more clearly conveyed by 'hard mode,' and if some group (people who love politics) stands to gain the most from internalizing this message, the message shouldn't cast that very group (people who love politics) in an obviously unflattering light. LW seems fairly memetically stable, so the main issue is what would make this meme infect friends and acquaintances who haven't read the sequences. (Or Dune.)

If you just want a scary personal mantra to remind yourself of the risks, I propose 'politics is SPIDERS'. Though 'politics is the mind-killer' is fine there too.

If you and your co-conversationalists haven’t yet built up a lot of trust and rapport, or if tempers are already flaring, conveying the message ‘I’m too rational to discuss politics’ or ‘You’re too irrational to discuss politics’ can make things worse. In that context, ‘politics is the mind-killer’ is the mind-killer. At least, it’s a needlessly mind-killing way of warning people about epistemic hazards.

‘Hard Mode’ lets you speak as the Humble Aspirant rather than the Aloof Superior. Strive to convey: ‘I’m worried I’m too low-level to participate in this discussion; could you have it somewhere else?’ Or: ‘Could we talk about something closer to Easy Mode, so we can level up together?’ More generally: If you’re worried that what you talk about will impact group epistemology, you should be even more worried about how you talk about it.

A Parable of Elites and Takeoffs

23 gwern 30 June 2014 11:04PM

Let me tell you a parable of the future. Let’s say, 70 years from now, in a large Western country we’ll call Nacirema.

One day far from now: scientific development has continued apace, and a large government project (with, unsurprisingly, a lot of military funding) has taken the scattered pieces of cutting-edge research and put them together into a single awesome technology, which could revolutionize (or at least, vastly improve) all sectors of the economy. Leading thinkers had long forecast that this area of science’s mysteries would eventually yield to progress, despite theoretical confusion and perhaps-disappointing initial results and the scorn of more conservative types and the incomprehension (or outright disgust, for ‘playing god’) of the general population, and at last - it had! The future was bright.

Unfortunately, it was hurriedly decided to use an early prototype outside the lab in an impoverished foreign country. Whether out of arrogance, bureaucratic inertia, overconfidence on the part of the involved researchers, condescending racism, the need to justify the billions of grant-dollars that cumulatively went into the project over the years by showing some use of it - whatever, the reasons no longer mattered after the final order was signed. The technology was used, but the consequences turned out to be horrific: over a brief period of what seemed like mere days, entire cities collapsed and scores - hundreds - of thousands of people died. (Modern economies are extremely interdependent and fragile, and small disruptions can have large consequences; more people died in the chaos of the evacuation of the areas around Fukushima than will die of the radiation.)

continue reading »

Democracy and individual liberty; decentralised prediction markets

-1 Chrysophylax 15 March 2014 12:27PM

A pair of links I found recently (via Marginal Revolution) and haven't found on LW:





The former discusses liberty in the context of clannish behaviour, arguing that it is the existence of the institutions of modern democracies that allows people individual liberty, as it precludes the need for clan structures (extended family groups, crime syndicates, patronage networks and such).

The latter is an author's summary of a white paper on the subject of decentralised Bitcoin prediction markets, with a link to the paper.

[LINK] Joseph Bottum on Politics as the Mindkiller

2 Salemicus 27 February 2014 07:40PM

One of my favourite Less Wrong articles is Politics is the mindkiller. Part of the reason that political discussion is so bad is the poor incentives - if you have little chance to change the outcome, then there is little reason to strive for truth or accuracy - but a large part of the reason is our pre-political attitudes and dispositions. I don't mean to suggest that there is a neat divide; clearly, there is a reflexive relation between the incentives within political discussion and our view of the appropriate purpose and scope of politics. Nevertheless, I think it's a useful distinction to make, and so I applaud the fact that Eliezer doesn't start his essays on the subject by talking about incentives, feedback or rational irrationality - instead he starts with the fact that our approach to politics is instinctively tribal.

This brings me to Joseph Bottum's excellent recent article in The American, The Post-Protestant Ethic and Spirit of America. This charts what he sees as the tribal changes within America that have shaped current attitudes to politics. I think it's best seen in conjunction with Arnold Kling's excellent The Three Languages of Politics; while Kling talks about the political language and rhetoric of modern American political groupings, Bottum's essay is more about the social changes that have led to these kinds of language and rhetoric.

We live in what can only be called a spiritual age, swayed by its metaphysical fears and hungers, when we imagine that our ordinary political opponents are not merely mistaken, but actually evil. When we assume that past ages, and the people who lived in them, are defined by the systematic crimes of history. When we suppose that some vast ethical miasma, racism, radicalism, cultural self-hatred, selfish blindness, determines the beliefs of classes other than our own. When we can make no rhetorical distinction between absolute wickedness and the people with whom we disagree. The Republican Congress is the Taliban. President Obama is a Communist. Wisconsin’s governor is a Nazi.


The real question, of course, is how and why this happened. How and why politics became a mode of spiritual redemption for nearly everyone in America, but especially for the college-educated upper-middle class, who are probably best understood not as the elite, but as the elect, people who know themselves as good, as relieved of their spiritual anxieties by their attitudes toward social problems.

Video of a related lecture can also be found here.

Link: Poking the Bear (Podcast)

0 James_Miller 27 February 2014 03:43PM

A Dan Carlin Podcast about how the United States is foolishly antagonizing the Russians over Ukraine.  Carlin makes an analogy as to how the United States would feel if Russia helped overthrow the government of Mexico to install an anti-American government under conditions that might result in a Mexican civil war.  Because of the Russian nuclear arsenal, even a tiny chance of a war between the United States and Russia has a huge negative expected value.

How big of an impact would cleaner political debates have on society?

4 adamzerner 06 February 2014 12:24AM

See this Newsroom clip.

Basically, their news network is trying to change the way political debates work by having the moderator force the candidates to answer the questions that are asked of them, not interrupt each other, justify arguments that are based on obvious falsehoods etc.

How big of a positive impact do you guys think that this would have on society?

My initial thoughts are that it would be huge. It would lead to better politicians, which would be a high level of action. The positive effects would trickle down into many aspects of our society.

The question then becomes, "can we make this happen?". I don't see a way right now, but the idea has enough upside to me that I keep it in the back of my mind in case I come up with a plausible way of implementing the change.


Democracy and rationality

8 homunq 30 October 2013 12:07PM

Note: This is a draft; so far, about the first half is complete. I'm posting it to Discussion for now; when it's finished, I'll move it to Main. In the mean time, I'd appreciate comments, including suggestions on style and/or format. In particular, if you think I should(n't) try to post this as a sequence of separate sections, let me know.

Summary: You want to find the truth? You want to win? You're gonna have to learn the right way to vote. Plurality voting sucks; better voting systems are built from the blocks of approval, medians (Bucklin cutoffs), delegation, and pairwise opposition. I'm working to promote these systems and I want your help.

Contents: 1. Overblown¹ rhetorical setup ... 2. Condorcet's ideals and Arrow's problem ... 3. Further issues for politics ... 4. Rating versus ranking; a solution? ... 5. Delegation and SODA ... 6. Criteria and pathologies ... 7. Representation, Proportional representation, and Sortition ... 8. What I'm doing about it and what you can ... 9. Conclusions and future directions ... 10. Appendix: voting systems table ... 11. Footnotes


This is a website focused on becoming more rational. But that can't just mean getting a black belt in individual epistemic rationality. In a situation where you're not the one making the decision, that black belt is just a recipe for frustration.

Of course, there's also plenty of content here about how to interact rationally; how to argue for truth, including both hacking yourself to give in when you're wrong and hacking others to give in when they are. You can learn plenty here about Aumann's Agreement Theorem on how two rational Bayesians should never knowingly disagree.

But "two rational Bayesians" isn't a whole lot better as a model for society than "one rational Bayesian". Aspiring to be rational is well and good, but the Socratic ideal of a world tied together by two-person dialogue alone is as unrealistic as the sociopath's ideal of a world where their own voice rules alone. Society needs structures for more than two people to interact. And just as we need techniques for checking irrationality in one- and two-person contexts, we need them, perhaps all the more, in multi-person contexts.

Most of the basic individual and dialogical rationality techniques carry over. Things like noticing when you are confused, or making your opponent's arguments into a steel man, are still perfectly applicable. But there's also a new set of issues when n>2: the issues of democracy and voting. For a group of aspiring rationalists to come to a working consensus, of course they need to begin by evaluating and discussing the evidence, but eventually it will be time to cut off the discussion and just vote. When they do so, they should understand the strengths and pitfalls of voting in general and of their chosen voting method in particular.

And voting's not just useful for an aspiring rationalist community. As it happens, it's an important part of how governments are run. Discussing politics may be a mind-killer in many contexts, but there are an awful lot of domains where politics is a part of the road to winning.² Understanding voting processes a little bit can help you navigate that road; understanding them deeply opens the possibility of improving that road and thus winning more often.

2. Collective rationality: Condorcet's ideals and Arrow's problem

Imagine it's 1785, and you're a member of the French Academy of Sciences. You're rubbing elbows with most of the giants of science and mathematics of your day: Coulomb, Fourier, Lalande, Lagrange, Laplace, Lavoisier, Monge; even the odd foreign notable like Franklin with his ideas to unify electrostatics and electric flow.

They'll remember your names

One day, they'll put your names in front of lots of cameras (even though that foreign yokel Franklin will be in more pictures)

And this academy, with many of the smartest people in the world, has votes on stuff. Who will be our next president; who should edit and schedule our publications; etc. You're sure that if you all could just find the right way to do the voting, you'd get the right answer. In fact, you can easily prove that, or something like it: if a group is deciding between one right and one wrong option, and each member is independently more than 50% likely to get it right, then as the group size grows the chance of a majority vote choosing the right option goes to 1.
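
The jury theorem's math is easy to check directly. Here is a quick sketch in Python (the accuracy figure of 0.6 and the jury sizes are illustrative choices of mine, not Condorcet's):

```python
from math import comb

def p_majority_correct(n, p):
    """Probability that a majority of n independent voters, each right
    with probability p, answers a two-way question correctly (n odd)."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# With p = 0.6, the majority's accuracy climbs toward 1 as n grows:
for n in (1, 11, 101):
    print(n, round(p_majority_correct(n, 0.6), 3))
```

The sum is just the upper tail of a binomial distribution; for p > 0.5 it rises monotonically (over odd n) toward certainty.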

But somehow, there's still annoying politics getting in the way. Some people seem to win the elections simply because everyone expects them to win. So last year, the academy decided on a new election system to use, proposed by your rival, Charles de Borda, in which candidates get different points for being a voter's first, second, or third choice, and the one with the most points wins. But you're convinced that this new system will lead to the opposite problem: people who win the election precisely because nobody expected them to win, by getting the points that voters strategically don't want to give to a strong rival. But when people point that possibility out to Borda, he only huffs that "my system is meant for honest men!"
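
Borda's vulnerability is easy to reproduce. In the hypothetical three-candidate tally below (the ballots and numbers are my own construction), two rival factions each "bury" the other's favorite in last place, and a candidate almost nobody ranked first wins:

```python
def borda_winner(ballots, points=(2, 1, 0)):
    """ballots: ranked lists of candidates, most-preferred first."""
    totals = {}
    for ballot in ballots:
        for rank, cand in enumerate(ballot):
            totals[cand] = totals.get(cand, 0) + points[rank]
    return max(totals, key=totals.get)

# Honest ballots: A is the broadly-liked compromise and wins, 16-15-2.
honest = [["A", "B", "C"]] * 5 + [["B", "A", "C"]] * 5 + [["C", "A", "B"]]
print(borda_winner(honest))   # A

# Each big faction strategically demotes its main rival to last place;
# the dark horse C now wins, 12-11-10.
buried = [["A", "C", "B"]] * 5 + [["B", "C", "A"]] * 5 + [["C", "A", "B"]]
print(borda_winner(buried))   # C
```

This is exactly the failure mode Condorcet feared: the points voters withhold from a strong rival land on a candidate nobody expected to win.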

So with your proof of the above intuitive, useful result about two-way elections, you try to figure out how to reduce an n-way election to the two-candidate case. Clearly, you can show that Borda's system will frequently give the wrong results from that perspective. But frustratingly, you find that there could sometimes be no right answer; that there will be no candidate who would beat all the others in one-on-one races. A crack has opened up; could it be that the collective decisions of intelligent individual rational agents could be irrational?

Of course, the "you" in this story is the Marquis de Condorcet, and the year 1785 is when he published his Essai sur l’application de l’analyse à la probabilité des décisions rendues à la pluralité des voix, a work devoted to the question of how to achieve collective rationality. The theorem referenced above is Condorcet's Jury Theorem, which seems to offer hope that democracy can point the way from individually-imperfect rationality towards an ever-more-perfect collective rationality. Just as Aumann's Agreement Theorem shows that two rational agents should always move towards consensus, the Condorcet Jury Theorem apparently shows that if you have enough rational agents, the resulting consensus will be correct.

But as I said, Condorcet also opened a crack in that hope: the possibility that collective preferences will be cyclical. If the assumptions of the jury theorem don't hold — if each voter doesn't have a >50% chance of being right on a randomly-selected question, OR if the correctness of two randomly-selected voters is correlated rather than independent — then individually-sensible choices can lead to collectively-ridiculous ones.

What do I mean by "collectively-ridiculous"? Let's imagine that the Rationalist Marching Band is choosing the colors for their summer, winter, and spring uniforms, and that they all agree that the only goal is to have as much as possible of the best possible colors. The summer-style uniforms come in red or blue, and they vote and pick blue; the winter-style ones come in blue or green, and they pick green; and the spring ones come in green or red, and they pick red.

Obviously, this makes us doubt their collective rationality. If, as they all agree they should, they had a consistent favorite color, they should have chosen that color both times that it was available, rather than choosing three different colors in the three cases. Theoretically, the salesperson could use such a fact to pump money out of them; for instance, offering to let them "trade up" their spring uniform from red to blue, then to green, then back to red, charging them a small fee each time; if they voted consistently as above, they would agree to each trade (though of course in reality human voters would probably catch on to the trick pretty soon, so the abstract ideal of an unending circular money pump wouldn't work).
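
The marching band's predicament is the classic Condorcet paradox, and takes only a few lines to exhibit. The three ballots below are my reconstruction of the story's three votes:

```python
# Three ranked ballots, most-preferred color first.
ballots = [
    ["blue", "red", "green"],
    ["red", "green", "blue"],
    ["green", "blue", "red"],
]

def majority_prefers(a, b):
    """True if a strict majority of ballots rank color a above color b."""
    wins = sum(1 for ballot in ballots if ballot.index(a) < ballot.index(b))
    return wins > len(ballots) / 2

# Each pairwise vote has a clear 2-1 majority, yet together they cycle:
assert majority_prefers("blue", "red")    # summer: blue beats red
assert majority_prefers("red", "green")   # spring: red beats green
assert majority_prefers("green", "blue")  # winter: green beats blue
```

Every individual ballot is perfectly transitive; the cycle exists only in the aggregate, which is what makes the money-pump possible in principle.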

This is the kind of irrationality that Condorcet showed was possible in collective decisionmaking. He also realized that there was a related issue with logical inconsistencies. If you were to take a vote on 3 logically related propositions — say, "Should we have a Minister of Silly Walks, to be appointed by the Chancellor of the Excalibur", "Should we have a Minister of Silly Walks, but not appointed by the Chancellor of the Excalibur", and "Should we in fact have a Minister of Silly Walks at all", where the third cannot be true unless one of the first two is — then you could easily get majority votes for inconsistent results — in this case, no, no, and yes, respectively. Obviously, there are many ways to fix the problem in this simple case — probably many less-wrong'ers would suggest some Bayesian tricks related to logical networks and treating votes as evidence⁸ — but it's a tough problem in general even today, especially when the logical relationships can be complex, and Condorcet was quite right to be worried about its implications for collective rationality.³

And that's not the only tough problem he correctly foresaw. Nearly 200 years later and an ocean away, in the 1960s, Kenneth Arrow showed that it was impossible for a preferential voting system to avoid the problem of a "Condorcet cycle" of preferences. Arrow's theorem shows that any voting system which can consistently give the same winner (or, in ties, winners) for the same voter preferences; which does not make one voter the effective dictator; which is sure to elect a candidate if all voters prefer them; and which will switch the results for two candidates if you switch their names on all the votes... must exhibit, in at least some situation, the pathology that befell the Rationalist Marching Band above, or in other words, must fail "independence of irrelevant alternatives".

Arrow's theorem is far from obvious a priori, but the proof is not hard to understand intuitively using Condorcet's insight. Say that there are three candidates, X, Y, and Z, with roughly equal bases of support; and that they form a Condorcet cycle, because in two-way races, X would beat Y with help from Z supporters, Y would beat Z with help from X supporters, and Z would beat X with help from Y supporters. So whichever candidate wins in the three-way race — say, X — just remove the one who would have lost to them — Y in this case — and that "irrelevant" change will make the third candidate — Z in this case — the winner.
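
Under plurality this failure is concrete. In the made-up 100-voter electorate below (my numbers, chosen to form the cycle described above), X wins the three-way race, but deleting Y — who would have lost to X head-to-head — hands the election to Z:

```python
from collections import Counter

def plurality_winner(ballots):
    """Winner by first-place votes only (ballots: ranked lists)."""
    return Counter(ballot[0] for ballot in ballots).most_common(1)[0][0]

# A Condorcet cycle: X beats Y 67-33, Y beats Z 68-32, Z beats X 65-35
# in pairwise races.
ballots = ([["X", "Y", "Z"]] * 35 +
           [["Y", "Z", "X"]] * 33 +
           [["Z", "X", "Y"]] * 32)
print(plurality_winner(ballots))  # X, with 35 first-place votes

# Remove the "irrelevant" loser Y; Y's supporters fall back to Z:
without_y = [[c for c in ballot if c != "Y"] for ballot in ballots]
print(plurality_winner(without_y))  # Z, now 65-35
```

The same voters, with one losing candidate struck from the ballot, produce a different winner — a direct violation of independence of irrelevant alternatives.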

Summary of above: Collective rationality is harder than individual or two-way rationality. Condorcet saw the problem and tried to solve it, but Arrow saw that Condorcet had been doomed to fail.

3. Further issues for politics

So Condorcet's ideals of better rationality through voting appear to be in ruins. But at least we can hope that voting is a good way to do politics, right?

Not so fast. Arrow's theorem quickly led to further disturbing results. Alan Gibbard (and also Mark Satterthwaite) extended it to show that there is no voting system which doesn't encourage voting strategy. That is, if you view a voting system as a class of games where the finite players and finite available strategies are fixed, no player is effectively a dictator, and the only things that vary are the payoffs for each player from each outcome, there is no voting system where you can derive your best strategic vote purely by looking "honestly" at your own preferences; there is always the possibility of situations where you have to second-guess what others will do.

Amartya Sen piled on with another depressing extension of Arrow's logic. He showed that there is no possible way of aggregating individual choices into collective choice that satisfies two simple criteria. First, it shouldn't choose pareto-dominated outcomes; if everyone prefers situation XYZ to ABC, the group shouldn't end up with ABC. Second, it is "minimally liberal"; that is, there are at least two people who each get to freely make their own decision on at least one specific issue each, no matter what, so for instance I always get to decide between X and A (in Gibbard's⁴ example, colors for my house), and you always get to decide between Y and B (colors for your own house). The problem is that if you nosily care more about my house's color, the decision that should have been mine, and I nosily care about yours, more than we each care about our own, then the pareto-dominant situation is the one where we don't decide our own houses; and that nosiness could, in theory, be the case for any specific choice that, a priori, someone might have labelled as our Inalienable Right. It's not such a surprising result when you think about it that way, but it does clearly show that unswerving ideals of Democracy and Liberty will never truly be compatible.

Meanwhile, "public choice" theorists⁵ like Duncan Black, James Buchanan, etc. were busy undermining the idea of democratic government from another direction: the motivations of the politicians and bureaucrats who are supposed to keep it running. They showed that various incentives, including the strange voting scenarios explored by Condorcet and Arrow, would tend to open a gap between the motives of the people and those of the government, and that strategic voting and agenda-setting within a legislature would tend to extend the impact of that gap. Where Gibbard and Sen had proved general results, these theorists worked from specific examples. And in one aspect, at least, their analysis is devastatingly unanswerable: the near-ubiquitous "democratic" system of plurality voting, also known as first-past-the-post or vote-for-one or biggest-minority-wins, is terrible in both theory and practice.

So, by the 1980s, things looked pretty depressing for the theory of democracy. Politics, the theory went, was doomed forever to be worse than a sausage factory; disgusting on the inside and distasteful even from outside.

Should an ethical rationalist just give up on politics, then? Of course not. As long as the results it produces are important, it's worth trying to optimize. And as soon as you take the engineer's attitude of optimizing, instead of dogmatically searching for perfection or uselessly whining about the problems, the results above don't seem nearly as bad.

From this engineer's perspective, public choice theory serves as an unsurprising warning that tradeoffs are necessary, but more usefully, as a map of where those tradeoffs can go particularly wrong. In particular, its clearest lesson, in all-caps bold with a blink tag, that PLURALITY IS BAD, can be seen as a hopeful suggestion that other voting systems may be better. Meanwhile, the logic of both Sen's and Gibbard's theorems is built on Arrow's earlier result. So if we could find a way around Arrow, it might help resolve the whole issue.

Summary of above: Democracy is the worst political system... (...except for all the others?) But perhaps it doesn't have to be quite so bad as it is today.

4. Rating versus ranking

So finding a way around Arrow's theorem could be key to this whole matter. As a mathematical theorem, of course, the logic is bulletproof. But it does make one crucial assumption: that the only inputs to a voting system are rankings, that is, voters' ordinal preference orders for the candidates. No distinctions can be made using ratings or grades; that is, as long as you prefer X to Y to Z, the strength of those preferences can't matter. Whether you put Y almost up near X or way down next to Z, the result must be the same.

Relax that assumption, and it's easy to create a voting system which meets Arrow's criteria. It's called Score voting⁶, and it just means rating each candidate with a number from some fixed interval (abstractly speaking, a real number; but in practice, usually an integer); the scores are added up and the highest total or average wins. (Unless there are missing values, of course, total or average amount to the same thing.) You've probably used it yourself on Yelp, IMDB, or similar sites. And it clearly passes all of Arrow's criteria. Non-dictatorship? Check. Unanimity? Check. Symmetry over switching candidate names? Check. Independence of irrelevant alternatives? In the mathematical sense — that is, as long as the scores for other candidates are unchanged — check.
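
A score election is just arithmetic. Here is a minimal tally of my own sketching, comparing averages so that a candidate some voters skipped isn't penalized for the missing ratings:

```python
def score_winner(ballots):
    """ballots: dicts mapping candidate -> score; candidates a voter
    skipped are simply absent from that voter's dict."""
    totals, counts = {}, {}
    for ballot in ballots:
        for cand, score in ballot.items():
            totals[cand] = totals.get(cand, 0) + score
            counts[cand] = counts.get(cand, 0) + 1
    # Highest average rating wins (equivalent to highest total when
    # every voter rates every candidate).
    return max(totals, key=lambda c: totals[c] / counts[c])

print(score_winner([{"X": 5, "Y": 0},
                    {"X": 3, "Y": 4},
                    {"X": 4, "Y": 2, "Z": 1}]))  # X: average 4.0
```

Note how the Arrow criteria are satisfied mechanically: each candidate's average depends only on that candidate's own scores, so adding or removing a rival can't change the comparison between the rest.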

So score voting is an ideal system? Well, it's certainly a far sight better than plurality. But let's check it against Sen and against Gibbard.

Sen's theorem was based on a logic similar to Arrow's. However, while Arrow's theorem deals with broad outcomes like which candidate wins, Sen's deals with finely-grained outcomes like (in the example we discussed) how each separate house should be painted. Extending the cardinal numerical logic of score voting to such finely-grained outcomes, we find we've simply reinvented markets. While markets can be great things and often work well in practice, Sen's result still holds in this case; if everything is on the market, then there is no decision which is always yours to make. But since, in practice, as long as you aren't destitute, you tend to be able to make the decisions you care the most about, Sen's theorem seems to have lost its bite in this context.

What about Gibbard's theorem on strategy? Here, things are not so easy. Yes, Gibbard, like Sen, parallels Arrow. But while Arrow deals with what's written on the ballot, Gibbard deals with what's in the voter's head. In particular, if a voter prefers X to Y by even the tiniest margin, Gibbard assumes (not unreasonably) that they may be willing to vote however they need to, if by doing so they can ensure X wins instead of Y. Thus, the internal preferences Gibbard treats are, effectively, just ordinal rankings; and the cardinal trick by which score voting avoided Arrovian problems no longer works.

How does score voting deal with strategic issues in practice? The answer to that has two sides. On the one hand, score never requires voters to be actually dishonest. Unlike the situation in a ranked system such as plurality, where we all know that the strategic vote may be to dishonestly ignore your true favorite and vote for a "lesser evil" among the two frontrunners, in score voting you never need to vote a less-preferred option above a more-preferred option. At worst, all you have to do is exaggerate some distinctions and minimize others, so that you might end up giving equal votes to less- and more-preferred options.

Did I say "at worst"? I meant, "almost always". Voting strategy only matters to the result when, aside from your vote, two or more candidates are within one vote of being tied for first. Except in unrealistic, perfectly-balanced conditions, as the number of voters rises, the probability that anyone but the two a priori frontrunner candidates is in on this tie falls to zero.⁷ Thus, in score voting, the optimal strategy is nearly always to vote your preferred frontrunner and all candidates above at the maximum, and your less-preferred frontrunner and all candidates below at the minimum. In other words, strategic score voting is basically equivalent to approval voting, where you give each candidate a 1 or 0 and the highest total wins.
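
That near-optimal strategy can be written down directly. This is a sketch under the stated assumption of exactly two perceived frontrunners; how to treat a candidate rated strictly between the two frontrunners is a judgment call the text leaves open, and here such a candidate must reach the preferred frontrunner's level to get the maximum:

```python
def strategic_score_ballot(utilities, frontrunners, max_score=5):
    """Collapse an honest score ballot into its approval-style strategic
    form: max score for the preferred frontrunner and everyone liked at
    least as much, minimum score for everyone else."""
    preferred = max(frontrunners, key=lambda c: utilities[c])
    threshold = utilities[preferred]
    return {c: (max_score if u >= threshold else 0)
            for c, u in utilities.items()}

# A voter who loves X and mildly prefers frontrunner Y to frontrunner Z:
print(strategic_score_ballot({"X": 0.9, "Y": 0.6, "Z": 0.1}, ("Y", "Z")))
# {'X': 5, 'Y': 5, 'Z': 0} — effectively an approval ballot
```

The output uses only the extreme scores, which is exactly the sense in which strategic score voting reduces to approval voting.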

In one sense, score voting reducing to approval is OK. Approval voting is not a bad system at all. For instance, if there's a known majority Condorcet winner — a candidate who could beat any other by a majority in a one-on-one race — and voters are strategic — they anticipate the unique strong Nash equilibrium, the situation where no group of voters could improve the outcome for all its members by changing their votes, whenever such a unique equilibrium exists — then the Condorcet winner will win under approval. That's a lot of words to say that approval will get the "democratic" results you'd expect in most cases.

But in another sense, it's a problem. If one side of an issue is more inclined to be strategic than the other side, the more-strategic faction could win even if it's a minority. That clashes with many people's ideals of democracy; and worse, it encourages mind-killing political attitudes, where arguments are used as soldiers rather than as ways to seek the truth.

But score and approval voting are not the only systems which escape Arrow's theorem through the trapdoor of ratings. If score voting, using the average of voter ratings, too-strongly encourages voters to strategically seek extreme ratings, then why not use the median rating instead? We know that medians are less sensitive to outliers than averages. And indeed, median-based systems are more resistant to one-sided strategy than average-based ones, giving better hope for reasonable discussion to prosper. That is to say, in a simple model, a minority would need twice as much strategic coordination under median as under average, in order to overcome a majority; and there's good reason to believe that, because of natural factional separation, reality is even more favorable to median systems than that model.

There are several different median systems available. In the US during the 1910-1925 Progressive Era, early versions collectively called "Bucklin voting" were used briefly in over a dozen cities. These reforms, based on counting all top preferences, then adding lower preferences one level at a time until some candidate(s) reach a majority, were all rolled back soon after, principally by party machines upset at upstart challenges or victories. The possibility of multiple, simultaneous majorities is a principal reason for the variety of Bucklin/Median systems. Modern proposals of median systems include Majority Approval Voting, Majority Judgment, and Graduated Majority Judgment, which would probably give the same winners almost all of the time. An important detail is that most median system ballots use verbal or letter grades rather than numeric scores. This is justifiable because the median is preserved under any monotonic transformation, and studies suggest that it would help discourage strategic voting.
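
The Bucklin counting procedure described above is short enough to sketch. This is my own toy implementation; the historical variants differed in details such as how to resolve simultaneous majorities, and here the largest total wins:

```python
from collections import Counter

def bucklin_winner(ballots):
    """ballots: ranked lists, most-preferred first. Add one preference
    level at a time until some candidate is named on over half the ballots."""
    majority = len(ballots) / 2
    counts = Counter()
    for level in range(max(len(b) for b in ballots)):
        for ballot in ballots:
            if level < len(ballot):
                counts[ballot[level]] += 1
        over = [c for c in counts if counts[c] > majority]
        if over:
            # Several candidates may cross 50% in the same round; take
            # the largest count (one common resolution).
            return max(over, key=lambda c: counts[c])
    return max(counts, key=lambda c: counts[c])

# No first-choice majority (A:2, B:2, C:1); second choices push C over 50%.
print(bucklin_winner([["A", "C"], ["A", "C"],
                      ["B", "C"], ["B", "C"], ["C", "A"]]))
```

The example also shows why multiple simultaneous majorities matter: in round two both A (3) and C (5) exceed half the ballots, and the rule for that case decides the election.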

Serious attention to rated systems like approval, score, and median systems barely began in the 1980s, and didn't really pick up until 2000. Meanwhile, the increased amateur interest in voting systems in this period — perhaps partially attributable to the anomalous 2000 US presidential election, or to more-recent anomalies in the UK, Canada, and Australia — has led to new discoveries in ranked systems as well. Though such systems are still clearly subject to Arrow's theorem, new "improved Condorcet" methods which use certain tricks to count a voter's equal preferences between two candidates on either side of the ledger depending on the strategic needs, seem to offer promise that Arrovian pathologies can be kept to a minimum.

With this embarrassment of riches of systems to choose from, how should we evaluate which is best? Well, at least one thing is a clear consensus: plurality is a horrible system. Beyond that, things are more controversial; there are dozens of possible objective criteria one could formulate, and any system's inventor and/or supporters can usually formulate some criterion by which it shines.

Ideally, we'd like to measure the utility of each voting system in the real world. Since that's impossible — it would take not just a statistically-significant sample of large-scale real-world elections for each system, but also some way to measure the true internal utility of a result in situations where voters are inevitably strategically motivated to lie about that utility — we must do the next best thing, and measure it in a computer, with simulated voters whose utilities are assigned measurable values. Unfortunately, that requires assumptions about how those utilities are distributed, how voter turnout is decided, and how and whether voters strategize. At best, those assumptions can be varied, to see if findings are robust.

In 2000, Warren Smith performed such simulations for a number of voting systems. He found that score voting had, very robustly, one of the top expected social utilities (or, as he termed it, lowest Bayesian regret). Close on its heels were a median system and approval voting. Unfortunately, though he explored a wide parameter space in terms of voter utility models and inherent strategic inclination of the voters, his simulations did not include voters who were more inclined to be strategic when strategy was more effective. His strategic assumptions were also unfavorable to ranked systems, and slightly unrealistic in other ways. Still, though certain of his numbers must be taken with a grain of salt, some of his results were large and robust enough to be trusted. For instance, he found that plurality voting and instant runoff voting were clearly inferior to rated systems; and that approval voting, even at its worst, offered over half the benefits compared to plurality of any other system.
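A toy version of this kind of measurement, with honest voters, i.i.d. uniform utilities, and made-up parameters (nothing here reproduces Smith's actual code, utility models, or strategy models), might look like this:

```python
import random

def bayesian_regret(method, n_voters=25, n_cands=5, trials=2000, seed=0):
    """Average shortfall of the winner's total utility versus the
    utility-maximizing candidate, over random electorates."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Each voter's utility for each candidate, i.i.d. uniform.
        u = [[rng.random() for _ in range(n_cands)] for _ in range(n_voters)]
        social = [sum(u[v][c] for v in range(n_voters)) for c in range(n_cands)]
        total += max(social) - social[method(u)]
    return total / trials

def plurality(u):
    # Each voter names only their honest favorite.
    tally = [0] * len(u[0])
    for voter in u:
        tally[voter.index(max(voter))] += 1
    return tally.index(max(tally))

def score(u):
    # Honest normalized score voting: each voter rescales utilities to [0, 1].
    tally = [0.0] * len(u[0])
    for voter in u:
        lo, hi = min(voter), max(voter)
        for c, x in enumerate(voter):
            tally[c] += (x - lo) / (hi - lo) if hi > lo else 0.5
    return tally.index(max(tally))

for m in (plurality, score):
    print(m.__name__, round(bayesian_regret(m), 3))
```

Even this crude sketch shows the qualitative pattern Smith found: score voting's regret comes out well below plurality's. The real simulations vary the utility model and add strategic voters, which is where the interesting caveats arise.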

Summary of above: Rated systems, such as approval voting, score voting, and Majority Approval Voting, can avoid the problems of Arrow's theorem. Though they are certainly not immune to issues of strategic voting, they are a clear step up from plurality. Starting with this section, the opinions are my own; the two prior sections were based on general expert views on the topic.

5. Delegation and SODA

Rated systems are not the only way to try to beat the problems of Arrow and Gibbard (/Satterthwaite).

Summary of above:

6. Criteria and pathologies


Summary of above:

7. Representation, proportionality, and sortition


Summary of above:

8. What I'm doing about it and what you can do


Summary of above:

9. Conclusions and future directions


Summary of above:

10. Appendix: voting systems table

Compliance of selected systems (table)

The following table shows which of the above criteria are met by several single-winner systems. Note: contains some errors; I'll carefully vet this when I'm finished with the writing. Still generally reliable though.

Criteria headers (partially recovered): Majority | Condorcet | … | Equal rankings | …

Approval[nb 1] | Ambiguous | No / strategic yes[nb 2] | No | Yes | Yes[nb 2] | Yes | Ambiguous | Ambig.[nb 3] | Yes | O(N) | Yes | No[nb 4] | Yes
Borda count | No | No | Yes | Yes | Yes | Yes | No | No (teaming) | Yes | O(N) | No | Yes | No | No
Copeland | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | No (crowding) | Yes/No | O(N²) | Yes | Yes | No | No
IRV (AV) | Yes | No | Yes | No | No | No | No | Yes | Yes | O(N!)[nb 5] | No | Yes | Yes | No
Kemeny-Young | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | No (teaming) | No/Yes | O(N²)[nb 6] | Yes | Yes | No | No
Majority Judgment[nb 7] | Yes[nb 8] | No / strategic yes[nb 2] | No[nb 9] | Yes | No[nb 10] | No[nb 11] | Yes | Yes | Yes | O(N)[nb 12] | Yes | Yes | No[nb 13] | Yes | Yes
Minimax | Yes/No | Yes[nb 14] | No | Yes | No | No | No | No (spoilers) | Yes | O(N²) | Some variants | Yes | No[nb 14] | No
Plurality | Yes/No | No | No | Yes | Yes | No | No | No (spoilers) | Yes | O(N) | No | No[nb 4] | No
Range voting[nb 1] | No | No / strategic yes[nb 2] | No | Yes | Yes[nb 2] | Yes | Yes[nb 15] | Ambig.[nb 3] | Yes | O(N) | Yes | Yes | No | Yes
Ranked pairs | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | Yes | Yes | O(N²) | Yes | Yes | No | No
Runoff voting | Yes/No | No | Yes | No | No | No | No | No (spoilers) | Yes | O(N)[nb 16] | No | No[nb 17] | Yes[nb 18] | No
Schulze | Yes | Yes | Yes | Yes | No | Yes | No (but ISDA) | Yes | Yes | O(N²) | Yes | Yes | No | No
SODA voting[nb 19] | Yes | Strategic yes / yes | Yes | Ambiguous[nb 20] | Yes / up to 4 cand.[nb 21] | Yes[nb 22] | Up to 4 candidates[nb 21] | Up to 4 cand. (then crowds)[nb 21] | Yes[nb 23] | O(N) | Yes | Limited[nb 24] | Yes | Yes
Random winner / arbitrary winner[nb 25] | No | No | No | NA | No | Yes | Yes | NA | Yes/No | O(1) | No | No | | Yes
Random ballot[nb 26] | No | No | No | Yes | Yes | Yes | Yes | Yes | Yes/No | O(N) | No | No | | Yes

"Yes/No", in a column which covers two related criteria, signifies that the given system passes the first criterion and not the second one.

  1. These criteria assume that all voters vote their true preference order. This is problematic for Approval and Range, where various votes are consistent with the same order. See approval voting for compliance under various voter models.
  2. In Approval, Range, and Majority Judgment, if all voters have perfect information about each other's true preferences and use rational strategy, any Majority Condorcet or Majority winner will be strategically forced – that is, win in the unique Strong Nash equilibrium. In particular, if every voter knows that "A or B are the two most likely to win" and places their "approval threshold" between the two, then the Condorcet winner, if one exists and is in the set {A,B}, will always win. These systems also satisfy the majority criterion in the weaker sense that any majority can force their candidate to win, if it so desires. (However, as the Condorcet criterion is incompatible with the participation criterion and the consistency criterion, these systems cannot satisfy these criteria in this Nash-equilibrium sense. Laslier, J.-F. (2006), "Strategic approval voting in a large electorate," IDEP Working Papers No. 405 (Marseille, France: Institut D'Economie Publique).)
  3. The original independence of clones criterion applied only to ranked voting methods. (T. Nicolaus Tideman, "Independence of clones as a criterion for voting rules", Social Choice and Welfare Vol. 4, No. 3 (1987), pp. 185–206.) There is some disagreement about how to extend it to unranked methods, and this disagreement affects whether approval and range voting are considered independent of clones. If the definition of "clones" is that "every voter scores them within ±ε in the limit ε→0+", then range voting is immune to clones.
  4. Approval and Plurality do not allow later preferences. Technically speaking, this means that they pass the technical definition of the LNH criteria – if later preferences or ratings are impossible, then such preferences can not help or harm. However, from the perspective of a voter, these systems do not pass these criteria. Approval, in particular, encourages the voter to give the same ballot rating to a candidate who, in another voting system, would get a later rating or ranking. Thus, for approval, the practically meaningful criterion would be not "later-no-harm" but "same-no-harm" – something neither approval nor any other system satisfies.
  5. The number of piles that can be summed from various precincts is floor((e-1) N!) - 1.
  6. Each prospective Kemeny-Young ordering has score equal to the sum of the pairwise entries that agree with it, and so the best ordering can be found using the pairwise matrix.
  7. Bucklin voting, with skipped and equal rankings allowed, meets the same criteria as Majority Judgment; in fact, Majority Judgment may be considered a form of Bucklin voting. Without allowing equal rankings, Bucklin's criteria compliance is worse; in particular, it fails Independence of Irrelevant Alternatives, which for a ranked method like this variant is incompatible with the Majority Criterion.
  8. Majority Judgment passes the rated majority criterion (a candidate rated solo-top by a majority must win). It does not pass the ranked majority criterion, which is incompatible with Independence of Irrelevant Alternatives.
  9. Majority Judgment passes the "majority Condorcet loser" criterion; that is, a candidate who loses to all others by a majority cannot win. However, if some of the losses are not by a majority (including equal rankings), the Condorcet loser can, theoretically, win in MJ, although such scenarios are rare.
  10. Balinski and Laraki, Majority Judgment's inventors, point out that it meets a weaker criterion they call "grade consistency": if two electorates give the same rating for a candidate, then so will the combined electorate. Majority Judgment explicitly requires that ratings be expressed in a "common language", that is, that each rating have an absolute meaning. They claim that this is what makes "grade consistency" significant. Balinski, M. and R. Laraki (2007), "A theory of measuring, electing and ranking", Proceedings of the National Academy of Sciences USA, vol. 104, no. 21, 8720–8725.
  11. Majority Judgment can actually pass or fail reversal symmetry depending on the rounding method used to find the median when there are even numbers of voters. For instance, in a two-candidate, two-voter race, if the ratings are converted to numbers and the two central ratings are averaged, then MJ meets reversal symmetry; but if the lower one is taken, it does not, because a candidate with ["fair","fair"] would beat a candidate with ["good","poor"] with or without reversal. However, for rounding methods which do not meet reversal symmetry, the chances of breaking it are on the order of the inverse of the number of voters; this is comparable with the probability of an exact tie in a two-candidate race, and when there's a tie, any method can break reversal symmetry.
  12. Majority Judgment is summable at order KN, where K, the number of ranking categories, is set beforehand.
  13. Majority Judgment meets a related, weaker criterion: ranking an additional candidate below the median grade (rather than your own grade) of your favorite candidate cannot harm your favorite.
  14. A variant of Minimax that counts only pairwise opposition, not opposition minus support, fails the Condorcet criterion and meets later-no-harm.
  15. Range satisfies the mathematical definition of IIA, that is, if each voter scores each candidate independently of which other candidates are in the race. However, since a given range score has no agreed-upon meaning, it is thought that most voters would either "normalize" or exaggerate their vote such that they vote at least one candidate each at the top and bottom possible ratings. In this case, Range would not be independent of irrelevant alternatives. Balinski, M. and R. Laraki (2007), "A theory of measuring, electing and ranking", Proceedings of the National Academy of Sciences USA, vol. 104, no. 21, 8720–8725.
  16. Once for each round.
  17. Later preferences are only possible between the two candidates who make it to the second round.
  18. That is, second-round votes cannot harm candidates already eliminated.
  19. Unless otherwise noted, for SODA's compliances:
    • Delegated votes are considered to be equivalent to voting the candidate's predeclared preferences.
    • Only ballots are considered (in other words, voters are assumed not to have preferences that cannot be expressed by a delegated or approval vote).
    • Since at the time of assigning approvals on delegated votes there is always enough information to find an optimum strategy, candidates are assumed to use such a strategy.
  20. For up to 4 candidates, SODA is monotonic. For more than 4 candidates, it is monotonic for adding an approval, for changing from an approval to a delegation ballot, and for changes in a candidate's preferences. However, if changes in a voter's preferences are executed as changes from a delegation to an approval ballot, such changes are not necessarily monotonic with more than 4 candidates.
  21. For up to 4 candidates, SODA meets the Participation, IIA, and Cloneproof criteria. It can fail these criteria in certain rare cases with more than 4 candidates. This is considered here as a qualified success for the Consistency and Participation criteria, which do not intrinsically have to do with numerous candidates, and as a qualified failure for the IIA and Cloneproof criteria, which do.
  22. SODA voting passes reversal symmetry for all scenarios that are reversible under SODA; that is, if each delegated ballot has a unique last choice. In other situations, it is not clear what it would mean to reverse the ballots, but there is always some possible interpretation under which SODA would pass the criterion.
  23. SODA voting is always polytime computable. There are some cases where the optimal strategy for a candidate assigning delegated votes may not be polytime computable; however, such cases are entirely implausible for a real-world election.
  24. Later preferences are only possible through delegation, that is, if they agree with the predeclared preferences of the favorite.
  25. Random winner: a uniformly randomly chosen candidate is the winner. Arbitrary winner: some external entity, not a voter, chooses the winner. These systems are not, properly speaking, voting systems at all, but are included to show that even a horrible system can still pass some of the criteria.
  26. Random ballot: a uniformly randomly chosen ballot determines the winner. This and closely related systems are of mathematical interest because they are the only possible systems which are truly strategy-free; that is, your best vote will never depend on anything about the other voters. They also satisfy both consistency and IIA, which is impossible for a deterministic ranked system. However, this system is not generally considered as a serious proposal for a practical method.
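The summability claim in note 6 (the pairwise matrix suffices to find the best Kemeny-Young ordering) can be illustrated with a brute-force sketch over all N! orderings; the tallies below are made up for illustration, and this approach is only practical for small candidate counts:

```python
from itertools import permutations

def kemeny_order(pairwise, candidates):
    """pairwise[a][b] = number of voters preferring a to b.
    Returns the ordering maximizing the sum of agreeing entries."""
    def kemeny_score(order):
        return sum(pairwise[a][b]
                   for i, a in enumerate(order)
                   for b in order[i + 1:])
    return max(permutations(candidates), key=kemeny_score)

# Hypothetical pairwise tallies for three candidates.
pairwise = {
    "A": {"B": 6, "C": 4},
    "B": {"A": 3, "C": 7},
    "C": {"A": 5, "B": 2},
}
print(kemeny_order(pairwise, ["A", "B", "C"]))  # → ('A', 'B', 'C')
```

The point of the note is that precincts need only transmit this O(N²) matrix, even though finding the optimal ordering itself is a hard search problem for large N.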

11. Footnotes

¹ When I call my introduction "overblown", I mean that I reserve the right to make broad generalizations there, without getting distracted by caveats. If you don't like this style, feel free to skip to section 2.


² Of course, the original "politics is a mind killer" sequence was perfectly clear about this: "Politics is an important domain to which we should individually apply our rationality—but it's a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational." The focus here is on the first part of that quote, because I think Less Wrong as a whole has moved too far in the direction of avoiding politics as not a domain for rationalists.


³ Bayes developed his theorem decades before Condorcet's Essai, but Condorcet probably didn't know of it, as it wasn't popularized by Laplace until about 30 years later, after Condorcet was dead.


⁴ Yes, this happens to be the same Alan Gibbard from the previous paragraph.


⁵ Confusingly, "public choice" refers to a school of thought, while "social choice" is the name for the broader domain of study. Stop reading this footnote now if you don't want to hear mind-killing partisan identification. "Public choice" theorists are generally seen as politically conservative in the solutions they suggest. It seems to me that the broader "social choice" has avoided taking on a partisan connotation in this sense.


⁶ Score voting is also called "range voting" by some. It is not a particularly new idea — for instance, the "loudest cheer wins" rule of ancient Sparta, and even aspects of honeybees' process for choosing new hives, can be seen as score voting — but it was first analyzed theoretically around 2000. Approval voting, which can be seen as a form of score voting where the scores are restricted to 0 and 1, had entered theory only about two decades earlier, though it too has a history of practical use back to antiquity.


⁷ OK, fine, this is a simplification. As a voter, you have imperfect information about the true level of support and propensity to vote in the superpopulation of eligible voters, so in reality the chances of a decisive tie between candidates other than your two expected frontrunners are non-zero. Still, in most cases, they're utterly negligible.


⁸ This article will focus more on the literature on multi-player strategic voting (competing boundedly-instrumentally-rational agents) than on multi-player Aumann (cooperating boundedly-epistemically-rational agents). If you're interested in the latter, here are some starting points: Scott Aaronson's work is, as far as I know, the state of the art on 2-player Aumann, but its framework assumes that the players have a sophisticated ability to empathize and reason about each other's internal knowledge, and the problems with this that Aaronson plausibly handwaves away in the 2-player case are probably less tractable in the multi-player one. Dalkiran et al. deal with an Aumann-like problem over a social network; they find that attempts to "jump ahead" to a final consensus value instead of simply dumbly approaching it asymptotically can lead to failure to converge. And Kanoria et al. have perhaps the most interesting result from the perspective of this article; they use the convergence of agents using a naive voting-based algorithm to give a nice upper bound on the difficulty of full Bayesian reasoning itself. None of these papers explicitly considers the problem of coming to consensus on more than one logically-related question at once, though Aaronson's work at least would clearly be easy to extend in that direction, and I think such extensions would be unsurprisingly Bayesian.

Less Wrong’s political bias

-6 Sophronius 25 October 2013 04:38PM

(Disclaimer: This post refers to a certain political party as being somewhat crazy, which got some people upset, so sorry about that. That is not what this post is *about*, however. The article is instead about Less Wrong's social norms against pointing certain things out. I have edited it a bit to try and make it less provocative.)


A well-known post around these parts is Yudkowsky's “politics is the mind killer”. This article proffers an important point: people tend to go funny in the head when discussing politics, as politics is largely about signalling tribal affiliation. The conclusion drawn from this by the Less Wrong crowd seems simple: don't discuss political issues, or at least keep it as fair and balanced as possible when you do. However, I feel that there is a very real downside to treating political issues in this way, which I shall try to explain here. Since this post is (indirectly) about politics, I will try to bring this up as gently as possible so as to avoid mind-kill. As a result this post is a bit lengthier than I would like it to be, so I apologize for that in advance.

I find that a good way to examine the value of a policy is to ask in which of all possible worlds this policy would work, and in which worlds it would not. So let's start by imagining a perfectly convenient world: in a universe whose politics are entirely reasonable and fair, people start political parties to represent certain interests and preferences. For example, you might have the kitten party for people who like kittens, and the puppy party for people who favour puppies. In this world Less Wrong's unofficial policy is entirely reasonable: there is no sense in discussing politics, since politics is only about personal preferences, and any discussion of this can only lead to a “Yay kittens, boo dogs!” emotivism contest. At best you can do a poll now and again to see what people currently favour.

Now let’s imagine a less reasonable world, where things don’t have to happen for good reasons and the universe doesn’t give a crap about what’s fair. In this unreasonable world, you can get a “Thrives through Bribes” party or an “Appeal to emotions” party or a “Do stupid things for stupid reasons” party as well as more reasonable parties that actually try to be about something. In this world it makes no sense to pretend that all parties are equal, because there is really no reason to believe that they are.

As you might have guessed, I believe that we live in the second world. As a result, I do not believe that all parties are equally valid/crazy/corrupt, and as such I like to be able to identify which are the most crazy/corrupt/stupid. Now I happen to be fairly happy with the political system where I live. We have a good number of more-or-less reasonable parties here, and only one major crazy party that gives me the creeps. The advantage of this is that whenever I am in a room with intelligent people, I can safely say something like “That crazy racist party sure is crazy and racist”, and everyone will go “Yup, they sure are, now do you want to talk about something of substance?” This seems to me the only reasonable reply.

The problem is that Less Wrong seems primarily US-based, and in the US… things do not go like this. In the US, it seems to me that there are only two significant parties, one of which is flawed and which I do not agree with on many points, while the other is, well… can I just say that some of the things they profess do not so much sound wrong as they sound crazy? And yet, it seems to me that everyone here is being very careful to not point this out, because doing so would necessarily be favouring one party over the other, and why, that’s politics! That’s not what we do here on Less Wrong!

And from what I can tell, based on the discussion I have seen so far and participated in on Less Wrong, this introduces a major bias. Pick any major issue of contention, and chances are that the two major parties will tend to have opposing views on the subject. And naturally, the saner party of the two tends to hold the more reasonable view, because they are less crazy. But you can't defend the more reasonable point of view now, because then you're defending the less-crazy party, and that's politics. Instead, you can get free karma just by saying something trite like “well, both sides have important points on the matter” or “both parties have their own flaws” or “politics in general are messed up”, because that just sounds so reasonable and fair, and who doesn't like things to be reasonable and fair? But I don't think we live in a reasonable and fair world.

It's hard to prove the existence of such a bias, so this is mostly just an impression I have. But I can give a couple of points in support of this impression. Firstly there are the frequent accusations of groupthink towards Less Wrong, which I am increasingly though reluctantly prone to agree with. I can't help but notice that posts which remark on, for example, *retracted* being a thing tend to get quite a few downvotes, while posts that take care to express the nuance of the issue get massive upvotes regardless of whether there really are two sides to the issue. Then there are the community poll results, which show that for example 30% of Less Wrongers favour a particular political allegiance even though only 1% of voters vote for the most closely corresponding party. I sincerely doubt that this skewed representation is the result of honest and reasonable discussion on Less Wrong that has convinced members to follow what is otherwise a minority view, since I have never seen any such discussion. So without necessarily criticizing the position itself, I have to wonder what causes this skewed representation. I fear that this “let's not criticize political views” stance is causing Less Wrong to shift towards holding more and more eccentric views, since a lack of criticism can be taken as tacit approval. What especially worries me is that giving the impression that all sides are equal automatically lends credibility to the craziest viewpoint, as proponents of that side can now say that sceptics take their views seriously, which benefits them the most. This seems to me literally the worst possible outcome of any politics debate.

I find that the same rule holds for politics as for life in general: You can try to win or you can give up and lose by default, but you can’t choose not to play.

Dark Arts 101: Winning via destruction and dualism

-13 PhilGoetz 21 September 2013 01:53AM

Recalling first that life is a zero-sum game, it is immediately obvious that the quickest and easiest path to success is not to accomplish things yourself—that's a game for heroes and other suckers—but to tear down the accomplishments and reputations of others. Destruction is easy. The difficulty lies in constructing a situation so that that destruction is to your net benefit.

continue reading »

Another way our brains betray us

3 polymathwannabe 17 September 2013 01:56PM

This appeared in the news yesterday.


It turns out that in the public realm, a lack of information isn’t the real problem. The hurdle is how our minds work, no matter how smart we think we are. We want to believe we’re rational, but reason turns out to be the ex post facto way we rationalize what our emotions already want to believe.


The bleakest finding was that the more advanced that people’s math skills were, the more likely it was that their political views, whether liberal or conservative, made them less able to solve the math problem. [...] what these studies of how our minds work suggest is that the political judgments we’ve already made are impervious to facts that contradict us.


Denial is business-as-usual for our brains. More and better facts don’t turn low-information voters into well-equipped citizens. It just makes them more committed to their misperceptions.


When there’s a conflict between partisan beliefs and plain evidence, it’s the beliefs that win. The power of emotion over reason isn’t a bug in our human operating systems, it’s a feature.

Consider the Most Important Facts

-9 CarlJ 22 July 2013 08:39PM

Followup to: Choose that which is most important to you

When you have written down what your own fundamental political values are, the next step is to get an understanding of all possible societies so you can see which one is best. And by best I mean that society which comes closest to meeting your criteria of what you find most valuable.

So, to construct a model for thinking about this issue, two things are needed. First, a list of all possible societies. And then some lists of those facts which would seem to rule out the largest number of possible societies as not being best, thereby closing in on the best society. The important point of this post concerns the second list, but I still have a little discussion of the scope of the first list. If it seems obvious to, more or less, look at variants of economic systems, you can skip the next section and go straight to Facts which rule out and point toward certain societies.

A list of all possible societies – How long and exhaustive should it be?
I don't know if anyone has made such an exhaustive list. One might be constructed if one takes the list of economic systems (which regards laws, institutions, and how they are produced, and some culture) from Wikipedia and imagines that each of those systems may vary somewhat by different cultural norms. Not all cultural norms are compatible with every economic system (objectivist virtue ethics with central planning, say), but every system would seem to allow some variation. This means 54 broad economic systems with, let's just say, ten broad cultural variations of each. So there are approximately 500 types of societies that people discuss today to take into account.

There's an obvious limitation to all this, which is that every type of system may vary in millions of ways regarding particular laws. The Nordic model, for example, has changed a lot during the last 25 years. And if you take each law and consider a society of this type to be able to switch that law on or off, there are, from that period alone, enough changeable laws that the total number of combinations exceeds five million. Many of the laws are interdependent on one another, but there's still room for enormous configuration in "constructing" different societies.
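The arithmetic behind these counts is just independent on/off switches and simple multiplication; the figures below are illustrative, not a census of actual legislation:

```python
# Each independently toggleable law doubles the number of configurations,
# so 23 laws already exceed five million combinations.
laws = 23
print(2 ** laws)  # → 8388608

# And the ~500 broad society types from the text:
# 54 economic systems times roughly ten cultural variations each.
print(54 * 10)  # → 540
```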

So, maybe there are around a billion to a trillion possible societies. Now, it seems clear that it is wrong to start discussing which of two quite similar possible societies is better than the other – even if each society can have one million variations.1 That is because each is highly unlikely to be the best society.

If we can make one assumption, this will be much easier. And that is that societies which we today would consider to be more similar than others would produce more or less the same results relative to other societies. There are some areas where every society would change drastically with just a small change, since a change in such an area would lead to drastic change in the rest of the society. These areas are of great importance when we come to changing systems, but for now I assume they are too few in number to be of any importance.

With this assumption we can return to looking at broad systems, because if societies of one category seem worse than other societies, we do not need to look more closely at that sort of society. If one type of mercantilistic society looks bad compared to a free-trade economy, any other type of the former is not worth looking at again.

Again, societies have these fundamental attributes: (i) some general rules regarding how their laws are structured, (ii) some definitive rules on how these rules should be changed, and (iii) cultural norms. This model is still somewhat limiting, however. It seems to assume that a society can have only one law, and so on. But that problem disappears if we assume these can be different for different times, places, and people. In all, this means we're back to some 500 possible societies.

Facts which rule out and point toward certain societies
Before considering any facts that have an impact on how you view a society, all societies should appear equally likely to be the best. This starting point may seem strange to some. It means that one should not dismiss even the policies of Nazi Germany out of hand. That is just the starting point, however. After one accumulates more and more data, some societies will appear less and less likely to be the one that best fulfills your criteria.

But, since you don't have time to read everything, it is necessary to construct a model of how humans (and other beings, for post-singularity issues2) function and interact that at first considers only the most important facts. This could be done in several ways.

One could begin by just following normal science and ask what general facts can explain most of observed behavior and then see what those facts would predict about all societies. That seems wise to do, in and of itself, because it forces the discussion (which will ensue with others who follow the same method) to be very methodical and well grounded in a rich theory. This can be called the general method.

But this path is not the quickest, since these general facts would probably not damn enough societies as unsuitable to your goals. A much faster way, though it will paint a sketchier picture, is to just list those facts which rule out the most societies. This is quicker since it goes straight to the chase. These facts may be found by considering what assumptions certain systems rely on to work adequately and trying to figure out what facts disprove most of these assumptions. This can be called the specific method.

Then there are statements which you are uncertain about but if they were true, it would become really obvious what society is best. So, not facts actually, but those ideas which you believe are worth learning more about. These potential facts should be the ones you are pondering or those which are the root cause of many debates among those with similar goals. This can be called the search method.

Here's an illustration of all three methods. Except for the last illustration, I write my own views, but these are not my own most important facts but the 11th to 20th.

The general method:

  1. People tend to conform to popular opinion.
  2. Societies become wealthier with extended markets, more savings, gaining better knowledge, producing more advanced technology, peace, and institutions which support these activities.
  3. Man is not a perfectly rational creature, but has the ability to correct his mistakes.
  4. To wield power over others, one generally needs superior military strength.
  5. Most people fear being ostracised.
  6. Ideologies are usually formed by the social structure, and the social structure can be changed by those ideologies.
  7. People tend to enjoy the company of those who they are similar to.
  8. On markets with freedom of entry, prices for reproducible goods tends to be as low as their cost of production.
  9. Producers who don't sell what the customers want tend to receive lower earnings.
  10. Most people are adept at spotting others' mistakes, but do quite poorly at noticing their own.

The specific method:

  1. All or almost all states today have tariffs to protect a certain industry or firm from competition.
  2. Generally, to know for sure if one possible society is better than another, one must be able to discuss their respective merits and demerits.
  3. The leaders of large governments tend to have less incentive to produce collective goods, rather than private goods, relative to leaders of smaller states.
  4. Most people today in democratic states give in to pressure to support policies which they are unable to know if they actually are for their own good or not.
  5. Children can be indoctrinated to glorify mass-murderers and to want to join them as soldiers, asking nothing about the justice of their cause.
  6. People are disposed to believe that the society they grow up in is good.
  7. Most people are conservative; they dislike change.
  8. All centrally planned economies perform less well than market based economic systems.
  9. Firms tend to invest money in rent-seeking, when it is profitable, until the expected return is similar to that of normal investments.
  10. Generally, it's difficult for new facts to overturn one's ideology without a contrasting ideology and it is difficult to come up with a new one by oneself.

The search method:

  1. Political system X will best achieve my goals.
  2. Political system X leads to the best incentives for everyone to produce the most important collective goods.

Now, these facts are not simply facts. They are the tip of a theoretical iceberg; they are interpretations of reality. As such they will not by themselves make explicit which systems they rule out. To oneself their meaning should be clear, but if one discusses them with others it might be necessary to write down the points and their theoretical point of view explicitly.

In any case, if you've followed my steps you should have one candidate which seems to be best. This step might, of course, take years, but if you're confident you should next estimate how much a political action towards these societies might cost.

[1] It might seem that I imply that this is what most people do today when they discuss politics – which, by its nature, is usually limited to tweaking the existing system one small way here and there, instead of looking at larger changes to be made. That implication is tempting to make, but most people seem to be more engaged in an ideological debate. I'd guess, anyway – I do not know for sure.

[2] They are too hard to predict so I'll skip discussing them.

Choose that which is most important to you

-4 CarlJ 21 July 2013 10:38PM

Followup to: The Domain of Politics

To create your own political world view you need to know about societies and your own political goals/values. In this post I'll discuss the latter, and in the next post the former.

What sort of goals? Those which you wish to achieve for their own sake, and not because they are simply a means to an end. That is, those goals you value intrinsically. Or, if you believe that there exists only one ultimate goal or value, then think of those means which are not that far removed from being an intrinsic goal. A birthday party might be of merely instrumental value, but most would agree that it is further removed from the intrinsic value than, say, good tires. For the rest of the post I will assume that most people value a lot of things intrinsically, and by values I will denote intrinsic values.

So, I'd like to draw a line between values and that which achieves those values. The latter is what we're trying to figure out, without proposing in advance what it is: political systems, or parts of them – institutions and laws. This is not to say that these things cannot be valued for their own sake – I may put value on a system, possibly for aesthetic reasons – but those values should be disentangled from the other benefits a system produces.

With that in mind, you should now list all the things you value, in rank order. Ranking them is necessary since we live in a world of scarce resources: you won't necessarily achieve all your goals, but you will want to achieve those that are most important to you.

Now, what one values may change over time, so naturally what seems most important may also change. That which was in place #7 may go to #1 and vice versa. That is, values change with new information and with a change in one's condition. That said, one's political values probably don't shift all that much. And even if they do, if you can't predict how they will change, you still need them to be able to know what political system is good for you.

There are many ways to get a feel of what your most highly valued political values are. Introspection, discussing with friends, think through a number of thought experiments, read the literature on what makes most people happy, listen to what experiences have been most horrible or pleasurable to others, etc.. In any case, here's a thought experiment to help with finding your ideological preferences, should you need it:

A genie appears and it says that it will make ten wishes come true and then it will be gone forever. As this genie will make more than three wishes come true it has an added restriction: all wishes need to be political in nature. By luck you get to make the wishes – what do you wish for?

The important thing to remember is that, if you should lose one wish, you will be less sorry to give up your tenth wish than any other. And less sorry to give up the ninth wish than the eighth if you lost two wishes, and so on.

To make it clearer what I mean I'll write down some of the things I value. Not my most preferred goals, but those on 11th to 20th place:

  1. Those who have trouble excelling in life should receive whatever help can be given so they may become better.
  2. If someone comes up with a previously unknown idea for improving the world, and if three knowledgeable and unrelated individuals believe the idea is very good, it should only take some hours for everyone to be able to know that this matter is of importance.
  3. Everyone should have access to some means of totally private communication.
  4. There should be no infringement on the right to develop one's mind, whatever technology one uses.
  5. All animals should, if the technology ever becomes available, be sufficiently mentally enhanced to be given the choice of whether or not to become as intelligent as humans (or more so).
  6. If it ever seems likely to be possible, we should strive towards creating a technology to resurrect the dead sooner rather than later.
  7. The civilization should be able to co-exist with other peaceful civilizations.
  8. There shouldn't be any ultimate certainty on the nature of existence or in any one reality tunnel; some balkanization of epistemology is good.
  9. Everyone who shares these values should know or learn the art of creating sustainable groups for collective action.
  10. The civilization which embodies these values should continue indefinitely.

EDIT: DanielLC notes that this simple ranking wouldn't give you any information on how valuable a 90% completion of one goal is relative to a 95% completion of another goal. That information will however be important when you have to choose between incremental steps towards several different goals.

To create a ranking which displays that information, imagine that each goal you have written down can be at one of five stages of completion - 0%, 25%, 50%, 75%, 100% - so that it is possible to be 75% or 0% of the way to achieving any particular goal. So, for instance, the goal of having private communication for everyone might be 50% completed if half the population have access to secret communication channels, but the other half doesn't.

Next, assume your one wish (in the scenario) is divided into five parts, one for each stage. And then rank every wish again following the same rule. This will look something like this:

  1. 100% of my first goal.
  2. 100% of my second goal.
  3. 100% of my third goal.
  4. 100% of my fourth goal.
  5. 75% of my first goal.
  6. 100% of my fifth goal.
  7. 50% of my first goal.
  8. 75% of my second goal.

(This was made purely for illustrative purposes. I haven't thought the matter through completely on how much I value these incremental parts.)

Another option is to do these more fine-tuned rankings on a gut level: just having an imprecise feeling that, at some point, getting closer to goal A stops being as important as getting closer to B. This should be appropriate for those areas where your uncertainty about your preferences is high, or where you don't care that much about which goal gets satisfied.
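The staged ranking described above amounts to a simple ordering over (goal, completion-stage) pairs. As a minimal sketch – with hypothetical goal names and purely illustrative value numbers, not anyone's actual preferences – one could write it down like this:

```python
# A sketch of the staged ranking: each goal is split into completion
# stages, each (goal, stage) pair is assigned a value, and the pairs
# are sorted so the most valued increments come first. All names and
# numbers here are hypothetical, for illustration only.

# Value of reaching a given completion fraction of each goal.
goal_values = {
    "private communication": {0.25: 2, 0.50: 5, 0.75: 8, 1.00: 10},
    "cognitive liberty":     {0.25: 1, 0.50: 3, 0.75: 6, 1.00: 9},
}

# Flatten into (value, goal, stage) triples and sort by descending value.
ranked = sorted(
    ((value, goal, stage)
     for goal, stages in goal_values.items()
     for stage, value in stages.items()),
    reverse=True,
)

for value, goal, stage in ranked:
    print(f"{int(stage * 100):>3}% of {goal!r} (value {value})")
```

The resulting list interleaves the goals just as in the example above: full completion of the most valued goal comes first, but a 75% increment of one goal may outrank full completion of another, which is exactly the information a flat ranking of whole goals throws away.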

Next post: "Consider the Most Important Facts"

The Domain of Politics

0 CarlJ 21 July 2013 06:30PM

Followup to: How To Construct a Political Ideology

Related to: What Do We Mean By "Rationality"?

Politics is the art of the possible.

The word 'politics' is derived from the Greek word 'poly', meaning many, and the English word 'ticks', meaning blood sucking parasites.

Politics can be inspiring; there have been several groups that have organized to achieve wonderful ends now and in the past. Such as ending slavery, the subjugation of women, and the censorship of ideas. (None of these have, however, been brought to their full completion yet.)

Politics can also be irritating. As when some politician or bureaucrat wastes money or lies in a particularly annoying way, or when the supporters of that politician or that bureau talk about the wonders of politics while ignoring all its bad parts. (Politics can also be horrible and devastating.)

Predictably, some of us who find politics today to be more irritating than inspiring will define politics somewhat differently. For some, politics is "a relic of a barbaric past" because politics always entails the threat of violence, and if we should ever find ourselves in a better state of affairs, politics will have had nothing to do with it. But many others would contend that wherever there's civic life there's politics – for some that's true even in a stateless society.

So, there's a little disagreement on the definition of politics. For my part, I will use the latter definition, which contends that politics deals with certain areas of life – regarding civic life, elections, war, fund-raising for a cause, influencing cultural norms, establishing alliances and so on. This is almost the same as the definition used by Wiktionary, but it seems to have a broader focus than the one used by Wikipedia. The goal of political action can then be said to be to act rationally in this domain, just as one would act rationally in any other domain.

That definition isn't too detailed, so let me try and give a fuller definition. I will do that by introducing a hypothetical scenario which explores some fundamental political strategies:

You live in a village by a river, and you are interested in building a bridge across it. But a fisherman also lives in the village, and if you built the bridge it would make it difficult for him to fish during that time. No one else will be directly affected by this project. You bring up the issue with the fisherman and ask what he thinks about all this.

The fisherman could then have two basic attitudes towards your project: either it is a concern for him or it isn't. If it is the latter, then you are not in any conflict, but have a (weak) harmonious relationship. All that remains is for you to build the bridge, which I'll discuss later.

First, let's assume that the fisherman opposes your plans. Let us assume that he is willing to physically prevent you from building the bridge. What can you do then, given that you still want to construct the bridge? It seems only these six general strategies are available:

Persuasion  – You can try to convince the fisherman that it is in his best interest that the bridge be built, or that the construction will not disturb him so much as he believes. That is, convince him that the project will not become problematic for him.

Deceit – You can try to convince the fisherman that the construction won't be problematic, while lying.

Trade - You take his stated preferences, true or not, as given and you offer him something in return for letting you build the bridge.

Threat – You offer to give him something/do something to him which he does not want, if he doesn't let you build the bridge.

Bypass – You ignore the fisherman and try to build the bridge without him knowing about it.

Force – You can try to physically stop him from preventing you to build the bridge. As in, hitting him on the head, poisoning him or locking him up. 

(There might be other strategies I've missed, but for now it's not necessary to know all fundamental strategies.)

Suppose now that the fisherman doesn't mind at all that you build the bridge. Well then, what happens  now?

Well, either you want the help of others in doing this or not. If not, there's no more politics. If you do want the help of others, and they are willing to help you, then everything is also settled. But if they do not want to help you right away, then you can use persuasion, deceit, trade, threats and force. Bypassing is not an option here, since that would be pointless.

Each option entails costs, and they could all have too high a cost, so that there's no point in going forward with anything. In that case, it's time to do something else. On the other hand, the cost of each mode of action might be so low that any option is advantageous. In that case the only prudent move is to choose whatever has the lowest cost – the one which lets you pursue and reach the largest number of your most highly valued ends. The point is that an option not only has costs in money and time; it can also affect any further actions in at least two ways. First, if the action should fail, some or all of the other options might become very unlikely to succeed. Second, even if the action succeeds, it might have negative effects in other, non-political circumstances, making it less likely that you achieve your goals. Thus, the costs worth pondering are the opportunity costs of an action - the loss is what you otherwise could have achieved.

It seems that every political problem can be seen through the lens of this framework – both for, loosely speaking, dealing with conflicts and for producing values. What about upholding laws that support certain property rights? Well, you can persuade or force those who disagree with the norms to accept them. You can even bargain with them. What about helping those who are addicted to drugs? Same thing: you can either get their consent or choose to force them. Everything can ultimately be seen as how you interact with others.

What does this then tell us about the goal of political action? Well, suppose you need to interact with others regarding the bridge project (either with the fisherman or someone else). You will need to perceive the effects of each path and compare them to choose whichever is most beneficial to you. After that has been solved, that should be the end of politics. But what does it mean to solve the problem? Well, what goals will be harder to reach if you choose to trick the fisherman into letting you build the bridge? That depends on a lot of circumstances, but, in most villages, I'd guess you lose any chance of being on really good terms with the fisherman, and you'd lose favour with most people in the village (if you weren't already dominant there). And what if you'd traded with others to get their help in constructing the bridge? You'd probably only have lost the money.

Now, maybe this doesn't feel like that hard a problem. But let's suppose that you will face one thousand such scenarios in your life, every one of which is intertwined with the others. That is, you will want to build a bridge, but you may also want to be friends with friends of the fisherman, be on good terms with everyone in the village, be secure in your property rights, help fund the building of a local town hall, change the current law on building restrictions, support the abolishment of the Bakers' guild, do a whole lot of ordinary things, and so on. Now your choice in one area will have to fit with every other area. Or, at least, those you care about the most.

All of this calls for you to create a meta-strategy: a grand plan, so that all those small plans are compatible with each other and will produce the most benefit to you. How to make that plan and follow it through is the essence of political choice; it's an essential part of your goal in politics.

To know what plan to choose you need to know two things: (1) what your political values/goals are and (2) what sort of political system (society) would be best in promoting your goals.

If you know everything about your preferences, but nothing about societies, then you can't support any complex system without running the risk of supporting something which is totally detrimental to your values. If you, on the other hand, know everything about how societies function but are, somehow, unable to know what you really want, then you cannot decide what society to strive towards.

The next two posts will discuss these two issues - first goals and thereafter means.

Next post: "Choose that which is most important to you"

How To Construct a Political Ideology

-2 CarlJ 21 July 2013 03:00PM

Related to: Hold Off On Proposing Solutions, Logical Rudeness

Politics is sometimes hard to discuss. Partly because most of us seem to unconsciously take political matters with the same degree of seriousness as our forefathers did, since we use the same mode of thought as they did. Back then, a bad political choice or alliance could mean death, while the normal cost today in a democratic society might be ridicule for having supported the losing team or position.

Nevertheless, politics should be taken seriously. Bad politics means that it'll take longer for us humans to reach world peace, an end to hunger and disease, and favourable conditions so that no one will create an unfriendly AI. Therefore, discussing politics is vital so that, someday, some collective actions could be performed to alter the political course for the better.

But what should that collective action be? - what should the new course(s) be? - and who should do it? - and what does "for the better" imply? To engage in politics one needs to be able to give some (implicit or explicit) answers to these questions. This can be done, and in so doing one has constructed a political ideology - which might be similar to existing ideologies or it might be different.

A political ideology might be constructed in various ways. In this and a few more posts I will propose one way of doing that. These posts might be seen as a tutorial in constructing a political ideology. In these posts I will not suggest an answer to what the best political system should be, nor will I follow my own instructions. But if one should follow these instructions I believe that one can answer the questions mentioned above.

Political ideologies might be constructed in various other ways. The one I discuss in my following posts is based on two principles: (1) that one should not propose an answer until one has thought about the question extensively, and (2) that one should consider the most important questions first.

Before writing the next post, here are the points I will discuss in each of them. I will write the posts as an instruction manual, so I'll address you, dear reader, throughout:

  • what is politics, what is the goal of engaging in politics?
  • what are your most highly valued political goals?
  • what facts (and interpretations) can explain most societal features, what facts/interpretations will damn most societies as not ideal?
  • how much does it cost to engage in political action?
  • what are the most important facts concerning political strategies?
  • some thoughts on alliances, representatives and conspiracies.
  • some thoughts on discussing politics generally.

Next post "The Domain of Politics"

How should negative externalities be handled? (Warning: politics)

-5 nigerweiss 08 May 2013 09:40PM

Politics ahead!  Read at your own risk, mind killers, etc.  Let all caveats be well and thoroughly emptored.

It seems reasonably clear to me that, from a computational perspective, functional central planning is not practically possible.  Resource allocation among many agents looks an awful lot like an exponential time problem, and the world market is quite an efficient approximation.  In the real world, markets, regulated to preclude blackmail, theft, and slavery, will tend to provide a better approximation of "correct" resource allocation between free agents than a central resource allocation algorithm could plausibly achieve without a tremendous, invasive amount of information about the desires of every market participant, and quite a lot of computing power (within a few orders of magnitude of the combined computational budget of the human species).  

It would be naive to say that we'd need exactly the computational power of the human species in order to achieve it: we can imagine how we might optimize the resource allocation scheme by quite a lot.  Populations are (at least somewhat) compressible, in that there are a number of groups of individual people who optimize for similar things, allowing you to save on simulating all of them.  Additionally, a decent chunk of human neurological and intellectual activity is not dedicated to economic optimization of any kind, which saves you some computing time there as well.  And, of course, humans are not rational, and the homunculi representing them in the optimized market simulation could be, giving them substantially more bang for their cognitive buck - we can imagine, for instance, that this market simulation would not sink billions of dollars into lotteries each year!  It may also be that the behavior of the market itself, on some level, is lawful, and a sufficiently intelligent agent could find general-case solutions that are less expensive than market simulation.    

Still, though, the amount of information and raw processing power needed to pull off central planning competitive with the market approximation seems to be out of our reach for the time being.  As a result of this, and a few other factors, my own politics tend to lean Libertarian / minarchist, and I'm aware that there is some of this sentiment in circulation on this site, though generally not explicitly.  I'm trying to refine my beliefs surrounding some of the sticky issues in Libertarian philosophy (mostly related to children and extreme policy cases), and I thought I'd ask LW what they thought about one issue in particular.  

I have been wondering whether or not there are any interventions in the economy that can have a positive expected benefit.  I honestly don't know if this is the case: put another way, the question is really asking if there are any characteristic behaviors of markets that are undesirable in some sense, and can be corrected by the application of an external law.  Furthermore, such things cannot be profitable to correct for any participant or plausibly-sized collection of participants in the market, but must be good for the market as a whole, or must be something that requires regulatory power to fix.  

An obvious example of this sort of thing is the tragedy of the commons and negative externalities.  The most pressing case study would be climate change: the science suggests, fairly firmly, that human CO2 emissions are causing long-term shifts in global climate.  How disastrous these shifts will actually be is less well settled, but there is at least a reasonable probability that it will be fairly unpleasant, in the long term.  Personally, I feel that we are likely to run into much bigger problems much sooner than the 50-200 year timescales these disasters seem to be expected on.  However, were this not the case, I find that I'm not quite sure how my ideal government, run by a few thousand much smarter and better informed copies of me, ought to respond to the issue.  I don't know what I think the ideal policy for dealing with these sorts of externalities is, and I thought I'd ask for LessWrong's thoughts on the matter.

In my own mind, I think that as light a touch as possible is probably desirable.  Law is a very blunt instrument, and crude legislation like a carbon tax could easily have its own serious negative implications (driving industry to countries that simply don't care about CO2 emissions, for example).  However, actions like subsidizing and partially deregulating nuclear power plants could help a lot by making coal-fired power plants noncompetitive.  We could also declare a policy of slowly withdrawing any government involvement in overseas oil acquisition, which would drive up the price of petroleum products and make electric cars a more appealing alternative.  However, I don't know if there would be horrifying consequences to any of these actions: this is the underlying problem - I am not as smart as the market, and guessing its moods is not something that I, or any human is going to be very good at.  However, it seems clear that some intervention is necessary in this sort of case.  Rock, hard place, you are here.  


What truths are actually taboo?

4 sunflowers 16 April 2013 11:40PM

LessWrong has been having fun lately with posts about sexism, racism, and academic openness.   And here just like everywhere else, somebody inevitably claims taboo status for any number of entirely obvious truths, e.g. "top level mathematicians and physicists are almost invariably male," "black people have lower IQ scores than white people," and "black people are statistically more criminal than whites."  In my experience, these are not actually taboo, and I think my experience is generalizable.  I'll illustrate.

You're at a bar and you meet a fellow named Bill.  Bill's a nice guy, but somehow the conversation strayed Hitler-game style to World War II.  Bill thinks the war was avoidable.  Bill thinks the Holocaust would not have happened were it not for the war, and that some of the Holocaust was a reaction to actual Jewish subterfuge and abuse.  Bill thinks that the Holocaust was not an essential, early plan of the Nazis, because it only happened after the war began.  Bill thinks that the number of casualties has been overestimated.  Bill claims that Allied abuses, e.g. the bombing of Dresden, have been glossed over and ignored, while fantastic lies about Jews being systematically turned into soap have propagated.  Bill thinks that the Holocaust has become a sort of national religion, abused by self-interested Jews and defenders of Zionist foreign policy, and that the freedom of those who doubt it is under serious attack. Bill starts listing other things he's not allowed to say. Bill doesn't think that the end of slavery was all that good for "the blacks," and that the negatives of busing and forced integration have often outweighed the positives.  Bill has personally been the victim of black-on-white crimes and racism.  Bill is a hereditarian.  Bill doesn't think that dropping an n-bomb should ruin a public career.

Here's the problem:  everything Bill has said is either true, a matter of serious debate, or otherwise a matter of high likelihood and reasonableness.  Yet you feel nervous.  Perhaps you're upset.  That's the power of taboo, right?  Society is punishing truth-telling!  First they came for the realists... Rationalists, to arms!


We can recognize that statements like these correlate with certain false beliefs and nasty sentiments of the sort that actually are taboo.  It's just like when somebody says, "well science doesn't know everything."  To this, I think, "duh, and you're probably a creationist or medical quack or something similarly credible."  Or when somebody says, "the government lies to us."  To this, I think, "obviously, and you're likely a Truther or something."  Bill is probably an anti-Semite, but Bill doesn't just say, "I'm an anti-Semite," because that really is taboo.  He might even believe that he shouldn't be considered something awful like an anti-Semite.  Bill probably doesn't think Bill so unpleasant.

That's the paradox: "taboo" statements like black crime statistics are to some extent "taboo" for sound, rationalist reasons. But "taboo" is not taboo: it's about context. People who think that such statements are taboo are probably bad at communicating, and people often think they're racists and misogynists because, on good rationalist grounds, they probably are. If you want to talk about statistics on the topic of race, be ready to understand that those who are listening will have background knowledge about the other views you might hold.

All this is the leadup to my question:  what highly probable or effectively certain truths are genuinely taboo?  I'm trying to avoid answers like "there are fewer women in mathematics" or "the size of my penis," since these are context sensitive, but not really taboo within a reasonable range of circumstances.  I'm also not particularly interested in value commitments or ideologies.  Yes, employers will punish labor organizers and radical political views can get you filtered.  But these aren't clear matters of fact.  I also don't mean sensitive topics like abortion or religion, nor do I mean "taboo within a political party."

Is there really anything true that we simply cannot say?  I have the US in mind especially, but I'm interested in other countries as well.  I'm sure there are things that deserve the label, but I've found that the most frequently given examples don't hold water.  I think hereditarianism is a close contender, but it's not an "obvious truth."  Rather, my understanding is that it is a serious position.  It's also only contextually taboo.  If it were a definitive finding, it could perhaps become taboo, though I think it more likely that it would be somewhat reluctantly accepted.

Any suggestions?  If we find some really serious examples, we might figure out a way to talk about them.

[Link] Diversity and Academic Open Mindedness

3 GLaDOS 04 April 2013 12:31PM
David Friedman writes on his blog.

I had an interesting recent conversation with a fellow academic that I think worth a blog post. It started with my commenting that I thought support for "diversity" in the sense in which the term is usually used in the academic context—having students or faculty from particular groups, in particular blacks but also, in some contexts, gays, perhaps hispanics, perhaps women—in practice anticorrelated with support for the sort of diversity, diversity of ideas, that ought to matter to a university.

I offered my standard example. Imagine that a university department has an opening and is down to two or three well qualified candidates. They learn that one of them is an articulate supporter of South African Apartheid. Does the chance of hiring him go up or down? If the university is actually committed to intellectual diversity, the chance should go up—it is, after all, a position that neither faculty nor students are likely to have been exposed to. In fact, in any university I am familiar with, it would go sharply down.

The response was that he considered himself very open minded, getting along with people across the political spectrum, but that that position was so obviously beyond the bounds of reasonable discourse that refusing to hire the candidate was the correct response.

The question I should have asked and didn't was whether he had ever been exposed to an intelligent and articulate defense of apartheid. Having spent my life in the same general environment—American academia—as he spent his, I think the odds are pretty high that he had not been. If so, he was in the position of a judge who, having heard the case for the prosecution, convicted the defendant without bothering to hear the defense. Worse still, he was not only concluding that the position was wrong—we all have limited time and energy, and so must often reach such conclusions on an inadequate basis—he was concluding it with a level of certainty so high that he was willing to rule out the possibility that the argument on the other side might be worth listening to.

An alternative question I might have put to him was whether he could make the argument for apartheid about as well as a competent defender of that system could. That, I think, is a pretty good test of whether one has an adequate basis to reject a position—if you don't know the arguments for it, you probably don't know whether those arguments are wrong, although there might be exceptions. I doubt that he could have. At least, in the case of political controversies where I have been a supporter of the less popular side, my experience is that those on the other side considerably overestimate their knowledge of the arguments they reject.

Which reminds me of something that happened to me almost fifty years ago—in 1964, when Barry Goldwater was running for President. I got into a friendly conversation with a stranger, probably set off by my wearing a Goldwater pin and his curiosity as to how someone could possibly support that position. 

We ran through a series of issues. In each case, it was clear that he had never heard the arguments I was offering in defense of Goldwater's position and had no immediate rebuttal. At the end he asked me, in a don't-want-to-offend-you tone of voice, whether I was taking all of these positions as a joke. 

I interpreted it, and still do, as the intellectual equivalent of "what is a nice girl like you doing in a place like this?" How could I be intelligent enough to make what seemed like convincing arguments for positions he knew were wrong, and yet stupid enough to believe them?

Yup. (Q_Q)

Why Politics are Important to Less Wrong...

6 OrphanWilde 21 February 2013 04:24PM

...and no, it's not because of potential political impact on its goals.  Although that's also a thing.

The Politics problem is, at its root, about forming a workable set of rules by which society can operate, which society can agree with.

The Friendliness Problem is, at its root, about forming a workable set of values which are acceptable to society.

Politics as a process (I will use "politics" to refer to the process of politics henceforth) doesn't generate values; they're strictly an input, by which the values of society are converted into rules which are intended to maximize them.  While this is true, it is value agnostic; it doesn't care what the values are, or where they come from.  Which is to say, provided you solve the Friendliness Problem, it provides a valuable input into politics.

Politics is also an intelligence.  Not in the "self aware" sense, or even in the "capable of making good judgments" sense, but in the sense of an optimization process.  We're each nodes in this alien intelligence, and we form what looks, to me, suspiciously like a neural network.

The Friendliness Problem is equally applicable to Politics as it is to any other intelligence.  Indeed, provided we can provably solve the Friendliness Problem, we should be capable of creating Friendly Politics.  Friendliness should, in principle, be equally applicable to both.  Now, there are some issues with this - politics is composed of unpredictable hardware, namely, people.  And it may be that the neural architecture is fundamentally incompatible with Friendliness.  But that is discussing the -output- of the process.  Friendliness is first an input, before it can be an output.

More, we already have various political formations, and can assess their Friendliness levels, merely in terms of the values that went -into- them.

Which is where I think politics offers a pretty strong hint that the Friendliness Problem has no resolution:

We can't agree on which political formations are more Friendly.  That's what "Politics is the Mindkiller" is all about: our inability to come to an agreement on political matters.  It's not merely a matter of the rules - which is to say, it's not a matter of the output: We can't even come to an agreement about which values should be used to form the rules.

This is why I think political discussion is valuable here, incidentally.  Less Wrong, by and large, has been avoiding the hard problem of Friendliness, by labeling its primary functional outlet in reality as a mindkiller, not to be discussed.

Either we can agree on what constitutes Friendly Politics, or not.  If we can't, I don't see much hope of arriving at a Friendliness solution more broadly.  Friendly to -whom- becomes the question, if it was ever anything else.  Which suggests a division in types of Friendliness; Strong Friendliness, which is a fully generalized set of human values, and acceptable to just about everyone; and Weak Friendliness, which isn't fully generalized, and perhaps acceptable merely to a plurality.  Weak Friendliness survives the political question.  I do not see that Strong Friendliness can.

(Exemplified: When I imagine a Friendly AI, I imagine a hands-off benefactor who permits people to do anything they wish to which won't result in harm to others.  Why, look, a libertarian/libertine dictator.  Does anybody envisage a Friendly AI which doesn't correspond more or less directly with their own political beliefs?)

[Link] False memories of fabricated political events

17 gjm 10 February 2013 10:25PM

Another one for the memory-is-really-unreliable file. Some researchers at UC Irvine (one of them is Elizabeth Loftus, whose name I've seen attached to other fake-memory studies) asked about 5000 subjects about their recollection of four political events. One of the political events never actually happened. About half the subjects said they remembered the fake event. Subjects were more likely to pseudo-remember events congruent with their political preferences (e.g., Bush or Obama doing something embarrassing).

Link to papers.ssrn.com (paper is freely downloadable).

The subjects were recruited from the readership of Slate, which unsurprisingly means they aren't a very representative sample of the US population (never mind the rest of the world). In particular, about 5% identified as conservative and about 60% as progressive.

Each real event was remembered by 90-98% of subjects. Self-identified conservatives remembered the real events a little less well. Self-identified progressives were much more likely to "remember" a fake event in which G W Bush took a vacation in Texas while Hurricane Katrina was devastating New Orleans. Self-identified conservatives were somewhat more likely to "remember" a fake event in which Barack Obama shook the hand of Mahmoud Ahmadinejad.

About half of the subjects who "remembered" fake events were unable to identify the fake event correctly when they were told that one of the events in the study was fake.

Politics Discussion Thread February 2013

1 OrphanWilde 06 February 2013 09:33PM


  1. Top-level comments should introduce arguments; responses should be responses to those arguments. 
  2. Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised.  This means if it's a good argument against the argument it is responding to, not whether or not there's a good/obvious counterargument to it; if you have a good counterargument, raise it.  If it's a convincing argument, and the counterargument is also convincing, upvote both.  If both arguments are unconvincing, downvote both. 
  3. A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate.
  4. In general try to avoid color politics; try to discuss political issues, rather than political parties, wherever possible.

As Multiheaded added, "Personal is Political" stuff like gender relations, etc also may belong here.


On private marriage contracts

8 [deleted] 12 January 2013 02:53PM
Warning: First Read Everything Here, only participate or read on if you are sure you understand the risks.

Based on the commentary and excerpt Federico made on studiolo, I've added the book Nudge: Improving Decisions about Health, Wealth, and Happiness by Thaler and Sunstein to my reading list. Since the book seems relevant enough to this site and has been mentioned before, I may eventually write a review. The post by Federico is made up mostly of excerpts from Chapter 15 of the book "Privatizing Marriage".

In addition to this I recommend reading the following excellent essays:

The reason the particular topic he talks about caught my interest is because the proposed solutions by Thaler and Sunstein seem somewhat similar to the one I argued for...

Marriage is a personal or religious arrangement; it is only the state's business insofar as it is also a legally enforceable contract. It is fundamentally unfair that people agree to a set of legal terms and cultural expectations that ideally are aimed to last a lifetime, yet the state messes with the contract beyond recognition in just a few decades without their consent.

Consider a couple marrying in the 1930s or 1940s who died or divorced in the 1980s. Did they even end their marriage in the same institution they started in? Consider how divorce laws and practice had changed. Ridiculous. People should have the right to sign an explicit, customisable contract governing their rights and duties, as well as the terms of dissolution. Beyond that the state should have no say; such contracts should also supersede any legislation the state has on child custody, though perhaps some limits on what exactly they can agree on would be in order.

Such a contract has no good reason to be limited to describing traditional marriage, or even to have that much to do with sex or raising children; it can and should be used to help people formalize platonic and non-sexual relationships as well. It should also be used for various kinds of non-traditional (for Western civ) marriage like polygamy or other kinds of polyamorous arrangements, and naturally for homosexual unions.

...and Vladimir_M argued convincingly against. Here...

However, are you sure that you understand just how radical the above statement is? The libertarian theory of contracts -- that you should have full freedom to enter any voluntary contract as far as your own property and rights are concerned -- sounds appealing in the abstract. (Robin Hanson would probably say "in far mode.") Yet on closer consideration, it implies all sorts of possible (and plausible) arrangements that would make most people scream with horror.

In any realistic human society, there are huge limitations on what sorts of contracts you are allowed to enter, much narrower than what any simple quasi-libertarian theory would imply. Except for a handful of real honest libertarians, who are inevitably marginal and without influence, whenever you see someone make a libertarian argument that some arrangement should be permitted, it is nearly always part of an underhanded rhetorical ploy in which the underlying libertarian principle is switched on and off depending on whether its application in some particular case produces a conclusion favorable to the speaker's ideology.

...and here.

I think this would be a genuine cause for concern, not because I don't think that people should be able to enter whatever relationships please them in principle, but because in practice I'm concerned about people being coerced into signing contracts harmful to themselves. Not sure where I'd draw the line exactly; this is probably a Hard Problem.
Well, there you go. Any restriction on freedom of contract can be rationalized as preventing something "harmful," one way or another.

And it's not a hard problem at all. It is in fact very simple: when people like something for ideological reasons, they will use the libertarian argument to support its legality, and when they dislike something ideologically, they will invent rationalizations for why the libertarian argument doesn't apply in this particular case. The only exceptions are actual libertarians, for whom the libertarian argument itself carries ideological weight, but they are an insignificant fringe minority. For everyone else, the libertarian argument is just a useful rhetorical tool to be employed and recognized only when it produces favorable conclusions.

In particular, when it comes to marriage, outside of the aforementioned libertarian fringe, there is a total and unanimous agreement that marriage is not a contract whose terms can be set freely, but rather an institution that is entered voluntarily, but whose terms are dictated (and can be changed at any subsequent time) by the state. (Even the prenuptial agreements allow only very limited and uncertain flexibility.) Therefore, when I hear a libertarian argument applied to marriage, I conclude that there are only two possibilities:

  1. The speaker is an honest libertarian. However, this means either that he doesn't realize how wildly radical the implications of the libertarian position are, or that he actually supports these wild radical implications. (Suppose for example that a couple voluntarily sign a marriage contract stipulating death penalty, or even just flogging, for adultery. How can one oppose the enforcement of this contract without renouncing the libertarian principle?)

  2. The speaker has an ideological vision of what the society should look like, and in particular, what the government-dictated universal terms of marriage should be (both with regards to the institution of marriage itself and its tremendous implications on all the other social institutions). He uses the libertarian argument because its implications happen to coincide with his ideological position in this particular situation, but he would never accept a libertarian argument in any other situation in which it would imply something disfavored by his ideology.

[Link] St. Paul: memetic engineer

2 [deleted] 12 January 2013 01:21PM
New post by Federico on his new blog that I mentioned earlier. Worth reading for those interested in the memetics of religion, politics, Christianity or Islam. The cited material also led my mind to some related questions.

AnnoDomini suggests I write about “St. Paul the social engineer!”

“Social engineering” is coercive. Saint Paul was a missionary, not a law-maker; I would call him a memetic engineer.

Like any ingeniarius, a memetic engineer takes elements at his disposal, makes one or two small changes, synthesises, and sells his product. The product is designed to fulfil a personal end; if it endures, this is most likely incidental. Few engineers care if their creation outlasts them.

The elements at this memetic engineer’s disposal are an ethnic-supremacist religion, a popular dead Messiah, and a cunning intellect.

Saul of Tarsus spends his twenties persecuting Christians. In his own words:

13 For ye have heard of my conversation in time past in the Jews’ religion, how that beyond measure I persecuted the church of God, and wasted it: 14 And profited in the Jews’ religion above many my equals in mine own nation, being more exceedingly zealous of the traditions of my fathers.

Galatians 1:13–14

He even participates in the racist murder of a naive idealist called Stephen, in a scene echoed many centuries later by Sacha Baron Cohen.

55 But he, being full of the Holy Ghost, looked up stedfastly into heaven, and saw the glory of God, and Jesus standing on the right hand of God, 56 And said, Behold, I see the heavens opened, and the Son of man standing on the right hand of God. 57 Then they cried out with a loud voice, and stopped their ears, and ran upon him with one accord, 58 And cast him out of the city, and stoned him: and the witnesses laid down their clothes at a young man’s feet, whose name was Saul. 59 And they stoned Stephen, calling upon God, and saying, Lord Jesus, receive my spirit. 60 And he kneeled down, and cried with a loud voice, Lord, lay not this sin to their charge. And when he had said this, he fell asleep.

Acts 7:55–60

But what he sees afterwards gives him pause.

1 And Saul was consenting unto his death. And at that time there was a great persecution against the church which was at Jerusalem; and they were all scattered abroad throughout the regions of Judaea and Samaria, except the apostles. 2 And devout men carried Stephen to his burial, and made great lamentation over him. 3 As for Saul, he made havock of the church, entering into every house, and haling men and women committed them to prison. 4 Therefore they that were scattered abroad went every where preaching the word. 5 Then Philip went down to the city of Samaria, and preached Christ unto them. 6 And the people with one accord gave heed unto those things which Philip spake, hearing and seeing the miracles which he did. 7 For unclean spirits, crying with loud voice, came out of many that were possessed with them: and many taken with palsies, and that were lame, were healed. 8 And there was great joy in that city.

Acts 8:1–8

Saul realises that he can do better as a Christian. All that joy to be had in all those cities. The problem is, he never met Jesus. So he spins an absurd yarn about Jesus’s ghost.

13 At midday, O king, I saw in the way a light from heaven, above the brightness of the sun, shining round about me and them which journeyed with me. 14 And when we were all fallen to the earth, I heard a voice speaking unto me, and saying in the Hebrew tongue, Saul, Saul, why persecutest thou me? it is hard for thee to kick against the pricks. 15 And I said, Who art thou, Lord? And he said, I am Jesus whom thou persecutest. 16 But rise, and stand upon thy feet: for I have appeared unto thee for this purpose, to make thee a minister and a witness both of these things which thou hast seen, and of those things in the which I will appear unto thee; 17 Delivering thee from the people, and from the Gentiles, unto whom now I send thee, 18 To open their eyes, and to turn them from darkness to light, and from the power of Satan unto God, that they may receive forgiveness of sins, and inheritance among them which are sanctified by faith that is in me.

Acts 26:13–18

Christians are not popular with the Jews. Therefore, Saul, now Paul, won't risk preaching to them. Here is his first innovation:

1 I say the truth in Christ, I lie not, my conscience also bearing me witness in the Holy Ghost, 2 That I have great heaviness and continual sorrow in my heart. 3 For I could wish that myself were accursed from Christ for my brethren, my kinsmen according to the flesh: 4 Who are Israelites; to whom pertaineth the adoption, and the glory, and the covenants, and the giving of the law, and the service of God, and the promises; 5 Whose are the fathers, and of whom as concerning the flesh Christ came, who is over all, God blessed for ever. Amen. 6 Not as though the word of God hath taken none effect. For they are not all Israel, which are of Israel: 7 Neither, because they are the seed of Abraham, are they all children: but, In Isaac shall thy seed be called. 8 That is, They which are the children of the flesh, these are not the children of God: but the children of the promise are counted for the seed.

Romans 9:1–8

In other words, Yahweh, God of the Israelites, who was complicit in the genocide of Amalekites, Canaanites, Midianites, Gibeonites, Libnahites, Eglonites, Debirites, Moabites, Benjamites, Ammonites, Edomites, Egyptians, Syrians, Philistines and anyone else who got in the way of his favourite ethnic group…is now God of Everyone. “Israel” is just a metaphor, decides Paul.

Paul now has license to go on a world tour; but he mustn’t upset the local rulers. The Romans are touchy about rabble-rousers. Paul has heard of Christ’s cryptic comment:

15 Then went the Pharisees, and took counsel how they might entangle him in his talk. 16 And they sent out unto him their disciples with the Herodians, saying, Master, we know that thou art true, and teachest the way of God in truth, neither carest thou for any man: for thou regardest not the person of men. 17 Tell us therefore, What thinkest thou? Is it lawful to give tribute unto Caesar, or not? 18 But Jesus perceived their wickedness, and said, Why tempt ye me, ye hypocrites? 19 Shew me the tribute money. And they brought unto him a penny. 20 And he saith unto them, Whose is this image and superscription? 21 They say unto him, Caesar’s. Then saith he unto them, Render therefore unto Caesar the things which are Caesar’s; and unto God the things that are God’s. 22 When they had heard these words, they marvelled, and left him, and went their way.

Matthew 22:15–22

So Paul invents “separation of Church and State”. This makes his exotic new religion seem inoffensive, although the Romans end up killing him anyway.

1 Let every soul be subject unto the higher powers. For there is no power but of God: the powers that be are ordained of God. 2 Whosoever therefore resisteth the power, resisteth the ordinance of God: and they that resist shall receive to themselves damnation. 3 For rulers are not a terror to good works, but to the evil. Wilt thou then not be afraid of the power? do that which is good, and thou shalt have praise of the same: 4 For he is the minister of God to thee for good. But if thou do that which is evil, be afraid; for he beareth not the sword in vain: for he is the minister of God, a revenger to execute wrath upon him that doeth evil. 5 Wherefore ye must needs be subject, not only for wrath, but also for conscience sake. 6 For for this cause pay ye tribute also: for they are God’s ministers, attending continually upon this very thing. 7 Render therefore to all their dues: tribute to whom tribute is due; custom to whom custom; fear to whom fear; honour to whom honour.

Romans 13:1–7

Leo Tolstoy points out:

Not only the complete misunderstanding of Christ’s teaching, but also a complete unwillingness to understand it could have admitted that striking misinterpretation, according to which the words, “To Caesar the things which are Caesar’s,” signify the necessity of obeying Caesar. In the first place, there is no mention there of obedience; in the second place, if Christ recognized the obligatoriness of paying tribute, and so of obedience, He would have said directly, “Yes, it should be paid;” but He says, “Give to Caesar what is his, that is, the money, and give your life to God,” and with these latter words He not only does not encourage any obedience to power, but, on the contrary, points out that in everything which belongs to God it is not right to obey Caesar.

But the deed was done.

Paul is set to have fun in his middle age. He isn’t married, and all his expenses are paid.

So, too, in his last speech to the Ephesian elders he lays great stress on the fact that he had not made money by his preaching, but had supported himself by the labour of his hands. ‘I coveted no man’s gold or apparel. Ye yourselves know that these hands ministered unto my necessities.’

Yet St. Paul did receive gifts from his converts. He speaks of the Philippians as having sent once and again unto his necessity, and he tells the Corinthians that he ‘robbed other churches, taking wages of them, that he might minister to them’. He does not seem to have felt any unwillingness to receive help; he rather welcomed it. He was not an ascetic. He saw no particular virtue in suffering privations. The account of his journeys always gives us the impression that he was poor, never that he was poverty-stricken. He said indeed that he knew how ‘to be in want’, ‘to be filled, and to be hungry’. But this does not imply more than that he was in occasional need. Later, he certainly must have had considerable resources, for he was able to maintain a long and expensive judicial process, to travel with ministers, to gain a respectful hearing from provincial governors, and to excite their cupidity. We have no means of knowing whence he obtained such large supplies; but if he received them from his converts there would be nothing here contrary to his earlier practice. He received money; but not from those to whom he was preaching. He refused to do anything from which it might appear that he came to receive, that his object was to make money.

Paul’s epistle to the Romans holds a clue to the source of his mysterious wealth.

19 Through mighty signs and wonders, by the power of the Spirit of God; so that from Jerusalem, and round about unto Illyricum, I have fully preached the gospel of Christ. 20 Yea, so have I strived to preach the gospel, not where Christ was named, lest I should build upon another man’s foundation: 21 But as it is written, To whom he was not spoken of, they shall see: and they that have not heard shall understand. 22 For which cause also I have been much hindered from coming to you. 23 But now having no more place in these parts, and having a great desire these many years to come unto you; 24 Whensoever I take my journey into Spain, I will come to you: for I trust to see you in my journey, and to be brought on my way thitherward by you, if first I be somewhat filled with your company. 25 But now I go unto Jerusalem to minister unto the saints. 26 For it hath pleased them of Macedonia and Achaia to make a certain contribution for the poor saints which are at Jerusalem. 27 It hath pleased them verily; and their debtors they are. For if the Gentiles have been made partakers of their spiritual things, their duty is also to minister unto them in carnal things. 28 When therefore I have performed this, and have sealed to them this fruit, I will come by you into Spain. 29 And I am sure that, when I come unto you, I shall come in the fulness of the blessing of the gospel of Christ.

Romans 15:19–29

Scholars are puzzled by this excerpt.

He is a person who is somehow a city person, and he sees that the cities are the key to the rapid spread of this new message. . . . At one point he can write to the Roman Christians, I have filled up the gospel in the East, I have no more room to work here. What could he possibly mean? There are only a handful of Christians in each of several major cities in the Eastern Empire. What does he mean, that he has filled up all of the Eastern Empire with the gospel?

He had merely filled up his coffers. Those burgeoning trade centres, bustling with merchants and artisans…

Paul’s final stroke of genius is to dumb down the gospel.

8 Owe no man any thing, but to love one another: for he that loveth another hath fulfilled the law. 9 For this, Thou shalt not commit adultery, Thou shalt not kill, Thou shalt not steal, Thou shalt not bear false witness, Thou shalt not covet; and if there be any other commandment, it is briefly comprehended in this saying, namely, Thou shalt love thy neighbour as thyself. 10 Love worketh no ill to his neighbour: therefore love is the fulfilling of the law.

Romans 13:8–10

“The law” means the Decalogue, or the parts of it Paul can remember. This is another gross misinterpretation of Jesus and his disciples’ teaching. Yahweh says in Leviticus:

18 Thou shalt not avenge, nor bear any grudge against the children of thy people, but thou shalt love thy neighbour as thyself: I am the LORD.

Leviticus 19:18

Jesus, like any hipster, uses this obscure reference to put a Pharisee in his place:

34 But when the Pharisees had heard that he had put the Sadducees to silence, they were gathered together. 35 Then one of them, which was a lawyer, asked him a question, tempting him, and saying, 36 Master, which is the great commandment in the law? 37 Jesus said unto him, Thou shalt love the Lord thy God with all thy heart, and with all thy soul, and with all thy mind. 38 This is the first and great commandment. 39 And the second is like unto it, Thou shalt love thy neighbour as thyself. 40 On these two commandments hang all the law and the prophets.

Matthew 22:34–40

This doesn’t mean that Christians can dispense with the law! James the Just concurs:

8 If ye fulfil the royal law according to the scripture, Thou shalt love thy neighbour as thyself, ye do well: 9 But if ye have respect to persons, ye commit sin, and are convinced of the law as transgressors. 10 For whosoever shall keep the whole law, and yet offend in one point, he is guilty of all. 11 For he that said, Do not commit adultery, said also, Do not kill. Now if thou commit no adultery, yet if thou kill, thou art become a transgressor of the law.

James 2:8–11

Paul not only tells his converts that God’s single law is “be nice”, but he abolishes all of the fiddly rules.

Now the situation seems to be that initially when people were attracted to the Jesus movement, they first became Jews and they had to go through all the rituals and rites of conversion to Judaism. But apparently it’s among Paul and some of his close supporters that they began to think that it was okay to become a member of the Christian movement without having to go through all of those rites of conversion to Judaism [...]

Now the other things that one must do in order to convert to Judaism, in addition to circumcision if a male, would be to observe the Torah. That is, the Jewish law and the dietary and other kinds of purity regulations that would have come from the Torah. [...]

Paul’s notion that it was possible for gentiles to enter the congregation of God without some of the rules of Judaism interestingly enough seems to be a conviction on his part that comes from his own interpretation of the Jewish scriptures.

A very convenient interpretation, for someone who is on a whistle-stop tour of Europe’s richest and most cosmopolitan cities. Does a televangelist ask his marks to study ancient Greek, or make a pilgrimage to Jerusalem?

If human nature has changed little in 2000 years, Saint Paul was a con artist. He turned Yahweh into a universalist, Jesus into a lackey, and Christianity into Barney, all because he wanted to live the good life. He also misled the world in general about the plausibility of “Damascene conversion”.

Yet, Christianity prospered. Kenneth Clark thought it essential to Western civilisation. Why is that? One must contrast it with Islam. Roger Scruton explains:

The student of Muslim thought will be struck by how narrowly the classical thinkers pondered the problems of political order, and how sparse and theological are their theories of institutions. Apart from the caliphate—the office of “successor to” or “substitute for” the Prophet—no human institution occupies such thinkers as Al-Mawardi, Al-Ghazali, Ibn Taymiya, or Saif Ibn ‘Umar al-Asadi for long, and discussions of sovereignty—sultan, mulk—tend to be exhortatory, instructions for the ruler that will help him to guide his people in the ways of the faith. [...]

Law is fundamental to Islam, since the religion grew from Muhammad’s attempt to give an abiding code of conduct to his followers. Hence arose the four surviving schools (known as madhahib, or sects) of jurisprudence, with their subtle devices (hila) for discovering creative solutions within the letter (though not always the spirit) of the law. These four schools (Hanafi, Hanbali, Shafi and Maliki, named for their founders) are accepted by each other as legitimate, but may produce conflicting judgements in any particular case. As a result the body of Islamic jurisprudence (the fiqh) is now enormous. Such legal knowledge notwithstanding, discussions of the nature of the law, the grounds of its legitimacy, and the distinguishing marks of legal, as opposed to coercive, social structures are minimalist. Classical Islamic jurisprudence, like classical Islamic philosophy, assumes that law originates in divine command, as revealed through the Koran and the Sunna, and as deduced by analogy (qiyas) or consensus (ijma’). Apart from the four sources (usul) of law, no other source is recognised. Law, in other words, is the will of God, and sovereignty is legitimate only in so far as it upholds God’s will and is authorized through it.

There is nevertheless one great classical thinker who addressed the realities of social order, and the nature of the power exerted through it, in secular rather than theological terms: Ibn Khaldun, the fourteenth-century Tunisian polymath whose Muqaddimah is a kind of prolegomenon to the study of history and offers a general perspective on the rise and decline of human societies. Ibn Khaldun’s primary subject of study had been the Bedouin societies of North Africa; but he generalized also from his knowledge of Muslim history. Societies, he argued, are held together by a cohesive force, which he called ‘asabiya (‘asaba, “to bind,” ‘asab, a “nerve,” “ligament,” or “sinew”—cf. Latin religio). In tribal communities, ‘asabiya is strong, and creates resistance to outside control, to taxation, and to government. In cities, the seat of government, ‘asabiya is weak or non-existent, and society is held together by force exerted by the ruling dynasty. But dynasties too need ‘asabiya if they are to maintain their power. Hence they inevitably decline, softened by the luxury of city life, and within four generations will be conquered by outsiders who enjoy the dynamic cohesion of the tribe.

I'm calling attention to this passage in case you aren't familiar with Ibn Khaldun's theory, to emphasise how important it is. I would argue that it is basically correct.

That part of Ibn Khaldun’s theory is still influential: Malise Ruthven, for example, believes that it casts light on the contemporary Muslim world, in which ‘asabiya rather than institutions remains the principal cohesive force. But Ibn Khaldun’s secular theory of society dwells on pre-political unity rather than political order. His actual political theory is far more Islamic in tone. Ibn Khaldun introduces a distinction between two kinds of government—that founded on religion (siyasa diniya) and that founded on reason (siyasa ‘aqliya), echoing the thoughts of the Mu’tazili theologians. The second form of government is more political and less theocratic, since its laws do not rest on divine authority but on rational principles that can be understood and accepted without the benefit of faith. But Ibn Khaldun finds himself unable to approve of this form of politics. Secular law, he argues, leads to a decline of ‘asabiya, such as occurred when the Islamic umma passed from Arab to Persian rule. Moreover the impediment (wazi’) that constrains us to abide by the law is, in the rational state, merely external. In the state founded on the shari’a this impediment is internal, operating directly on the will of the subject. In short, the emergence of secular politics from the prophetic community is a sign not of civilized progress but of moral decline. [...]

At this point I ask my fellow rationalists to consider: if this is correct, what would a decline of 'asabiya look like in modern secular societies, were it happening?

For all his subtlety, therefore, Ibn Khaldun ends by endorsing the traditional, static idea of government according to the shari’a. To put in a nutshell what is distinctive about this traditional idea of government: the Muslim conception of law as holy law, pointing the unique way to salvation, and applying to every area of human life, involves a confiscation of the political. Those matters which, in Western societies, are resolved by negotiation, compromise, and the laborious work of offices and committees are the object of immovable and eternal decrees, either laid down explicitly in the holy book, or discerned there by some religious figurehead—whose authority, however, can always be questioned by some rival imam or jurist, since the shari’a recognizes no office or institution as endowed with any independent lawmaking power.

Three features of the original message embodied in the Koran have proved decisive in this respect. First, the Messenger of God was presented with the problem of organizing and leading an autonomous community of followers. Unlike Jesus, he was not a religious visionary operating under an all-embracing imperial law, but a political leader, inspired by a revelation of God’s purpose and determined to assert that purpose against the surrounding world of tribal government and pagan superstition.

Second, the suras of the Koran make no distinction between the public and private spheres: what is commanded to the believers is commanded in response to the many problems, great and small, that emerged during the course of Muhammad’s political mission. But each command issues from the same divine authority. Laws governing marriage, property, usury and commerce occur side-by-side with rules of domestic ritual, good manners, and personal hygiene. The conduct of war and the treatment of criminals are dealt with in the same tone of voice as diet and defecation. The whole life of the community is set out in a disordered, but ultimately consistent, set of absolutes, and it is impossible to judge from the text itself whether any of these laws is more important, more threatening, or more dear to God’s heart than the others. The opportunity never arises, for the student of the Koran, to distinguish those matters which are open to political negotiation from those which are absolute duties to God. In effect, everything is owed to God, with the consequence that nothing is owed to Caesar.

Third, the social vision of the Koran is shaped through and through by the tribal order and commercial dealings of Muhammad’s Arabia. It is a vision of people bound to each other by family ties and tribal loyalties, but answerable for their actions to God alone. No mention is made of institutions, corporations, societies, or procedures with any independent authority. Life, as portrayed in the Koran, is a stark, unmediated confrontation between the individual and his God, in which the threat of punishment and the hope of reward are never far from the thoughts of either party.

Therefore, although the Koran is the record of a political project, it lays no foundations for an impersonal political order, but vests all power and authority in the Messenger of God. [...]

Islamic revivals almost always begin from a sense of the corruption and godlessness of the ruling power, and a desire to rediscover the holy leader who will restore the pure way of life that had been laid down by the Prophet.

If only the people commenting on upheavals in the Middle East actually knew something about the region, they might make usable predictions. Not that punditry is about predictions anyway.

There seems to be no room in Islamic thinking for the idea—vital to the history of Western constitutional government—of an office that works for the benefit of the community, regardless of the virtues and vices of the one who fills it. Spinoza put the point explicitly by arguing that what makes for excellence in the state is not that it should be governed by good men, but that it should be so constituted that it does not matter whether it be governed by good men or bad. This idea goes back to Aristotle, and is the root of political order in the Western tradition—the government of laws, not of men, even though it is men who make the laws. There seems to be no similar idea in Islamic political thinking, since institutions, offices, and collective entities play no part in securing political legitimacy, and all authority stems from God, via the words, deeds, and example of his Messenger.

Islam and Christianity both flourished, once the latter had endured its dormant period on the Celtic fringe. Yet Christendom’s civic evolution, courtesy of “separation of Church and State”, eventually left its rival in the dust.

We mustn’t give Saint Paul too much credit. Jethro Tull surely wasn’t the only person capable of inventing the seed drill. The triumphant religion in Europe could easily have been someone else’s mutated Judaism or Christianity, or another Messiah cult altogether.

Facile, universalist religions spread easily within a multi-ethnic empire. Kings and emperors see the benefit to themselves in “Whosoever therefore resisteth the power, resisteth the ordinance of God”. And who would miss circumcision or dietary regulations? Adaptive traits coincide in a product that happened to be useful to the antique version of GodTV.

God-memes like Yahweh (v.1) prosper in more refractory circumstances. A draconian, legislative God supplements the tribal leader’s tenuous monopoly on violence, allowing regimented Israelites to conquer the libertines of Sodom and Gomorrah.

The tragedy of Islam is that it falls between two stools. It is legislative enough to help its adherents conquer other unruly Arab tribes, universalist enough to spread worldwide, and simple enough to go viral: There is no god but God, Muhammad is the messenger of God. But it wasn’t born within an empire, so it lacks “separation of Church and State”. The memeplex persists, but doesn’t avail its bearers.
