This post is now crossposted to the EA forum.
80,000 hours is a well-known Effective Altruism organisation which does "in-depth research alongside academics at Oxford into how graduates can make the biggest difference possible with their careers".
They recently posted a guide to donating which aims, in their words, to (my emphasis)
use evidence and careful reasoning to work out how to best promote the wellbeing of all. To find the highest-impact charities this giving season ... We ... summed up the main recommendations by area below
Looking below, we find a section on the problem area of criminal justice (US-focused), an area whose aim is outlined as follows (quoting from the Open Philanthropy "problem area" page):
investing in criminal justice policy and practice reforms to substantially reduce incarceration while maintaining public safety.
Reducing incarceration whilst maintaining public safety seems like a reasonable EA cause, if we interpret "public safety" in a broad sense - that is, keep fewer people in prison whilst still getting almost all of the benefits of incarceration such as deterrent effects, prevention of crime, etc.
So what are the recommended charities? (my emphasis below)
"The Alliance for Safety and Justice is a US organization that aims to reduce incarceration and racial disparities in incarceration in states across the country, and replace mass incarceration with new safety priorities that prioritize prevention and protect low-income communities of color."
They promote an article on their site called "black wounds matter", as well as a guide on how you can "Apply for VOCA Funding: A Toolkit for Organizations Working With Crime Survivors in Communities of Color and Other Underserved Communities".
2. Cosecha - (note that their url is www.lahuelga.com, which means "the strike" in Spanish) (my emphasis below)
"Cosecha is a group organizing undocumented immigrants in 50-60 cities around the country. Its goal is to build mass popular support for undocumented immigrants, in resistance to incarceration/detention, deportation, denigration of rights, and discrimination. The group has become especially active since the Presidential election, given the immediate threat of mass incarceration and deportation of millions of people."
They have the ultimate goal of launching massive civil resistance and non-cooperation to show this country it depends on us ... if they wage a general strike of five to eight million workers for seven days, we think the economy of this country would not be able to sustain itself
The article quotes Carlos Saavedra, who is directly mentioned by Open Philanthropy's Chloe Cockburn:
"Carlos Saavedra, who leads Cosecha, stands out as an organizer who is devoted to testing and improving his methods, ... Cosecha can do a lot of good to prevent mass deportations and incarceration, I think his work is a good fit for likely readers of this post."
They mention other charities elsewhere on their site and in their writeup on the subject, such as the conservative Center for Criminal Justice Reform, but Cosecha and the Alliance for Safety and Justice are the ones that were chosen as "highest impact" and featured in the guide to donating.
Sometimes one has to be blunt: 80,000 hours is promoting the financial support of some extremely hot-button political causes, which may not be a good idea. Traditionalists/conservatives and those who are uninitiated to Social Justice ideology might look at The Alliance for Safety and Justice and Cosecha and label them as racists and criminals, and thereby be turned off by Effective Altruism, or even by the rationality movement as a whole.
There are standard arguments, for example this one by Robin Hanson from 10 years ago, about why it is not smart or "effective" to get into these political tugs-of-war if one wants to make a genuine difference in the world.
One could also argue that 80,000 hours' chosen charities go beyond the usual folly of political tugs-of-war. In addition to supporting extremely political causes, 80,000 hours could be accused of being somewhat intellectually dishonest about what goal they are actually trying to further.
Consider The Alliance for Safety and Justice. 80,000 Hours state that the goal of their work in the criminal justice problem area is to "substantially reduce incarceration while maintaining public safety". This is an abstract goal that has very broad appeal and one that I am sure almost everyone agrees with. But then their more concrete policy in this area is to fund a charity that wants to "reduce racial disparities in incarceration" and "protect low-income communities of color". The latter is significantly different to the former - it isn't even close to being the same thing - and the difference is highly political. One could object that reducing racial disparities in incarceration is merely a means to the end of substantially reducing incarceration while maintaining public safety, since many people in prison in the US are "of color". However this line of argument is a very politicized one and it might be wrong, or at least I don't see strong support for it. "Selectively release people of color and make society safer - endorsed by effective altruists!" struggles against known facts about recidivism rates across races, as well as an objection about the implicit conflation of equality of outcome and equality of opportunity. (And I do not want this to be interpreted as a claim of moral superiority of one race over others - merely a necessary exercise in coming to terms with facts and debunking implicit assumptions.) Males are incarcerated much more than women, so what about reducing gender disparities in incarceration, whilst also maintaining public safety? Again, this is all highly political, laden with politicized implicit assumptions and language.
Cosecha is worse! They are actively planning potentially illegal activities like helping illegal immigrants evade the law (though IANAL), as well as activities which potentially harm the majority of US citizens such as a seven day nationwide strike whose intent is to damage the economy. Their URL is "The Strike" in Spanish.
Again, the abstract goal is extremely attractive to almost anyone, but the concrete implementation is highly divisive. If some conservative altruist signed up to financially or morally support the abstract goal of "substantially reducing incarceration while maintaining public safety" and EA organisations that are pursuing that goal without reading the details, and then at a later point they saw the details of Cosecha and The Alliance for Safety and Justice, they would rightly feel cheated. And to the objection that conservative altruists should read the description rather than just the heading - what are we doing writing headings so misleading that you'd feel cheated if you relied on them as summaries of the activity they are meant to summarize?
One possibility would be for 80,000 hours to be much more upfront about what they are trying to achieve here - maybe they like left-wing social justice causes, and want to help like-minded people donate money to such causes and help the particular groups who are favored in those circles. There's almost a nod and a wink to this when Chloe Cockburn says (my paraphrase of Saavedra, and emphasis, below)
I think his [A man who wants to lead a general strike of five to eight million workers for seven days so that the economy of the USA would not be able to sustain itself, in order to help illegal immigrants] work is a good fit for likely readers of this post.
Alternatively, they could try to reinvigorate the idea that their "criminal justice" problem area is politically neutral and beneficial to everyone; the Open Philanthropy issue writeup talks about "conservative interest in what has traditionally been a solely liberal cause" after all. I would advise considering dropping The Alliance for Safety and Justice and Cosecha if they intend to do this. There may not be politically neutral charities in this area, or there may not be enough high quality conservative charities to present a politically balanced set of recommendations. Setting up a growing donor advised fund or a prize for nonpartisan progress that genuinely intends to benefit everyone including conservatives, people opposed to illegal immigration and people who are not "of color" might be an option to consider.
We could examine 80,000 hours' choice to back these organisations from a more overall-utilitarian/overall-effectiveness point of view, rather than limiting the analysis to the specific problem area. These two charities don't pass the smell test for altruistic consequentialism, pulling sideways on ropes, finding hidden levers that others are ignoring, etc. Is the best thing you can do with your smart EA money helping a charity that wants to get stuck into the culture war about which skin color is most over-represented in prisons? What about a second charity that wants to help people illegally immigrate at a time when immigration is the most divisive political topic in the western world?
Furthermore, Cosecha's plans for a nationwide strike and potential civil disobedience/showdown with Trump & co could push an already volatile situation in the US into something extremely ugly. The vast majority of people in the world (present and future) are not the specific group that Cosecha aims to help, but the set of people who could be harmed by the uglier versions of a violent and calamitous showdown in the US is basically the whole world. That means that even if P(Cosecha persuades Trump to do a U-turn on illegals) is 10 or 100 times greater than P(Cosecha precipitates a violent crisis in the USA), they may still be net-negative from an expected utility point of view. EA doesn't usually fund causes whose outcome distribution is heavily left-skewed so this argument is a bit unusual to have to make, but there it is.
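To make the shape of that argument concrete, here is a toy expected-utility calculation in Python (every number is hypothetical, chosen only for illustration, not an estimate of the real probabilities):

p_bad = 0.001                # hypothetical P(Cosecha precipitates a violent crisis)
p_good = 100 * p_bad         # hypothetical P(Cosecha prevents mass deportations), 100x larger
u_good = 10**6               # utility of the upside, which helps one group in one country
u_bad = -(10**9)             # utility of the downside, which harms basically the whole world

expected_utility = p_good * u_good + p_bad * u_bad
print(expected_utility)      # -900000.0: net-negative despite the 100x likelier upside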
Not only is Cosecha a cause that is (a) mind-killing and culture war-ish, (b) very tangentially related to the actual problem area it is advertised under by 80,000 hours, but it might also (c) be an anti-charity that produces net disutility (in expectation) in the form of a higher probability of a US civil war, funded with money that you donate to it.
Back on the topic of criminal justice and incarceration: opposition to reform often comes from conservative voters and politicians, so it might seem unlikely to a careful thinker that extra money on the left-wing side is going to be highly effective. Some intellectual judo is required; make conservatives think that it was their idea all along. So promoting the Center for Criminal Justice Reform sounds like the kind of smart, against-the-grain idea that might be highly effective! Well done, Open Philanthropy! Also in favor of this org: they don't copiously mention which races or person-categories they think are most important in their articles about criminal justice reform, the only culture war item I could find on them is the word "conservative" (and given the intellectual judo argument above, this counts as a plus), and they're not planning a national strike or other action with a heavy tail risk. But that's the one that didn't make the cut for the 80,000 hours guide to donating!
The fact that they let Cosecha (and to a lesser extent The Alliance for Safety and Justice) through reduces my confidence in 80,000 hours and the EA movement as a whole. Who thought it would be a good idea to get EA into the culture war with these causes, and also thought that they were plausibly among the most effective things you can do with money? Are they taking effectiveness seriously? What does the political diversity of meetings at 80,000 hours look like? Were there no conservative altruists present in discussions surrounding The Alliance for Safety and Justice and Cosecha, and the promotion of them as "beneficial for everyone" and "effective"?
Before we finish, I want to emphasize that this post is not intended to start an object-level discussion about which race, gender, political movement or sexual orientation is cooler, and I would encourage moderators to temp-ban people who try to have that kind of argument in the comments of this post.
I also want to emphasize that criticism of professional altruists is a necessary evil; in an ideal world the only thing I would ever want to say to people who dedicate their lives to helping others (Chloe Cockburn in particular, since I mentioned her name above) is "thank you, you're amazing". Other than that, comments and criticism are welcome, especially anything pointing out any inaccuracies or misunderstandings in this post. Comments from anyone involved in 80,000 hours or Open Philanthropy are welcome.
I got lots of helpful comments in my first post, so I'll try a second: I want to develop a list of criteria by which to evaluate a presidency. Coming up with criteria and metrics on the economy is pretty easy, but I'd like to ask for suggestions on proxies for evaluating:
- Racial relations;
- Gender equality;
- Impact on free trade / protectionism;
- Any other significant factor that would determine whether a president is successful.
Why you should be very careful about trying to openly seek truth in any political discussion
1. Rationality considered harmful for Scott Aaronson in the great gender debate
In 2015, complexity theorist and rationalist Scott Aaronson was foolhardy enough to step into the Gender Politics war on his blog, with a comment stating that the extreme feminism he had bought into made him hate himself and seek ways to chemically castrate himself. The feminist blogosphere got hold of this and crucified him for it, and he has written a few followup blog posts about it. Recently I saw this comment by him on his blog:
As the comment 171 affair blew up last year, one of my female colleagues in quantum computing remarked to me that the real issue had nothing to do with gender politics; it was really just about the commitment to truth regardless of the social costs—a quality that many of the people attacking me (who were overwhelmingly from outside the hard sciences) had perhaps never encountered before in their lives. That remark cheered me more than anything else at the time.
2. Rationality considered harmful for Sam Harris in the islamophobia war
I recently heard a very angry, exasperated 2 hour podcast by the new atheist and political commentator Sam Harris about how badly he has been straw-manned, misrepresented and trash talked by his intellectual rivals (who he collectively refers to as the "regressive left"). Sam Harris likes to tackle hard questions such as when torture is justified, which religions are more or less harmful than others, defence of freedom of speech, etc. Several times, Harris goes to the meta-level and sees clearly what is happening:
Rather than a searching and beautiful exercise in human reason to have conversations on these topics [ethics of torture, military intervention, Islam, etc], people are making it just politically so toxic, reputationally so toxic to even raise these issues that smart people, smarter than me, are smart enough not to go near these topics
Everyone on the left at the moment seems to be a mind reader... no matter how much you try to take their foot out of your mouth, the mere effort itself is going to be counted against you - you're someone who's in denial, or you don't even understand how racist you are, etc.
3. Rationality considered harmful when talking to your left-wing friends about genetic modification
I posted in the SlateStarCodex comments complaining that many left-wing people were responding very personally (and negatively) to my political views.
One long-term friend responded to a rational argument about why some modifications of the human germ-line (for example, via genetic engineering to permanently cure a genetic disease) may in fact be a good thing by openly and pointedly saying that "(s)he was beginning to wonder whether we should still be friends".
A large comment thread ensued, but the best comment I got was this one:
One of the useful things I have found when confused by something my brain does is to ask what it is *for*. For example: I get angry, the anger is counterproductive, but recognizing that doesn’t make it go away. What is anger *for*? Maybe it is to cause me to plausibly signal violence by making my body ready for violence or some such.
Similarly, when I ask myself what moral/political discourse among friends is *for* I get back something like “signal what sort of ally you would be/broadcast what sort of people you want to ally with.” This makes disagreements more sensible. They are trying to signal things about distribution of resources, I am trying to signal things about truth value, others are trying to signal things about what the tribe should hold sacred etc. Feeling strong emotions is just a way of signaling strong precommitments to these positions (i.e. I will follow the morality I am signaling now because I will be wracked by guilt if I do not. I am a reliable/predictable ally.) They aren’t mad at your positions. They are mad that you are signaling that you would defect when push came to shove about things they think are important.
Let me repeat that last one: moral/political discourse among friends is for “signalling what sort of ally you would be/broadcast what sort of people you want to ally with”. Moral/political discourse probably activates specially evolved brainware in human beings; that brainware has a purpose and it isn't truthseeking. Politics is not about policy!
This post is already getting too long so I deleted the section on lessons to be learned, but if there is interest I'll do a followup. Let me know what you think in the comments!
Nassim Taleb recently posted this mathematical draft of election forecasting refinement to his Twitter.
(At the more local level this isn't always true, due to incumbent advantage, local party domination, strategic funding choices, and various other frictions. The point though is that when those frictions are ameliorated due to the importance of the presidency, we find ourselves in a scenario where the equilibrium tends to be elections very close to 50-50.)
So back to the mechanism of the model: Taleb applies a no-arbitrage condition (borrowed from options pricing) to impose time-varying consistency on the Brier score. This is a similar concept to financial options, where you can go bankrupt or make money even before the final event. In Taleb's world, if a guy like Nate Silver is creating forecasts that vary wildly over time prior to the election, this suggests he hasn't put any time dynamic constraints on his model.
The math rests on the assumption that, with high uncertainty far out from the election, the best forecast is 50-50. This set of assumptions would have to be empirically tested. Still, stepping aside from the math, it does feel intuitive that an election forecast with high variation a year away from the event is not worth relying on, and that sticking closer to 50-50 would offer a better full-sample Brier score.
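As a minimal sketch of that intuition (this is the textbook Gaussian toy model, not Taleb's actual no-arbitrage machinery): if the final vote share is normally distributed around today's estimate, the probability of crossing the 50% threshold gets pulled toward 0.5 as the uncertainty grows.

from statistics import NormalDist

def win_probability(expected_vote_share, uncertainty):
    # P(final vote share > 0.5) when the final share is Normal(mean, sd).
    return 1 - NormalDist(expected_vote_share, uncertainty).cdf(0.5)

# A 52% expected vote share is near-decisive close to the election (low
# uncertainty) but barely informative a year out (high uncertainty):
print(win_probability(0.52, 0.01))  # ~0.977
print(win_probability(0.52, 0.10))  # ~0.579, pulled back toward 50-50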
I'm not familiar enough with the practical modelling to say whether this is feasible. Sometimes the ideal models are too hard to estimate.
I'm interested in hearing any thoughts on this from people who are familiar with forecasting or have an interest in the modelling behind it.
I also have a specific question to tie this back to a rationality based framework: When you read Silver (or your preferred reputable election forecaster; I like Andrew Gelman) post their forecasts prior to the election, do you accept them as equal or better than any estimate you could come up with? Or do you do a mental adjustment or discounting based on some factor you think they've left out - whether it's prediction market variations, or adjustments based on perceiving changes in nationalism or politician-specific skills? (E.g. Scott Adams claimed to be able to predict that Trump would persuade everyone to vote for him. While it's tempting to write him off as a pundit charlatan, or claim he doesn't have sufficient proof, we also can't prove his model was wrong.) I'm interested in learning the reasons we may disagree or be reasonably skeptical of polls, knowing of course that it must be tested to know the true answer.
This is my first LW discussion post -- open to feedback on how it could be improved.
Everyone on this site obviously has an interest in being, on a personal level, more rational. That's, without need for argument, a good thing. (Although, if you do want to argue that, I can't stop you...)
As a society, we're clearly not very rational, and it's becoming a huge problem. Look at any political articles out there, and you'll see the same thing: angry people partitioned into angry groups, yelling at each other and confirming their own biases. The level of discourse is... low, shall we say.
While the obvious facet of rationality is trying to discern the signal above the noise, there's definitely another side: the art of convincing others. That can swing a little too close to Sophistry and putting the emphasis on personal gain, though. What we really need to do is outreach: promote rationality in the world around us. There's probably no-one reading this who hasn't been in an argument where being more rational and right didn't help at all, and maybe even made things worse. We've also all probably been on the other side of that, too. Admit it. But possibly the key word in that is 'argument': it frames the discussion as a confrontation, a fight that needs to be won.
Being the calm, rational person in a fight doesn't always work, though. It only takes one party to want a fight to have one, after all. When there's groups involved, the shouty passionate people tend to dominate, too. And they're currently dominating politics, and so all our lives. That's not a status quo any rationalist would be happy with, I think.
One of the problems with political/economic discussions is that we get polarised into taking absurd blanket positions and being unable to admit limitations or counter-arguments. I'm generally pretty far on the Left of the spectrum, but I will freely admit that the Right has both some very good points and a role to play: what is needed is a good dynamic tension between the two sides to ensure we don't go totally doolally either way. (Thesis, Antithesis, Synthesis etc.) And the tension is there, but it's certainly not good. We need to be able to point out failure modes to ourselves and others, encourage constructive criticism.
I think we need ways of both cooling the flames (both 1-on-1 and in groups), and strategies for promoting useful discussion.
So how can we do this? What can we do?
Sorry for the slightly clickbait-y title.
Some commenters have expressed, in the last open thread, their disappointment that figureheads from or near the rationality sphere seemed to have lost their cool when it came to this US election: when they were supposed to be calm and level-headed, they instead campaigned as if Trump was going to be the Basilisk incarnate.
I've not followed many commenters, mainly Scott Alexander and Eliezer Yudkowsky, and they both endorsed Clinton. I'll try to explain what their arguments were, briefly but as faithfully as possible. I'd like to know if you consider them mindkilled and why.
Please notice: I would like this to be a comment on methodology, about whether their arguments were sound given what they knew and believed. I most definitely do not want this to decay into a lamentation about the results, or insults to the obviously stupid side, etc.
Yudkowsky made two arguments against Trump: level B incompetence and high variance. Since the second is also more or less the same as Scott's, I'll just go with those.
Level B incompetence
Eliezer attended a pretty serious and wide diplomatic simulation game, which made him appreciate how difficult it is just to maintain a global equilibrium between countries and avoid nuclear annihilation. He says that there are three levels in politics:
- level 0, where everything that the media report and the politicians say is taken at face value: every drama is true, every problem is important and every cry of outrage deserves consideration;
- level A, where you understand that politics is as much about theatre and emotions as it is about policies: at this level players operate like in pro-wrestling, creating drama and conflict to steer the more gullible viewers towards the preferred direction; at this level cynicism is high and almost every conflict is a farce and probably staged.
But the buck doesn't stop there. As the diplomacy simulation taught him, there's also:
- level B, where everything becomes serious and important again. At this level, people work very hard at maintaining the status quo (outside of which lies mankind's extinction), and diplomatic relations and subtle international equilibria shield the world from much worse outcomes. Faux pas at this level have in the past resulted in wars, genocides and general widespread badness.
In August, fifty Republican security advisors signed a letter condemning Trump for his position on foreign policy: these are, Yudkowsky warned us, exactly those level B players, and they are telling us that Trump is an ill-advised choice.
Trump might be a fantastic level A player, but he is an incompetent level B player, and this might very well turn to disaster.
The second argument is a more general version of the first: if you look at a normal distribution, it's easy to think there are only two possibilities: you can either do worse than the average, or better. But in a higher-dimensional world, things are much more complicated. The status quo is fragile (see the first argument), surrounded not by an equal amount of things being good or being bad. Most substantial variations from the equilibrium are disasters, and if you put a high-variance candidate, someone whose main point is to subvert the status quo, in charge, then with overwhelming probability you're headed off a cliff.
People who voted for Trump are unrealistically optimistic, thinking that civilization is robust, that the current state is bad, and that variation can definitely help us escape a bad equilibrium.
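The high-variance argument can be made concrete with a small simulation (the payoff function below is mine, not Yudkowsky's; it just encodes "modest possible upside, catastrophic possible downside" around a fragile status quo):

import random

def utility(deviation):
    # Toy concave payoff: linear gains for small improvements,
    # quadratically accelerating losses for large deviations.
    return deviation - 5 * deviation ** 2

def expected_utility(spread, trials=100000):
    # Both candidates have the same mean outcome (zero); only the spread differs.
    rng = random.Random(0)
    return sum(utility(rng.gauss(0, spread)) for _ in range(trials)) / trials

print(expected_utility(0.1))  # low-variance candidate: about -0.05
print(expected_utility(1.0))  # high-variance candidate: about -5, far worse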
2016 LessWrong Diaspora Survey Analysis: Part Four (Politics, Calibration & Probability, Futurology, Charity & Effective Altruism)
The LessWrong survey has a very involved section dedicated to politics. In previous analysis the benefits of this weren't fully realized. In the 2016 analysis we can look at not just the political affiliation of a respondent, but what beliefs are associated with a certain affiliation. The charts below summarize most of the results.
Political Opinions By Political Affiliation
There were also some other questions in this section which aren't covered by the above charts.
Calibration And Probability Questions
I just couldn't analyze these, sorry guys. I put many hours into trying to get them into a decent format I could even read and that sucked up an incredible amount of time. It's why this part of the survey took so long to get out. Thankfully another LessWrong user, Houshalter, has kindly done their own analysis.
All my calibration questions were meant to satisfy a few essential properties:
- They should be 'self contained'. I.e., something you can reasonably answer, or at least try to answer, with a 5th grade science education and normal life experience.
- They should, at least to a certain extent, be Fermi Estimable.
- They should progressively scale in difficulty so you can see whether somebody understands basic probability or not. (E.g., in an 'or' question do they put a probability of less than 50% on being right? See the sketch below.)
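Here is the kind of coherence check that last property allows, as a minimal sketch (the numbers are hypothetical respondent answers, not survey data):

p_a = 0.30        # hypothetical respondent's P(A)
p_b = 0.40        # hypothetical respondent's P(B)
p_a_or_b = 0.25   # hypothetical respondent's P(A or B)

# P(A or B) = P(A) + P(B) - P(A and B), so it is at least max(P(A), P(B)).
if p_a_or_b < max(p_a, p_b):
    print("Incoherent: a disjunction cannot be less likely than either disjunct.")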
At least one person requested a workbook, so I might write more in the future. I'll obviously write more for the survey.
|Question||Mean||Median||Mode||Stdev|
|Please give the obvious answer to this question, so I can automatically throw away all surveys that don't follow the rules: What is the probability of a fair coin coming up heads?||49.821||50.0||50.0||3.033|
|What is the probability that the Many Worlds interpretation of quantum mechanics is more or less correct?||44.599||50.0||50.0||29.193|
|What is the probability that non-human, non-Earthly intelligent life exists in the observable universe?||75.727||90.0||99.0||31.893|
|...in the Milky Way galaxy?||45.966||50.0||10.0||38.395|
|What is the probability that supernatural events (including God, ghosts, magic, etc) have occurred since the beginning of the universe?||13.575||1.0||1.0||27.576|
|What is the probability that there is a god, defined as a supernatural intelligent entity who created the universe?||15.474||1.0||1.0||27.891|
|What is the probability that any of humankind's revealed religions is more or less correct?||10.624||0.5||1.0||26.257|
|What is the probability that an average person cryonically frozen today will be successfully restored to life at some future time, conditional on no global catastrophe destroying civilization before then?||21.225||10.0||5.0||26.782|
|What is the probability that at least one person living at this moment will reach an age of one thousand years, conditional on no global catastrophe destroying civilization in that time?||25.263||10.0||1.0||30.510|
|What is the probability that our universe is a simulation?||25.256||10.0||50.0||28.404|
|What is the probability that significant global warming is occurring or will soon occur, and is primarily caused by human actions?||83.307||90.0||90.0||23.167|
|What is the probability that the human race will make it to 2100 without any catastrophe that wipes out more than 90% of humanity?||76.310||80.0||80.0||22.933|
The probability questions are probably the area of the survey I put the least effort into. My plan for next year is to overhaul these sections entirely and try including some Tetlock-esque forecasting questions, a link to some advice on how to make good predictions, etc.
This section got a bit of a facelift this year, including new questions on cryonics, genetic engineering, and technological unemployment in addition to the previous years'.
Interestingly enough, of those who think it will work with enough confidence to say 'yes', only 14 are actually signed up for cryonics.
sqlite> select count(*) from data where CryonicsNow="Yes" and Cryonics="Yes - signed up or just finishing up paperwork";
sqlite> select count(*) from data where CryonicsNow="Yes" and (Cryonics="Yes - signed up or just finishing up paperwork" OR Cryonics="No - would like to sign up but unavailable in my area" OR "No - would like to sign up but haven't gotten around to it" OR "No - would like to sign up but can't afford it");
LessWrongers seem to be very bullish on the underlying physics of cryonics even if they're not as enthusiastic about current methods in use.
The Brain Preservation Foundation also did an analysis of cryonics responses to the LessWrong Survey.
By what year do you think the Singularity will occur? Answer such that you think, conditional on the Singularity occurring, there is an even chance of the Singularity falling before or after this year. If you think a singularity is so unlikely you don't even want to condition on it, leave this question blank.
Stdev: 2.847858859055733e+18
I didn't bother to filter out the silly answers for this.
Obviously it's a bit hard to see without filtering out the uber-large answers, but the median doesn't seem to have changed much from the 2014 survey.
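For anyone re-running the numbers, a sketch of the filtering step (the file name and column name are my assumptions about the data release, and the cutoffs are arbitrary):

import csv

with open("survey.csv") as f:
    answers = [float(row["SingularityYear"])
               for row in csv.DictReader(f)
               if row["SingularityYear"].strip()]

# Drop the uber-large (and too-early) answers before taking the median.
plausible = sorted(y for y in answers if 2016 <= y <= 3000)
print("median:", plausible[len(plausible) // 2], "n =", len(plausible))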
Well that's fairly overwhelming.
I find it amusing how the strict "No" group shrinks considerably after this question.
This question is too important to just not have an answer to, so I'll do it manually. Unfortunately I can't easily remove the 'excluded' entries so that we're dealing with the exact same distribution, but only 13 or so responses are filtered out anyway.
sqlite> select count(*) from data where GeneticImprovement="Yes";
>>> 1100 + 176 + 262 + 84
1622
>>> 1100 / 1622
0.6781750924784217
67.8% are willing to genetically engineer their children for improvements.
These numbers go about how you would expect, with people being progressively less interested the more 'shallow' a genetic change is seen as.
All three of these seem largely consistent with people's personal preferences about modification. Were I inclined, I could do a deeper analysis that actually takes survey respondents row by row and looks at the correlation between preference for one's own children and preference for others.
Do you think the Luddite's Fallacy is an actual fallacy?
Yes: 443 (30.936%)
No: 989 (69.064%)
We can use this as an overall measure of worry about technological unemployment, which would seem to be high among the LW demographic.
By what year do you think the majority of people in your country will have trouble finding employment for automation related reasons? If you think this is something that will never happen leave this question blank.
Stdev: 1180.2342850727339
Question is flawed because you can't distinguish answers of "never happen" from people who just didn't see it.
Interesting question that would be fun to take a look at in comparison to the estimates for the singularity.
Do you think the "end of work" would be a good thing?
Yes: 1238 (81.287%)
No: 285 (18.713%)
Fairly overwhelming consensus, but with a significant minority of people who have a dissenting opinion.
If machines end all or almost all employment, what are your biggest worries? Pick two.
|Worry||Count||%|
|People will just idle about in destructive ways||513||16.71%|
|People need work to be fulfilled and if we eliminate work we'll all feel deep existential angst||543||17.687%|
|The rich are going to take all the resources for themselves and leave the rest of us to starve or live in poverty||1066||34.723%|
|The machines won't need us, and we'll starve to death or be otherwise liquidated||416||13.55%|
The plurality of worries are about elites who refuse to share their wealth.
Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?
Nuclear war: +4.800% 326 (20.6%)
Asteroid strike: -0.200% 64 (4.1%)
Unfriendly AI: +1.000% 271 (17.2%)
Nanotech / grey goo: -2.000% 18 (1.1%)
Pandemic (natural): +0.100% 120 (7.6%)
Pandemic (bioengineered): +1.900% 355 (22.5%)
Environmental collapse (including global warming): +1.500% 252 (16.0%)
Economic / political collapse: -1.400% 136 (8.6%)
Other: 35 (2.217%)
Significantly more people worried about Nuclear War than last year. Effect of new respondents, or geopolitical situation? Who knows.
Charity And Effective Altruism
What is your approximate annual income in US dollars (non-Americans: convert at www.xe.com)? Obviously you don't need to answer this question if you don't want to. Please don't include commas or dollar signs.
How much money, in number of dollars, have you donated to charity over the past year? (non-Americans: convert to dollars at http://www.xe.com/ ). Please don't include commas or dollar signs in your answer. For example, 4000
How much money have you donated to charities aiming to reduce existential risk (other than MIRI/CFAR) in the past year?
How much have you donated in US dollars to the following charities in the past year? (Non-americans: convert to dollars at http://www.xe.com/) Please don't include commas or dollar signs in your answer. Options starting with "any" aren't the name of a charity but a category of charity.
|Charity||Total||Mean||Median||Mode||Stdev|
|Against Malaria Foundation||483935.027||1905.256||300.0||None||7216.020|
|Schistosomiasis Control Initiative||47908.0||840.491||200.0||1000.0||1618.785|
|Deworm the World Initiative||28820.0||565.098||150.0||500.0||1432.712|
|Any kind of animal rights charity||83130.47||1093.821||154.235||500.0||2313.493|
|Any kind of bug rights charity||1083.0||270.75||157.5||None||353.396|
|Machine Intelligence Research Institute||141792.5||1417.925||100.0||100.0||5370.485|
|Any charity combating nuclear existential risk||491.0||81.833||75.0||100.0||68.060|
|Any charity combating global warming||13012.0||245.509||100.0||10.0||365.542|
|Center For Applied Rationality||127101.0||3177.525||150.0||100.0||12969.096|
|Strategies for Engineered Negligible Senescence Research Foundation||9429.0||554.647||100.0||20.0||1156.431|
|Any campaign for political office||38443.99||366.133||50.0||50.0||1374.305|
This table is interesting given the recent debates about how much money certain causes are 'taking up' in Effective Altruism.
Do you follow any dietary restrictions related to animal products?
Yes, I am vegan: 54 (3.4%)
Yes, I am vegetarian: 158 (10.0%)
Yes, I restrict meat some other way (pescetarian, flexitarian, try to only eat ethically sourced meat): 375 (23.7%)
No: 996 (62.9%)
Do you know what Effective Altruism is?
Yes: 1562 (89.3%)
No but I've heard of it: 114 (6.5%)
No: 74 (4.2%)
Do you self-identify as an Effective Altruist?
Yes: 665 (39.233%)
No: 1030 (60.767%)
The distribution given by the 2014 survey results does not sum to one, so it's difficult to determine whether Effective Altruism's membership actually went up, but if we take the numbers at face value it experienced an 11.13% increase in membership.
Do you participate in the Effective Altruism community?
Yes: 314 (18.427%)
No: 1390 (81.573%)
Same issue as the last question; taking the numbers at face value, community participation went up by 5.727%.
Has Effective Altruism caused you to make donations you otherwise wouldn't?
Yes: 666 (39.269%)
No: 1030 (60.731%)
Effective Altruist Anxiety
Have you ever had any kind of moral anxiety over Effective Altruism?
Yes: 501 (29.6%)
Yes but only because I worry about everything: 184 (10.9%)
No: 1008 (59.5%)
There's an ongoing debate in Effective Altruism about what kind of rhetorical strategy is best for getting people on board and whether Effective Altruism is causing people significant moral anxiety.
It certainly appears to be. But is moral anxiety effective? Let's look:
Sample Size: 244
Average amount of money donated by people anxious about EA who aren't EAs: 257.5409836065574
Sample Size: 679
Average amount of money donated by people who aren't anxious about EA who aren't EAs: 479.7501384388807
Sample Size: 249
Average amount of money donated by EAs anxious about EA: 1841.5292369477913
Sample Size: 314
Average amount of money donated by EAs not anxious about EA: 1837.8248407643312
It seems fairly conclusive that anxiety is not a good way to get people to donate more than they already are, but is it a good way to get people to become Effective Altruists?
Sample Size: 1685
P(Effective Altruist): 0.3940652818991098
P(EA Anxiety): 0.29554896142433235
P(Effective Altruist | EA Anxiety): 0.5
Maybe. There is of course an argument to be made that sufficient good done by causing people anxiety outweighs feeding into people's scrupulosity, but it can be discussed after I get through explaining it on the phone to wealthy PR-conscious donors and telling the local all-kill shelter where I want my shipment of dead kittens.
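For anyone who wants to recompute those three numbers, a sketch against the same sqlite database used above (the table name matches the earlier queries; the column names and answer strings for the EA and anxiety questions are my assumptions about the schema):

import sqlite3

con = sqlite3.connect("survey.db")

def count(where=""):
    # Count survey rows, optionally restricted by a WHERE clause.
    return con.execute("select count(*) from data " + where).fetchone()[0]

total = count()
ea = count("where EffectiveAltruist='Yes'")
anxious = count("where EAAnxiety like 'Yes%'")
both = count("where EffectiveAltruist='Yes' and EAAnxiety like 'Yes%'")

print("P(Effective Altruist):", ea / total)
print("P(EA Anxiety):", anxious / total)
print("P(Effective Altruist | EA Anxiety):", both / anxious)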
What's your overall opinion of Effective Altruism?
Positive: 809 (47.6%)
Mostly Positive: 535 (31.5%)
No strong opinion: 258 (15.2%)
Mostly Negative: 75 (4.4%)
Negative: 24 (1.4%)
EA appears to be doing a pretty good job of getting people to like them.
[Table with columns Affiliation, Income, Charity Contributions, % Income Donated To Charity, Total Survey Charity %, and Sample Size; the data rows did not survive formatting.]
|Community||Count||% In Community||Sample Size|
|LessWrong Facebook Group||83||48.256%||172|
|Effective Altruism Hub||86||86.869%||99|
|Good Judgement(TM) Open||23||74.194%||31|
|#lesswrong on freenode||19||24.675%||77|
|#slatestarcodex on freenode||9||24.324%||37|
|#chapelperilous on freenode||2||18.182%||11|
|One or more private 'rationalist' groups||91||47.15%||193|
[Table with columns Affiliation, EA Income, EA Charity, and Sample Size; the data rows did not survive formatting.]
In which I examine some of the latest developments in automated fact checking and prediction markets for policies, and propose we get rich voting for robot politicians.
Yale Assistant Professor of Political Science Allan Dafoe is seeking Research Assistants for a project on the political dimensions of the existential risks posed by advanced artificial intelligence. The project will involve exploring issues related to grand strategy and international politics, reviewing possibilities for social scientific research in this area, and institution building. Familiarity with international relations, existential risk, Effective Altruism, and/or artificial intelligence is a plus but not necessary. The project is done in collaboration with the Future of Humanity Institute, located in the Faculty of Philosophy at the University of Oxford. There are additional career opportunities in this area, including in the coming academic year and in the future at Yale, Oxford, and elsewhere. If interested in the position, please email email@example.com with a copy of your CV, a writing sample, an unofficial copy of your transcript, and a short (200-500 word) statement of interest. Work can be done remotely, though being located in New Haven, CT or Oxford, UK is a plus.
I'm curious about your thoughts on my piece in Salon analyzing Trump's emotional appeal using rationality-informed ideas. My primary aim is using the Trump hook to get readers to consider the broader role of Systems 1 and 2 in politics, the backfire effect, wishful thinking, emotional intelligence, etc.
Suppose, for the purposes of argument, that HBD (human biodiversity: the claim that distinct populations of humans exist and have substantial genetic variance which accounts for some difference in average intelligence from population to population; I will be avoiding the word "race" here insomuch as possible) is true, and that all its proponents are correct in accusing the politicization of science of burying this information.
I seek to ask the more interesting question: Would it matter?
1. Societal Ramifications of HBD: Eugenics
So, we now have some kind of nice, tidy explanation for different characters among different groups of people. Okay. We have a theory. It has explanatory power. What can we do with it?
Unless you're willing to commit to eugenics of some kind (be it restricting reproduction or genetic alteration), not much of anything. And even given you're willing to commit to eugenics, HBD doesn't add anything: it doesn't actually change any of the arguments for eugenics - below-average people exist in every population group, and insofar as we regard below-average people as a problem, the genetic population they happen to belong to doesn't matter. If the point is to raise the average, the population group doesn't matter. If the point is to reduce the number of socially dependent individuals, the population group doesn't matter.
Worse, insofar as we use HBD as a determinant in eugenics, our eugenics are less effective. HBD says your population group has a relationship with intelligence; but if we're interested in intelligence, we have no reason to look at your population group, because we can measure intelligence more directly. There's no reason to use the proxy of population group if we're interested in intelligence, and indeed, every reason not to; it's significantly less accurate and politically and historically problematic.
Yet still worse for our eugenics advocate, insomuch as population groups do have significant genetic diversity, using population groups instead of direct measurements of intelligence is far more likely to cause disease transmission risks. (Genetic diversity is very important for population-level disease resistance. Just look at bananas.)
2. Social Ramifications of HBD: Social Assistance
Let's suppose we're not interested in eugenics. Let's suppose we're interested in maximizing our societal outcomes.
Well, again, HBD doesn't offer us anything new. We can already test intelligence, and insofar as HBD is accurate, intelligence tests are more accurate. So if we aim to streamline society, we don't need HBD to do so. HBD might offer an argument against affirmative action, in that we have different base expectations for different populations, but affirmative action already takes different base expectations into account (if you live in a city of 50% black people and 50% white people, but 10% of local lawyers are black, your local law firm isn't required to have 50% black lawyers, but 10%). We might desire to adjust the way we engage in affirmative action, insofar as affirmative action might not lead to the best results, but if you're interested in the best results, you can argue on the basis of best results without needing HBD.
I have yet to encounter someone who argues HBD who also argues we should do something with regard to HELPING PEOPLE on the basis of this, but that might actually be a more significant argument: If there are populations of people who are going to fall behind, that might be a good argument to provide additional resources to these populations of people, particularly if there are geographic correspondences - that is, if HBD is true, and if population groups are geographically segregated, individuals in these population groups will suffer disproportionately relative to their merits, because they don't have the local geographic social capital that equal-advantage people of other population groups would have. (An average person in a poor region will do worse than an average person in a rich region.) So HBD provides an argument for desegregation.
Curiously, HBD advocates have a tendency to argue that segregation would lead to the best outcome. I'd welcome arguments that concentrating an -absence- of social capital is a good idea.
3. Scientific Ramifications of HBD
Well, if HBD were true, it would mean science is politicized. This might be news to somebody, I guess.
4. Political Ramifications of HBD
We live in a meritocracy. It's actually not an ideal thing, contrary to the views of some people, because it results in a systematic merit segregation that has completely deprived the lower classes of intellectual resources; talk to older people sometime, who remember, when they worked in the coal mines (or whatever), the one guy you could trust to answer your questions and provide advice. Our meritocracy has advanced to the point where we are systematically stripping everybody of value out of the lower classes and redistributing them to the middle and upper classes.
HBD might be meaningful here. Insofar as people take HBD to its absurd extremes, it might actually result in an -improvement- for some lower-class groups, because if we stop taking all the intelligent people out of poor areas, there will still be intelligent people in those poor areas. But racism as a force of utilitarian good isn't something I care to explore in any great detail, mostly because if I'm wrong it would be a very bad thing, and also because none of its advocates actually suggest anything like this, being more interested in promoting segregation than desegregation.
It doesn't change much else, either. With HBD we continually run into the same problem - as a theory, it's the product of measuring individual differences, and as a theory, it doesn't add anything to our information that we don't already have with the individual differences.
5. The Big Problem: Individuality
Which is the crucial fault with HBD, iterated multiple times here, in multiple ways: It literally doesn't matter if HBD is true. All the information it -might- provide us with, we can get with much more accuracy using the same tests we might use to arrive at HBD. Anything we might want to do with the idea, we can do -better- without it.
HBD might predict we get fewer IQ-115, IQ-130, and IQ-145 people from particular population groups, but it doesn't actually rule them out. Insofar as this kind of information is useful, it's -more- useful to have more accurate information. HBD doesn't say "Black people are stupid", instead it says "The average IQ of black people is slightly lower than the average IQ of white people". But since "black people" isn't a thing that exists, but rather an abstract concept referring to a group of "black persons", and HBD doesn't make any predictions at the individual level we couldn't more accurately obtain through listening to a person speak for five seconds, it doesn't actually make any useful predictions. It adds literally nothing to our model of the world.
It's not the most important idea of the century. It's not important at all.
If you think it's true - okay. What does it -add- to your understanding of the world? What useful predictions does it make? How does it permit you to improve society? I've heard people insist it's this majorly important idea that the scientific and political establishment is suppressing. I'd like to introduce you to the aether, another idea that had explanatory power but made no useful predictions, and which was abandoned - not because anybody thought it was wrong, but because it didn't even rise to the level of wrong, because it was useless.
And that's what HBD is. A useless idea.
And even worse, it's a useless idea that's hopelessly politicized.
Trigger warning: politics is hard mode.
"How to you make America safer from terrorists" is the title of my op-ed published in Sun Sentinel, a very prominent newspaper in Florida, one of the most swingiest of the swing states in the US for the presidential election, and the one with the most votes. The maximum length of the op-ed was 450 words, and it was significantly edited by the editor, so it doesn't convey the full message I wanted with all the nuances, but such is life. My primary goal with the piece was to convey methods of thinking more rationally about politics, such as to use probabilistic thinking, evaluating the full consequences of our actions, and avoiding attention bias. I used the example of the proposal to police heavily Muslim neighborhoods as a case study. Hope this helps Floridians think more rationally and raises the sanity waterline regarding politics!
EDIT: To be totally clear, I used guesstimates for the numbers I suggested. Following Yvain/Scott Alexander's advice, I prefer to use guesstimates rather than vague statements.
I've said before that social reform often seems to require lying. Only one-sided narratives offering simple solutions motivate humans to act, so reformers manufacture one-sided narratives such as we find in Marxism or radical feminism, which inspire action through indignation. Suppose you tell someone, "Here's an important problem, but it's difficult and complicated. If we do X and Y, then after five years, I think we'd have a 40% chance of causing a 15% reduction in symptoms." They'd probably think they had something better to do.
But the examples I used in that previous post were all arguably bad social reforms: Christianity, Russian communism, and Cuban communism.
The argument that people need to be deceived into social reform assumes either that they're stupid, or that there's some game-theoretic reason why social reform that's very worthwhile to society as a whole isn't worthwhile to any individual in society.
Is that true? Or are people correct and justified in not making sudden changes until there's a clear problem and a clear solution to it?
Bertrand Russell, well aware there were health risks of smoking, defended his addiction in a videotaped interview. See if you can spot his fallacy!
Today on SBS (a radio channel in Australia) I heard reporters breaking the news that a Nature article reports that cancer is largely due to choices. I was shocked by what appeared to be gross violations of cultural norms around the blaming of victims. I wanted to investigate further, since science reporting is notoriously inaccurate.
The BBC reports:
Earlier this year, researchers sparked a debate after suggesting two-thirds of cancer types were down to luck rather than factors such as smoking.
Smoking is very harmful and very common. Globally, 21% of people over 15 smoke ( )
The case for its cause selection: Tobacco control
Tobacco is the leading preventable cause of death and disease in both the world (see: http://www.who.int/nmh/publications/fact_sheet_tobacco_en.pdf) and Australia (see: http://www.cancer.org.au/policy-and-advocacy/position-statements/smoking-and-tobacco-control/).
‘Tobacco smoking causes 20% of cancer deaths in Australia, making it the highest individual cancer risk factor. Smoking is a known cause of 16 different cancer types and is the main cause of Australia’s deadliest cancer, lung cancer. Smoking is responsible for 88% of lung cancer deaths in men and 75% of lung cancer cases in women in Australia.’
The World Health Organization’s Framework Convention on Tobacco Control (FCTC) was the first public health treaty ever negotiated.
Based on private information, the balance of healthcare costs against tax revenues as estimated by health advocates, compared with treasury estimates, may have been relevant to Australia’s leadership in tobacco regulation. That submission may or may not be adequate in complexity (i.e. taking into account, for instance, reduced lifespans’ impact on reduced pension payouts). There is a good article about the behavioural economics of tobacco regulation here (http://baselinescenario.com/2011/03/22/incentives-dont-work/).
Room for advocacy: low
There are many hundreds of consumer support and advocacy groups, and cancer charities across Australia.
Room for employment: low?
Room for consulting: high
The rigour of analysis, and the achievements themselves, in the Cancer Council of Australia annual review is underwhelming, as is the Cancer Council of Victoria’s annual report. There is a better organised body of evidence relating to their impact on their Wiki pages about effective interventions and policy priorities. At a glance, there appears to be room for more quantitative, methodologically rigorous and independent evaluation. I will be looking at GiveWell to see what recommendations can be translated. I will keep records of my findings to formulate draft guidelines for advising organisations in the Cancer Councils’ position, which, going by my vague memory of GiveWell’s claims, I estimate to be the majority position in the philanthropic space.
Here's my op-ed that uses long-term orientation, probabilistic thinking, numeracy, considering the alternative, reaching our actual goals, avoiding intuitive emotional reactions and attention bias, and other rationality techniques to suggest more rational responses to the Paris attacks and the ISIS threat. It's published in the Sunday edition of The Plain Dealer, a major newspaper (16th in the US). This is part of my broader project, Intentional Insights, of conveying rational thinking, including about politics, to a broad audience to raise the sanity waterline.
Cross-posted from the EA forum. I asked for questions for this test here on LW about a year ago. Thanks to those who contributed.
Rationally, your political values shouldn't affect your factual beliefs. Nevertheless, that often happens. Many factual issues are politically controversial - typically because the true answer makes a certain political course of action more plausible - and on those issues, many partisans tend to disregard politically uncomfortable evidence.
This sort of political bias has been demonstrated in a large number of psychological studies. For instance, Yale professor Dan Kahan and his collaborators showed in a fascinating experiment that on politically controversial questions, people are quite likely to commit mathematical mistakes that help them retain their beliefs, but much less likely to commit mistakes that would force them to give up those beliefs. Examples like this abound in the literature.
Political bias is likely to be a major cause of misguided policies in democracies (even the main one, according to economist Bryan Caplan). If they don’t have any special reason not to, people without special knowledge defer to the scientific consensus on technical issues. Thus, they do not interfere with the experts, who normally get things right. On politically controversial issues, however, they often let their political bias win over science and evidence, which means they’ll end up with false beliefs. And in a democracy, voters holding systematically false beliefs more often than not translates into misguided policy.
Can we reduce this kind of political bias? I’m fairly hopeful. One reason for optimism is that debiasing generally seems to be possible to at least some extent. This optimism of mine was strengthened by participating in a CFAR workshop last year. Political bias seems not to be fundamentally different from other kinds of biases and should thus be reducible too. But obviously one could argue against this view of mine. I’m happy to discuss this issue further.
Another reason for optimism is that it seems that the level of political bias is actually lower today than it was historically. People are better at judging politically controversial issues in a detached, scientific way today than they were in, say, the 14th century. This shows that progress is possible. There seems to be no reason to believe it couldn’t continue.
A third reason for optimism is that there seems to be a strong norm against political bias. Few people are consciously and intentionally politically biased. Instead, most people seem to believe themselves to be politically rational, and hold that as a very important value (or so I believe). They fail to see their own biases because of the bias blind spot: the bias that disables us from seeing our own biases.
Thus if you could somehow make it salient to people that they are biased, they would actually want to change. And if others saw how biased they are, the incentives to debias would be even stronger.
There are many ways in which you could make political bias salient. For instance, you could meticulously go through political debaters' arguments and point out fallacies, like I have done on my blog. I will post more about that later. Here I want to focus on another method, however, namely a political bias test which I have constructed with ClearerThinking, run by EA member Spencer Greenberg. Since learning how the test works might make you answer a bit differently, I will not explain how the test works here, but instead refer either to the explanatory sections of the test, or to Jess Whittlestone's Vox.com article (she is also an EA member).
Our hope is of course that people taking the test might start thinking more both about their own biases, and about the problem of political bias in general. We want this important topic to be discussed more. Our test is produced for the American market, but hopefully, it could work as a generic template for bias tests in other countries (akin to the Political Compass or Voting Advice Applications).
Here is a guide for making new bias tests (where the main criticisms of our test are also discussed). Also, we hope that the test could inspire academic psychologists and political scientists to construct full-blown scientific political bias tests.
This does not mean, however, that we think that such bias tests in themselves will get rid of the problem of political bias. We need to attack the problem of political bias from many other angles as well.
Follow-up to Reverse Engineering of Belief Structures
Pro-con-lists of arguments such as ProCon.org and BalancedPolitics.org serve a useful purpose. They give an overview of complex debates, and arguably foster nuance. My network for evidence-based policy is currently in the process of constructing a similar site in Swedish.
I'm thinking it might be interesting to add more features to such a site. You could let people create a profile on the site. Then you would let them fill in whether they agree or disagree with the theses under discussion (cannabis legalization, GM foods legalization, etc), and also whether they agree or disagree with the different arguments for and against these theses (alternatively, you could let them rate the arguments from 1-5).
Once you have this data, you could use it to give people different kinds of statistics. The most straightforward statistic would be their degree of "onesidedness". If you think that all of the arguments for the theses you believe in are good, and all the arguments against them are bad, then you're defined as onesided. If you, on the other hand, believe that some of your own side's arguments are bad, whereas some of the opponents' arguments are good, you're defined as not being onesided. (The exact mathematical function you would choose could be discussed.)
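As a concrete illustration, here is a minimal sketch of one such function in Python. The function name, the ±1 stance encoding and the normalisation are all my own assumptions; as noted above, the exact function is up for discussion.

```python
def onesidedness(stance, pro_ratings, con_ratings):
    """Crude onesidedness score on [-1, 1].

    stance: +1 if the user agrees with the thesis, -1 if they disagree.
    pro_ratings / con_ratings: the 1-5 ratings the user gave to the
    arguments for and against the thesis, respectively.
    1 means fully onesided (own side's arguments rated high, the other
    side's rated low); 0 means perfectly even-handed.
    """
    mean_pro = sum(pro_ratings) / len(pro_ratings)
    mean_con = sum(con_ratings) / len(con_ratings)
    # Signed gap between the side the user favours and the side they
    # oppose, normalised by the widest possible gap (5 - 1 = 4).
    return stance * (mean_pro - mean_con) / 4

# A user who agrees with the thesis, loves every pro argument and
# dismisses every con argument scores close to 1:
print(onesidedness(+1, [5, 5, 4], [1, 2, 1]))  # ~0.83
```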
Once you've told people how one-sided they are, according to the test, you would discuss what might explain onesidedness. My hunch is that the most plausible explanation normally is different kinds of bias. Instead of reviewing new arguments impartially, people treat arguments for their views more leniently than arguments against their views. Hence they end up being onesided, according to the test.
There are other possible explanations, though. One is that all of the arguments against the thesis in question actually are bad. That might happen occasionally, but I don't think it's very common. As Eliezer Yudkowsky says in "Policy Debates Should Not Appear One-sided":
On questions of simple fact (for example, whether Earthly life arose by natural selection) there's a legitimate expectation that the argument should be a one-sided battle; the facts themselves are either one way or another, and the so-called "balance of evidence" should reflect this. Indeed, under the Bayesian definition of evidence, "strong evidence" is just that sort of evidence which we only expect to find on one side of an argument.
But there is no reason for complex actions with many consequences to exhibit this onesidedness property.
Instead, the reason why people end up with one-sided beliefs is bias, Yudkowsky argues:
Why do people seem to want their policy debates to be one-sided?
Politics is the mind-killer. Arguments are soldiers. Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it's like stabbing your soldiers in the back. If you abide within that pattern, policy debates will also appear one-sided to you—the costs and drawbacks of your favored policy are enemy soldiers, to be attacked by any means necessary.
Especially if you're consistently one-sided in lots of different debates, it's hard to see that any other hypothesis besides bias is plausible. It depends a bit on what kinds of arguments you include in the list, though. In our lists we haven't really checked the quality of the arguments (our purpose is to summarize the debate, rather than to judge it), but you could also do that, of course.
My hope is that such a test would make people more aware both of their own biases, and of the problem of political bias in general. I'm thinking that is the first step towards debiasing. I've also constructed a political bias test with similar methods and purposes together with ClearerThinking, which should be released soon.
You could also add other features to a pro-con-list. For instance, you could classify arguments in different ways: ad hominem-arguments, consequentialist arguments, rights-based arguments, etc. (Some arguments might be hard to classify, and then you just wouldn't do that. You wouldn't necessarily have to classify every argument.) Using this info, you could give people a profile: e.g., what kinds of arguments do they find most persuasive? That could make them reflect more on what kinds of arguments really are valid.
You could also combine these two features. For instance, some people might accept ad hominem-arguments when they support their views, but not when they contradict them. That would make your use of ad hominem-arguments onesided.
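A sketch of how that combined check could work (a toy illustration only; the tuple format, the type labels and the helper name are my own inventions):

```python
from collections import defaultdict

def type_profile(rated_arguments):
    """Average rating given to each argument type, split by side.

    rated_arguments: tuples of (argument_type, side, rating), where side
    is +1 if the argument supports the user's own position and -1 if it
    opposes it, and rating is the user's 1-5 rating of the argument.
    A large own-vs-other gap for a type such as "ad hominem" suggests
    the user accepts that type of argument onesidedly.
    """
    ratings = defaultdict(lambda: {"own": [], "other": []})
    for arg_type, side, rating in rated_arguments:
        ratings[arg_type]["own" if side == +1 else "other"].append(rating)
    return {t: {k: (sum(v) / len(v) if v else None) for k, v in d.items()}
            for t, d in ratings.items()}

print(type_profile([("ad hominem", +1, 5), ("ad hominem", -1, 1),
                    ("consequentialist", +1, 4), ("consequentialist", -1, 4)]))
# ad hominem: own 5.0 vs other 1.0 -> onesided use of ad hominem arguments.
```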
Yet another feature that could be added is a standard political compass. Since people fill in which theses they believe in (cannabis legalization, GM foods legalization, etc), you could calculate which party is closest to them, based on the parties' stances on these issues. That could potentially make the test more attractive to take.
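A minimal sketch of that matching step (the parties, issues and stance encoding below are invented for illustration; real stances would come from the parties' actual platforms):

```python
# Hypothetical party platforms: issue -> +1 (for) or -1 (against).
PARTY_STANCES = {
    "Party A": {"cannabis_legalization": +1, "gm_foods_legalization": -1},
    "Party B": {"cannabis_legalization": -1, "gm_foods_legalization": +1},
    "Party C": {"cannabis_legalization": +1, "gm_foods_legalization": +1},
}

def closest_party(user_stances, party_stances=PARTY_STANCES):
    """Return the party that agrees with the user on the most issues.

    user_stances: dict mapping issue -> +1 (agree) or -1 (disagree).
    """
    def agreement(party):
        stances = party_stances[party]
        return sum(1 for issue, view in user_stances.items()
                   if stances.get(issue) == view)
    return max(party_stances, key=agreement)

# A user who supports both cannabis and GM food legalization:
print(closest_party({"cannabis_legalization": +1,
                     "gm_foods_legalization": +1}))  # -> Party C
```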
Suggestions of more possible features are welcome, as well as general comments - especially about implementation.
Politics is the mind-killer. Politics IS really the mind-killer. Please meditate on this until politics flows over you like butter on hot teflon, and your neurons stop fibrillating and resume their normal operations.
I've always found it silly that LW, one of the best and most focused groups of rationalists on the web, isn't able to talk evenly about politics. It's true that we are still human, but can't we just make an effort at being calm and level-headed? I think we can. Does gradual exposure work on groups, too? Maybe a little bit of effort combined with a little bit of exposure will work as a vaccine.
And maybe tomorrow a beautiful naked valkyrie will bring me to utopia on her flying unicorn...
Anyway, I want to try. Let's see what happens.
Two recent events have prompted me to make this post: I'm reading "The Rise of the Islamic State" by Patrick Cockburn, which I think does a good job of presenting fairly the very recent history surrounding ISIS, and the terrorist attack in Tunis by the same group, which resulted in 18 foreigners killed.
I believe that their presence in the region is now definitive: they control an area that is wider than Great Britain, with a population tallying over six million, not counting the territories controlled by affiliate groups like Boko Haram. Their influence is also expanding, and the attack in Tunis shows that this entity is not going to stay confined between the borders of Syria and Iraq.
It may well be the case that in the next ten years or so, this will be an international entity which will bring ideas and mores predating the Middle Ages back to the Mediterranean Sea.
A new kind of existential threat
To a mildly rational person, the conflict fueling the rise of the Islamic State, namely the doctrinal differences between Sunni and Shia Islam, is the worst kind of Blue/Green division: a separation that causes hundreds of billions of dollars (read that again) to be wasted trying to kill each other. But here it is, and the world must deal with it.
In comparison, Democrats and Republicans are so close that they could be mistaken for Aumann agreeing.
I fear that ISIS is bringing a new kind of existential threat: one where it is not the existence of humankind that is at risk, but the existence of the idea of rationality.
The funny thing is that while people can be extremely irrational, they can still work on technology to discover new things. Fundamentalism has never stopped a country from achieving technological progress: think about the wonderful skyscrapers and green patches in the desert of the Arab Emirates, or the nuclear weapons of Pakistan. So it might well be the case that in the future some scientist will start a seed AI believing that Allah will guide it to evolve in the best way. But it also might be that in the future, African, Asian and maybe European (gasp!) rationalists will be hunted down and killed like rats.
The very meme of rationality might be erased from existence.
I'll close with a bunch of questions, both strictly and loosely related. Mainly, I'm asking you to refrain from proposing a solution. Let's assess the situation first.
- Do you think that the Islamic State is an entity which will vanish in the future or not?
- Do you think that their particularly violent brand of jihadism is a worse menace to the sanity waterline than, say, other kinds of religious movements, past or present?
- Do you buy the idea that fundamentalism can be coupled with technological advancement, so that the future will present us with Islamic AIs?
- Do you think that the very idea of rationality can itself be the subject of existential risk?
- What do Neoreactionaries think of the Islamic State? After all, it's an exemplary case of the reactionaries in those areas winning big. I know it's only a surface comparison; I'm sincerely curious about what an NRx thinks of the situation.
I found Scott Alexander's steelmanning of the NRx critique to be an interesting, even persuasive critique of modern progressivism, having not been exposed to this movement prior to today. However, I am also equally confused by the jump from "modern liberal democracies are flawed" to "restore the divine right of kings!" I've always hated the quip "democracy is the worst form of government, except for all the others" (that we've yet tried), but I think it applies here.
Of course, with the prompting to state my own thoughts, I simply had to go and start typing them out. The following contains obvious traces of my own political leanings and philosophy (in short summary: if "Cthulhu only swims left", then I AM CTHULHU... at least until someone explains to me what a Great Old One is doing out of R'lyeh and in West Coast-flavored American politics), but those traces should be taken as evidence of what I believe rather than statements about it.
Because what I was actually trying to talk about, is rationality in politics. Because in fact, while it is hard, while it is spiders, all the normal techniques work on it. There is only one real Cardinal Sin of Attempting to be Rational in Politics, and it is the following argument, stated in generic form that I might capture it from the ether and bury it: "You only believe what you believe for political reasons!" It does not matter if those "reasons" are signaling, privilege, hegemony, or having an invisible devil on your shoulder whispering into your bloody ear: to impugn someone else's epistemology entirely at the meta-level without saying a thing against their object-level claims is anti-epistemology.
Now, on to the ranting! The following are more-or-less a semi-random collection of tips I vomited out for trying to deal with politics rationally. I hope they help. This is a Discussion post because Mark said that might be a good idea.
- Dissolve "democracy", and not just in the philosophical sense, but in the sense that there have been many different kinds of actually existing democracies. There are always multiple object-level implementations of any meta-level idea, and most political ideas are sufficiently abstract to count as meta-level. Even if, for purposes of a thought experiment, you find yourself saying, "I WILL ONLY EVER CONSIDER SYSTEMS THAT COUNT AS DEMOCRACY ACCORDING TO MY INTUITIVE DEMOCRACY-P() PREDICATE!", one can easily debate whether a mixed-member proportional Parliament performs better than a district-based bicameral Congress, or whether a pure Westminster system beats them both, or whether a Presidential system works better, or whatever. Particular institutional designs yield particular institutional behaviors, and successfully inducing complex generalizations across large categories of institutional designs requires large amounts of evidence -- just as it does in any other form of hierarchical probabilistic reasoning.
- Dissolve words like "democracy", "capitalism", "socialism", and "government" in the philosophical sense, and ask: what are the terminal goals democracy serves? How much do we support those goals, and how much do current democratic systems suffer approximation error by forcing our terminal goals to fit inside the hypothesis space our actual institutions instantiate? For however much we do support those goals, why do we shape these particular institutions to serve those goals, and not other institutions? For all values of X, mah nishtana ha-X hazeh mikol ha-X-im? ("why is this X different from all other Xs?") is a fundamental question of correct reasoning. (Asking the question of why we instantiate particular institutions in particular places, when one believes in democratic states, is the core issue of democratic socialism, and I would indeed count myself a democratic socialist. But you get different answers and inferences if you ask about schools or churches, don't you?)
- Learn first to explicitly identify yourself with a political "tribe", and next to consider political ideas individually, as questions of fact and value subject to investigation via epistemology and moral epistemology, rather than treating politics as "tribal". Tribalism is the mind-killer: keeping your own explicit tribal identification in mind helps you notice when you're being tribalist, and helps you distinguish your own tribe's customs from universal truths -- both aids to your political rationality. And yes, while politics has always been at least a little tribal, the particular form the tribes take varies through time and space: the division of society into a "blue tribe" and a "red tribe" (as oft-described by Yvain on Slate Star Codex), for example, is peculiar to late-20th-century and early-21st-century USA. Those colors didn't even come into usage until the 2000 Presidential election, and hadn't firmly solidified as describing seemingly separate nationalities until 2004! Other countries, and other times, have significantly different arrangements of tribes, so if you don't learn to distinguish between ideas and tribes, you'll not only fail at political rationality, you'll give yourself severe culture shock the first time you go abroad.
- General rule: you often think things are general rules of the world not because you have the large amount of evidence necessary to reason that they really are, but because you've seen so few alternatives that your subjective distribution over models contains only one or two models, both coarse-grained. Unquestioned assumptions always feel like universal truths from the inside!
- Learn to check political ideas by looking at the actually-existing implementations, including the ones you currently oppose -- think of yourself as bloody Sauron if you have to! This works, since most political ideas are not particularly original. Commons trusts exist, for example, the "movement" supporting them just wants to scale them up to cover all society's important common assets rather than just tracts of land donated by philanthropists. Universal health care exists in many countries. Monarchy and dictatorship exist in many countries. Religious rule exists in many countries. Free tertiary education exists in some countries, and has previously existed in more. Non-free but subsidized tertiary education exists in many countries. Running the state off oil revenue has been tried in many countries. Centrally-planned economies have been tried in many countries. And it's damn well easier to compare "Canadian health-care" to "American health-care" to "Chinese health-care", all sampled in 2014, using fact-based policy studies, than to argue about the Visions of Human Life represented by each (the welfare state, the Company Man, and the Lone Fox, let's say) -- which of course assumes consequentialism. In fact, I should issue a much stronger warning here: argumentation is an utterly unreliable guide to truth compared to data, and all these meta-level political conclusions require vast amounts of object-level data to induce correct causal models of the world that allow for proper planning and policy.
- This means that while the Soviet Union is not evidence for the total failure of "socialism" as I use the word, that's because I define socialism as a larger category of possible economies that strictly contains centralized state planning -- centralized state planning really was, by and large, a total fucking failure. But there's a rationality lesson here: in politics, all opponents of an idea will have their own definition for it, but the supporters will only have one. Learn to identify political terminology with the definitions advanced by supporters: these definitions might contain applause lights, but at least they pick out one single spot in policy-space or society-space (or, hopefully, a reasonably small subset of that space), while opponents don't generally agree on which precise point in policy-space or society-space they're actually attacking (because they're all opposed for their own reasons and thus not coordinating with each-other).
- This also means that if someone wants to talk about monarchies that rule by religious right, or even about absolute monarchies in general, they do have to account for the behavior of the Arab monarchies today, for example. Or if they want to talk about religious rule in general (which very few do, to my knowledge, but hey, let's go with it), they actually do have to account for the behavior of Da3esh/ISIS. Of course, they might do so by endorsing such regimes, just as some members of Western Communist Parties endorsed the Soviet Union -- and this can happen by lack of knowledge, by failure of rationality, or by difference of goals.
- And then of course, there are the complications of the real world: in the real world, neither perfect steelman-level central planning nor perfect steelman-level markets have ever been implemented, anywhere, with the result that once upon a time, the Soviet economy was allocatively efficient and prices in capitalist West Germany were just as bad at reflecting relative scarcities as those in centrally-planned East Germany! The real advantage of market systems has ended up being the autonomy of firms, not allocative optimality (and that's being argued, right there, in the single most left-wing magazine I know of!). Which leads us to repeat the warning: correct conclusions are induced from real-world data, not argued from a priori principles that usually turn out to be wildly mis-emphasized if not entirely wrong.
- Learn to notice when otherwise uninformed people are adopting political ideas as attire to gain status by joining a fashionable cause. Keep in mind that what constitutes "fashionable" depends on the joiner's own place in society, not on your opinions about them. For some people, things you and I find low-status (certain clothes or haircuts) are, in fact, high-status. See Yvain's "Republicans are Douchebags" post for an example in a Western context: names that the American Red Tribe considers solid and respectable are viewed by the American Blue Tribe as "douchebag names".
- A heuristic that tends to immunize against certain failures of political rationality: if an argument does not base itself at all in facts external to itself or to the listener, but instead concentrates entirely on reinterpreting evidence, then it is probably either an argument about definitions, or sheer nonsense. This is related to my comments on hierarchical reasoning above, and also to the general sense in which trying to refute an object-level claim by meta-level argumentation is not even wrong, but in fact anti-epistemology.
- A further heuristic, usable on actual electioneering campaigns the world over: whenever someone says "values", he is lying, and you should reach for your gun. The word "values" is the single most overused, drained, meaningless word in politics. It is a normative pronoun: it directs the listener to fill in warm fuzzy things here without concentrating the speaker and the listener on the same point in policy-space at all. All over the world, politicians routinely seek power on phrases like "I have values", or "My opponent has no values", or "our values" or "our $TRIBE values", or "$APPLAUSE_LIGHT values". Just cross those phrases and their entire containing sentences out with a big black marker, and then see what the speaker is actually saying. Sometimes, if you're lucky (ie: voting for a Democrat), they're saying absolutely nothing. Often, however, the word "values" means, "Good thing I'm here to tell you that you want this brand new oppressive/exploitative power elite, since you didn't even know!"
- As mentioned above, be very, very sure about what ethical framework you're working within before having a political discussion. A consequentialist and a virtue-ethicist will often take completely different policy positions on, say, healthcare, and have absolutely nothing to talk about with each-other. The consequentialist can point out the utilitarian gains of universal single-payer care, and the virtue-ethicist can point out the incentive structure of corporate-sponsored group plans for promoting hard work and loyalty to employers, but they are fundamentally talking past each-other.
- Often, the core matter of politics is how to trade off between ethical ideals that are otherwise left talking past each-other, because society has finite material resources, human morals are very complex, and real policies have unintended consequences. For example, if we enact Victorian-style "poor laws" that penalize poverty for virtue-ethical reasons, the proponents of those laws need to be held accountable for accepting the unintended consequences of those laws, including higher crime rates, a less educated workforce, etc. (This is a broad point in favor of consequentialism: a rational consequentialist always considers consequences, intended and unintended, or he fails at consequentialism. A deontologist or virtue-ethicist, on the other hand, has license from his own ethics algorithm to not care about unintended consequences at all, provided the rules get followed or the rules or rulers are virtuous.)
- Almost all policies can be enacted more effectively with state power, and almost no policies can "take over the world" by sheer superiority of the idea all by themselves. Demanding that a successful policy should "take over the world" by itself, as everyone naturally turns to the One True Path, is intellectually dishonest, and so is demanding that a policy should be maximally effective in miniature (when tried without the state, or in a small state, or in a weak state) before it is justified for the state to experiment with it. Remember: the overwhelming majority of journals and conferences in professional science still employ frequentist statistics rather than Bayesianism, and this is 20 years after the PC revolution and the World Wide Web, and 40 years after computers became widespread in universities. Human beings are utility-satisficing, adaptation-executing creatures with mostly-unknown utility functions: expecting them to adopt more effective policies quickly by mere effectiveness of the policy is downright unrealistic.
- The Appeal to Preconceptions is probably the single Darkest form of Dark Arts, and it's used everywhere in politics. When someone says something to you that "stands to reason" or "sounds right", which genuinely seems quite plausible, actually, but without actually providing evidence, you need to interrogate your own beliefs and find the Equivalent Sample Size of the informative prior generating that subjective plausibility before you let yourself get talked into anything. This applies triply in philosophy.
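To make the Equivalent Sample Size step concrete (this gloss and its numbers are mine, not the author's): if your prior about how often claims of this kind turn out true behaves like a Beta distribution, the equivalent sample size is just the sum of its pseudo-counts:

\[ \mathrm{ESS}\big(\mathrm{Beta}(\alpha, \beta)\big) = \alpha + \beta \]

So a claim that "sounds about 70% right" with the conviction of a Beta(7, 3) prior is implicitly claiming ten observations' worth of evidence. The question to ask before being talked into anything is whether you have actually seen ten independent cases, or just one vivid anecdote doing the work of ten.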
I've had several political arguments about That Which Must Not Be Named in the past few days with people of a wide variety of... strong opinions. I'm rather doubtful I've changed anyone's mind about anything, but I've spent a lot of time trying to do so. I also seem to have offended one person I know rather severely. Also, even if I have managed to change someone's mind about something through argument, it feels as though someone will end up having to argue with them later down the line when the next controversy happens.
It's very discouraging to feel this way. It is frustrating when making an argument is taken as a reason for personal attack. And it's annoying to me to feel like I'm being forced into something by the disapproval of others. I'm tempted to just retreat from democratic engagement entirely. But there are disadvantages to this; for example, it makes it easier to maintain irrational beliefs if you never talk to people who disagree with you.
I think a big part of the problem is that I have an irrational alief that makes me feel like my opinions are uniquely valuable and important to share with others. I do think I'm smarter, more moderate, and more creative than most. But the feeling's magnitude and influence over my behavior are far greater than what's justified by the facts.
How do I destroy this feeling? Indulging it satisfies some competitive urges of mine and boosts my self-esteem. But I think it's bad overall despite this, because it makes evaluating the social consequences of my choices more difficult. It's like a small addiction, and I have no idea how to get over it.
Does anyone else here have an opinion on any of this? Advice from your own lives, perhaps?
Say that you want to change some social or political institution: the educational system, the monetary system, research on AGI safety, or what not. When trying to reach this goal, you may use one of the following broad strategies (or some combination of them):
1) You may directly try to lobby (i.e. influence) politicians to implement this change, or try to influence voters to vote for parties that promise to implement these changes.
2) You may try to build an alternative system and hope that it eventually becomes so popular that it replaces the existing system.
3) You may try to develop tools that a) appeal to users of existing systems and b) whose widespread use is bound to change those existing systems.
Let me give some examples of what I mean. Trying to persuade politicians that we should replace conventional currencies with a private currency, or, for that matter, starting a pro-Bitcoin party, falls under 1), whereas starting a private currency and hoping that it spreads falls under 2). (This post was inspired by a great comment by Gunnar Zarncke on precisely this topic. I take it that he was there talking of strategy 2.) Similarly, trying to lobby politicians to reform the academia falls under 1), whereas starting new research institutions which use new and hopefully more effective methods falls under 2). I take it that this is what, e.g., Leverage Research is trying to do, in part. Similarly, libertarians who vote for Ron Paul are taking the first course, while at least one possible motivation for the Seasteading Institute is to construct an alternative system that proves to be more efficient than existing governments.
Efficient Voting Advice Applications (VAA's), which advise you how to vote on the basis of your views on different policy matters, can be an example of 3) (they are discussed here). Suppose that voters started to use them on a grand scale. This could potentially force politicians to adhere very closely to the views of the voters on each particular issue, since politicians who failed to do this would stand little chance of winning. This may or may not be a good thing, but the point is that it would be a change caused not by lobbying of politicians or by building an alternative system, but simply by constructing a tool whose widespread use could change the existing system.
Another similar tool is reputation or user review systems. Suppose that you're dissatisfied with the general standards of some institution: say university education, medical care, or what not. You may try to raise those standards by lobbying politicians to implement new regulations intended to ensure quality (1), or by starting your own, superior, universities or hospitals (2), hoping that others will follow. Another method is, however, to create a reliable reputation/review system which, if it became widely used, would guide students and patients to the best universities and hospitals, thereby incentivizing institutions to improve.
Now of course, when you're trying to get people to use such review systems, you are, in effect, building an evaluation system that competes with existing systems (e.g. the Guardian university ranking), so on one level you are using the second strategy. Your ultimate goal is, however, to create better universities, to which a better evaluation system is just a means (a tool). Hence you're following the third strategy here, in my terms.
Strategy 1) is of course a "statist" one, since what you're doing here is that you're trying to get the government to change the institution in question for you. Strategies 2) and 3) are, in contrast, both "non-statist", since when you use them you're not directly trying to implement the change through the political system. Hence libertarians and other anti-statists should prefer them.
My hunch is that when people are trying to change things, many of them unthinkingly go for 1), even regarding issues where it is unlikely that they are going to succeed that way. (For instance, it seems to me that advocates of direct democracy who try to persuade voters to vote for direct-democratic parties are unlikely to succeed, but that widespread use of VAA's might get us considerably closer to their ideal, and that they therefore should opt for the third strategy.) A plausible explanation of this is availability bias; our tendency to focus on what we most often see around us. Attempts to change social institutions through politics get a lot of attention, which makes people think of this strategy first. Even though this strategy is often efficient, I'd guess it is, for this reason, generally overused and that people sometimes instead should go for 2) or 3). (Possibly, Europeans have an even stronger bias in favour of this strategy than Americans.)
I also suspect, though, that people go for 2) a bit too often relative to 3). I think that people find it appealing, for its own sake, to create an entirely alternative structure. If you're a perfectionist, it might be satisfying to build what you consider "the perfect institution", even if it is very small and has little impact on society. Also, sometimes small groups of devotees flock to these alternatives, and a strong group identity is therefore created. Moreover, I think that availability bias may play a role here, also. Even though this sort of strategy gets less attention than lobbying, most people know what it is. It is quite clear what it means to do something like this, and being part of a project like this therefore gives you a clear identity. For these reasons, I think that we might sometimes fool ourselves into believing that these alternative structures are more likely to be successful than they actually are.
Conversely, people might be biased against the third strategy because it's less obvious. Also, it has perhaps something vaguely manipulative about it, which might bias idealistic people against it. What you're typically trying to do is to get people to use a tool (say VAA's) a side-effect of which is the change you wish to attain (in this case, correspondence between voters' views and actual policies). I don't think that this kind of manipulation is necessarily vicious (though it would need to be discussed on a case-by-case basis), but the point is that people tend to think that it is. Also, even those who don't think that it is manipulative in an unethical sense might still think that it is somehow "unheroic". Starting your own environmental party or creating your own artificial libertarian island clearly has something heroic about it; developing efficient VAA's, which as a side-effect change the political landscape, does not.
I'd thus argue that people should start looking more closely at the third strategy. A group that does use a strategy similar to this is, of course, for-profit companies. They try to analyze what products would appeal to people, and in so doing, carefully consider how existing institutions shape people's preferences. For instance, companies like Uber, AirBnB and LinkedIn have been successful because they realized that, given the structure of the taxi, hotel and recruitment businesses, their products would be appealing.
Of course, these companies' primary goal, profit, is very different from the political goals I'm talking about here. At the same time, I think it is useful to compare the two cases. I think that generally, when we're trying to attain political change, we're not "actually trying" (in CFAR's terminology) as hard as we do when we're trying to maximize profit. It is very easy to fall into a mode where you're focusing on making symbolic gestures (which express your identity) rather than on trying to change things in politics. (This is, in effect, what many traditional charities are doing, if the EA movement is right.)
Instead, we should think as hard as profit-maximizing companies do about what new tools are likely to catch on. Any kind of tool could in principle be used, but the ones that seem most obvious are various kinds of social media and other internet-based tools (such as those mentioned in this post). Technical progress gives us enormous opportunities to construct new tools that could re-shape people's behaviour in a way that would impact existing social and political institutions on a large scale.
Developing such tools is not easy. Even very successful companies again and again fail to predict what new products will appeal to people. Not least, you need a profound understanding of human psychology in order to succeed. That said, political organizations have certain advantages vis-à-vis for-profit companies. More often than not, they can develop ideas publicly, whereas for-profit companies often have to keep theirs secret until the product is launched. This facilitates wisdom-of-the-crowds reasoning, where many different kinds of people come up with solutions together. Such methods can, in my opinion, be very powerful.
Any input regarding, e.g. the taxonomy of methods, my speculations about biases, and, in particular, examples of institution changing tools are welcome. I'm also interested in comments on efficient methods for coming up with useful tools (e.g. tests of them). Finally, if anything's unclear I'd be happy to provide clarifications (it's a very complex topic).
My take on some historical religious/social/political movements:
- Jesus taught a radical and highly impractical doctrine of love and disregard for one's own welfare. Paul took control of much of the church that Jesus' charisma had built, and reworked this into something that could function in a real community, re-emphasizing the social mores and connections that Jesus had spent so much effort denigrating, and converting Jesus' emphasis on radical social action into an emphasis on theology and salvation.
- Marx taught a radical and highly impractical theory of how workers could take over the means of production and create a state-free Utopia. Lenin and Stalin took control of the organizations built around those theories, and reworked them into a strong, centrally-controlled state.
- Che Guevara (I'm ignorant here and relying on Wikipedia; forgive me) joined Castro's rebel group early on, rose to the position of second in command, was largely responsible for the military success of the revolution, and had great motivating influence due to his charisma and his unyielding, idealistic, impractical ideas. It turned out his idealism prevented him from effectively running government institutions, so he had to go looking for other revolutions to fight in while Castro ran Cuba.
The best strategy for complex social movements is not honest rationality, because rational, practical approaches don't generate enthusiasm. A radical social movement needs one charismatic radical who enunciates appealing, impractical ideas, and another figure who can appropriate all of the energy and devotion generated by the first figure's idealism, yet not be held to their impractical ideals. It's a two-step process that is almost necessary, to protect the pretty ideals that generate popular enthusiasm from the grit and grease of institution and government. Someone needs to do a bait-and-switch. Either the original vision must be appropriated and bent to a different purpose by someone practical, or the original visionary must be dishonest or self-deceiving.
Summary: I don't think 'politics is the mind-killer' works well rhetorically. I suggest 'politics is hard mode' instead.
My usual first objection is that it seems odd to single politics out as a “mind-killer” when there’s plenty of evidence that tribalism happens everywhere. Recently, there has been a whole kerfuffle within the field of psychology about replication of studies. Of course, some key studies have failed to replicate, leading to accusations of “bullying” and “witch-hunts” and what have you. Some of the people involved have since walked their language back, but it was still a rather concerning demonstration of mind-killing in action. People took “sides,” people became upset at people based on their “sides” rather than their actual opinions or behavior, and so on.
Unless this article refers specifically to electoral politics and Democrats and Republicans and things (not clear from the wording), “politics” is such a frightfully broad category of human experience that writing it off entirely as a mind-killer that cannot be discussed or else all rationality flies out the window effectively prohibits a large number of important issues from being discussed, by the very people who can, in theory, be counted upon to discuss them better than most. Is it “politics” for me to talk about my experience as a woman in gatherings that are predominantly composed of men? Many would say it is. But I’m sure that these groups of men stand to gain from hearing about my experiences, since some of them are concerned that so few women attend their events.
In this article, Eliezer notes, “Politics is an important domain to which we should individually apply our rationality — but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.” But that means that we all have to individually, privately apply rationality to politics without consulting anyone who can help us do this well. After all, there is no such thing as a discussant who is “rational”; there is a reason the website is called “Less Wrong” rather than “Not At All Wrong” or “Always 100% Right.” Assuming that we are all trying to be more rational, there is nobody better to discuss politics with than each other.
The rest of my objection to this meme has little to do with this article, which I think raises lots of great points, and more to do with the response that I’ve seen to it — an eye-rolling, condescending dismissal of politics itself and of anyone who cares about it. Of course, I’m totally fine if a given person isn’t interested in politics and doesn’t want to discuss it, but then they should say, “I’m not interested in this and would rather not discuss it,” or “I don’t think I can be rational in this discussion so I’d rather avoid it,” rather than sneeringly reminding me “You know, politics is the mind-killer,” as though I am an errant child. I’m well-aware of the dangers of politics to good thinking. I am also aware of the benefits of good thinking to politics. So I’ve decided to accept the risk and to try to apply good thinking there. [...]
I’m sure there are also people who disagree with the article itself, but I don’t think I know those people personally. And to add a political dimension (heh), it’s relevant that most non-LW people (like me) initially encounter “politics is the mind-killer” being thrown out in comment threads, not through reading the original article. My opinion of the concept improved a lot once I read the article.
In the same thread, Andrew Mahone added, “Using it in that sneering way, Miri, seems just like a faux-rationalist version of ‘Oh, I don’t bother with politics.’ It’s just another way of looking down on any concerns larger than oneself as somehow dirty, only now, you know, rationalist dirty.” To which Miri replied: “Yeah, and what’s weird is that that really doesn’t seem to be Eliezer’s intent, judging by the eponymous article.”
Eliezer replied briefly, to clarify that he wasn't generally thinking of problems that can be directly addressed in local groups (but happen to be politically charged) as "politics":
Hanson’s “Tug the Rope Sideways” principle, combined with the fact that large communities are hard to personally influence, explains a lot in practice about what I find suspicious about someone who claims that conventional national politics are the top priority to discuss. Obviously local community matters are exempt from that critique! I think if I’d substituted ‘national politics as seen on TV’ in a lot of the cases where I said ‘politics’ it would have more precisely conveyed what I was trying to say.
But that doesn't resolve the issue. Even if local politics is more instrumentally tractable, the worry about polarization and factionalization can still apply, and may still make it a poor epistemic training ground.
A subtler problem with banning “political” discussions on a blog or at a meet-up is that it’s hard to do fairly, because our snap judgments about what counts as “political” may themselves be affected by partisan divides. In many cases the status quo is thought of as apolitical, even though objections to the status quo are ‘political.’ (Shades of Pretending to be Wise.)
Because politics gets personal fast, it’s hard to talk about it successfully. But if you’re trying to build a community, build friendships, or build a movement, you can’t outlaw everything ‘personal.’
And selectively outlawing personal stuff gets even messier. Last year, daenerys shared anonymized stories from women, including several that discussed past experiences where the writer had been attacked or made to feel unsafe. If those discussions are made off-limits because they relate to gender and are therefore ‘political,’ some folks may take away the message that they aren’t allowed to talk about, e.g., some harmful or alienating norm they see at meet-ups. I haven’t seen enough discussions of this failure mode to feel super confident people know how to avoid it.
Since this is one of the LessWrong memes that’s most likely to pop up in cross-subcultural dialogues (along with the even more ripe-for-misinterpretation “policy debates should not appear one-sided”…), as a first (very small) step, my action proposal is to obsolete the ‘mind-killer’ framing. A better phrase for getting the same work done would be ‘politics is hard mode’:
1. ‘Politics is hard mode’ emphasizes that ‘mind-killing’ (= epistemic difficulty) is quantitative, not qualitative. Some things might instead fall under Middlingly Hard Mode, or under Nightmare Mode…
2. ‘Hard’ invites the question ‘hard for whom?’, more so than ‘mind-killer’ does. We’re used to the fact that some people and some contexts change what’s ‘hard’, so it’s a little less likely we’ll universally generalize.
3. ‘Mindkill’ connotes contamination, sickness, failure, weakness. In contrast, ‘Hard Mode’ doesn’t imply that a thing is low-status or unworthy. As a result, it’s less likely to create the impression (or reality) that LessWrongers or Effective Altruists dismiss out-of-hand the idea of hypothetical-political-intervention-that-isn’t-a-terrible-idea. Maybe some people do want to argue for the thesis that politics is always useless or icky, but if so it should be done in those terms, explicitly — not snuck in as a connotation.
4. ‘Hard Mode’ can’t readily be perceived as a personal attack. If you accuse someone of being ‘mindkilled’, with no context provided, that smacks of insult — you appear to be calling them stupid, irrational, deluded, or the like. If you tell someone they’re playing on ‘Hard Mode,’ that’s very nearly a compliment, which makes your advice that they change behaviors a lot likelier to go over well.
5. ‘Hard Mode’ doesn’t risk bringing to mind (e.g., gendered) stereotypes about communities of political activists being dumb, irrational, or overemotional.
6. ‘Hard Mode’ encourages a growth mindset. Maybe some topics are too hard to ever be discussed. Even so, ranking topics by difficulty encourages an approach where you try to do better, rather than merely withdrawing. It may be wise to eschew politics, but we should not fear it. (Fear is the mind-killer.)
7. Edit: One of the larger engines of conflict is that people are so much worse at noticing their own faults and biases than noticing others'. People will be relatively quick to dismiss others as 'mindkilled,' while frequently flinching away from or just-not-thinking 'maybe I'm a bit mindkilled about this.' Framing the problem as a challenge rather than as a failing might make it easier to be reflective and even-handed.
This is not an attempt to get more people to talk about politics. I think this is a better framing whether or not you trust others (or yourself) to have productive political conversations.
When I playtested this post, Ciphergoth raised the worry that 'hard mode' isn't scary-sounding enough. As dire warnings go, it's light-hearted—exciting, even. To which I say: good. Counter-intuitive fears should usually be argued into people (e.g., via Eliezer's politics sequence), not connotation-ninja'd or chanted at them. The cognitive content is more clearly conveyed by 'hard mode,' and if some group (people who love politics) stands to gain the most from internalizing this message, the message shouldn't cast that very group (people who love politics) in an obviously unflattering light. LW seems fairly memetically stable, so the main issue is what would make this meme infect friends and acquaintances who haven't read the sequences. (Or Dune.)
If you just want a scary personal mantra to remind yourself of the risks, I propose 'politics is SPIDERS'. Though 'politics is the mind-killer' is fine there too.
If you and your co-conversationalists haven’t yet built up a lot of trust and rapport, or if tempers are already flaring, conveying the message ‘I’m too rational to discuss politics’ or ‘You’re too irrational to discuss politics’ can make things worse. In that context, ‘politics is the mind-killer’ is the mind-killer. At least, it’s a needlessly mind-killing way of warning people about epistemic hazards.
‘Hard Mode’ lets you speak as the Humble Aspirant rather than the Aloof Superior. Strive to convey: ‘I’m worried I’m too low-level to participate in this discussion; could you have it somewhere else?’ Or: ‘Could we talk about something closer to Easy Mode, so we can level up together?’ More generally: If you’re worried that what you talk about will impact group epistemology, you should be even more worried about how you talk about it.