
Regulatory lags for New Technology [2013 notes]

5 gwern 31 May 2017 01:27AM

I found some old notes from June 2013 on time delays in how fast one can expect Western political systems & legislators to respond to new technical developments.

In general, response is slow and on the order of political cycles; one implication I take away is that an AI takeoff could happen over half a decade or more without any meaningful political control and would effectively be a ‘fast takeoff’, especially if it avoids any obvious mistakes.

1 Regulatory lag

“Regulatory delay” is the delay between the specific action required by regulators or legislatures to permit some new technology or method and the feasibility of the technology or method; “regulatory lag” is the converse, then, and is the gap between feasibility and reactive regulation of new technology. Computer software (and artificial intelligence in particular) is mostly unregulated, so it is subject to lag rather than delay.

Unfortunately, almost all research seems to focus on modeling lags in the context of heavily regulated industries (especially natural monopolies like insurance or utilities), and few studies compile data on how long a lag can be expected between a new innovation or technology and its regulation. As one would expect, the few results point to lags on the order of years; for example, Ippolito 1979 ("The Effects of Price Regulation in the Automobile Insurance Industry") finds that the period of price changes goes from 11 months in unregulated US states to 21 months in regulated states, suggesting the regulatory price-change framework itself causes a lag of almost a year.

Below, I cover some specific examples, attempting to estimate the lags myself:

(Nuclear weapons would be an interesting example but it’s hard to say what ‘lag’ would be inasmuch as they were born in government control and are subject to no meaningful global control; however, if the early proposals for a world government or unified nuclear weapon organization had gone through, they would also have represented a lag of at least 5 years.)


[Link] Political ideology

0 arunbharatula 24 May 2017 04:27AM

What conservatives and environmentalists agree on

9 PhilGoetz 08 April 2017 12:57AM

Today we had a sudden cold snap here in western Pennsylvania, with the temperature dropping 30 degrees F.  I was walking through a white field that had been green yesterday, looking at daffodils poking up through the snow and feeling irritated that they'd probably die.  It occurred to me that, if we could control the weather, people would probably vote for a smooth transition from winter to summer, and this would wreak some unforeseen environmental catastrophe, because it would suddenly make most survival strategies reliably sub-optimal.

This is typical environmentalist thinking:  Whenever you see something in the environment that you don't like, stop and step back before trying to change it.  Trust nature that there's some reason it is that way.  Interfere as little as possible.

The classic example is forest fires.  Our national park service used to try to stop all forest fires.  This policy changed in the 1960s for several reasons, including the observation that no new Sequoia saplings had sprouted since the beginning of fire suppression in the 19th century.  Fire is dangerous, destructive, and necessary.

It struck me that this cornerstone of environmentalism is also the cornerstone of social conservatism.


[Link] Win $1 Million For Your Bright Idea To Fix The World

0 morganism 18 March 2017 10:12PM

[Link] Dreaming of Political Bayescraft

1 Zack_M_Davis 06 March 2017 08:41PM

[Link] Weaponising Twitter bots and political algos.

1 morganism 05 March 2017 09:39PM

[Link] Towards a Post-Lies Future: Fighting "Alternative Facts" and "Post-Truth" Politics

0 Gleb_Tsipursky 22 February 2017 06:23PM

[Link] "The unrecognised simplicities of effective action #2: 'Systems engineering’ and 'systems management' - ideas from the Apollo programme for a 'systems politics'", Cummings 2017

6 gwern 17 February 2017 12:59AM

[Link] Why election models didn't predict Trump's victory — A primer on how polls and election models work

0 phl43 28 January 2017 07:51PM

[Link] A new blog with analyses of various topics (e. g. slavery and capitalism, why election models didn't predict Trump's victory)

2 phl43 26 January 2017 11:30PM

[Link] A Weird American in Trump's Post-Truth America

0 Gleb_Tsipursky 26 January 2017 10:26PM

80,000 Hours: EA and Highly Political Causes

30 The_Jaded_One 26 January 2017 09:44PM

this post is now crossposted to the EA forum

80,000 hours is a well known Effective Altruism organisation which does "in-depth research alongside academics at Oxford into how graduates can make the biggest difference possible with their careers". 

They recently posted a guide to donating which aims, in their words, to (my emphasis)

use evidence and careful reasoning to work out how to best promote the wellbeing of all. To find the highest-impact charities this giving season ... We ... summed up the main recommendations by area below

Looking below, we find a section on the problem area of criminal justice (US-focused), where the aim is outlined as follows (quoting from the Open Philanthropy "problem area" page):

investing in criminal justice policy and practice reforms to substantially reduce incarceration while maintaining public safety. 

Reducing incarceration whilst maintaining public safety seems like a reasonable EA cause, if we interpret "public safety" in a broad sense - that is, keep fewer people in prison whilst still getting almost all of the benefits of incarceration such as deterrent effects, prevention of crime, etc.

So what are the recommended charities? (my emphasis below)

1. Alliance for Safety and Justice 

"The Alliance for Safety and Justice is a US organization that aims to reduce incarceration and racial disparities in incarceration in states across the country, and replace mass incarceration with new safety priorities that prioritize prevention and protect low-income communities of color."  

They promote an article on their site called "black wounds matter", as well as a guide on how you can "Apply for VOCA Funding: A Toolkit for Organizations Working With Crime Survivors in Communities of Color and Other Underserved Communities".

2. Cosecha - (note that their url is www.lahuelga.com, which means "the strike" in Spanish) (my emphasis below)

"Cosecha is a group organizing undocumented immigrants in 50-60 cities around the country. Its goal is to build mass popular support for undocumented immigrants, in resistance to incarceration/detention, deportation, denigration of rights, and discrimination. The group has become especially active since the Presidential election, given the immediate threat of mass incarceration and deportation of millions of people."

Cosecha have a footprint in the news, for example this article:

They have the ultimate goal of launching massive civil resistance and non-cooperation to show this country it depends on us ...  if they wage a general strike of five to eight million workers for seven days, we think the economy of this country would not be able to sustain itself 

The article quotes Carlos Saavedra, who is directly mentioned by Open Philanthropy's Chloe Cockburn:

Carlos Saavedra, who leads Cosecha, stands out as an organizer who is devoted to testing and improving his methods, ... Cosecha can do a lot of good to prevent mass deportations and incarceration, I think his work is a good fit for likely readers of this post."

They mention other charities elsewhere on their site and in their writeup on the subject, such as the conservative Center for Criminal Justice Reform, but Cosecha and the Alliance for Safety and Justice are the ones that were chosen as "highest impact" and featured in the guide to donating.


Sometimes one has to be blunt: 80,000 hours is promoting the financial support of some extremely hot-button political causes, which may not be a good idea. Traditionalists/conservatives and those who are uninitiated to Social Justice ideology might look at The Alliance for Safety and Justice and Cosecha and label them as racists and criminals, and thereby be turned off by Effective Altruism, or even by the rationality movement as a whole.

There are standard arguments, for example this one by Robin Hanson from 10 years ago, about why it is not smart or "effective" to get into these political tugs-of-war if one wants to make a genuine difference in the world.

One could also argue that 80,000 hours' chosen charities go beyond the usual folly of political tugs-of-war. In addition to supporting extremely political causes, 80,000 hours could be accused of being somewhat intellectually dishonest about what goal they are actually trying to further.

Consider The Alliance for Safety and Justice. 80,000 Hours state that the goal of their work in the criminal justice problem area is to "substantially reduce incarceration while maintaining public safety". This is an abstract goal with very broad appeal, one that I am sure almost everyone agrees with. But then their more concrete policy in this area is to fund a charity that wants to "reduce racial disparities in incarceration" and "protect low-income communities of color". The latter is significantly different from the former - it isn't even close to being the same thing - and the difference is highly political. One could object that reducing racial disparities in incarceration is merely a means to the end of substantially reducing incarceration while maintaining public safety, since many people in prison in the US are "of color". However, this line of argument is a very politicized one and it might be wrong, or at least I don't see strong support for it. "Selectively release people of color and make society safer - endorsed by effective altruists!" struggles against known facts about recidivism rates across races, as well as an objection about the implicit conflation of equality of outcome and equality of opportunity. (And I do not want this to be interpreted as a claim of moral superiority of one race over others - merely a necessary exercise in coming to terms with facts and debunking implicit assumptions.) Men are incarcerated much more often than women, so what about reducing gender disparities in incarceration, whilst also maintaining public safety? Again, this is all highly political, laden with politicized implicit assumptions and language.

Cosecha is worse! They are actively planning potentially illegal activities like helping illegal immigrants evade the law (though IANAL), as well as activities which potentially harm the majority of US citizens such as a seven day nationwide strike whose intent is to damage the economy. Their URL is "The Strike" in Spanish. 

Again, the abstract goal is extremely attractive to almost anyone, but the concrete implementation is highly divisive. If some conservative altruist signed up to financially or morally support the abstract goal of "substantially reducing incarceration while maintaining public safety", and the EA organisations pursuing that goal, without reading the details, and then at a later point saw the details of Cosecha and The Alliance for Safety and Justice, they would rightly feel cheated. And to the objection that conservative altruists should read the description rather than just the heading: what are we doing writing headings so misleading that you'd feel cheated if you relied on them as summaries of the activity they are meant to summarize?


One possibility would be for 80,000 hours to be much more upfront about what they are trying to achieve here - maybe they like left-wing social justice causes, and want to help like-minded people donate money to such causes and to the particular groups favored in those circles. There's almost a nod and a wink to this when Chloe Cockburn says (my paraphrase of Saavedra, and emphasis, below):

I think his [A man who wants to lead a general strike of five to eight million workers for seven days so that the economy of the USA would not be able to sustain itself, in order to help illegal immigrants] work is a good fit for likely readers of this post

Alternatively, they could try to reinvigorate the idea that their "criminal justice" problem area is politically neutral and beneficial to everyone; the Open Philanthropy issue writeup talks about "conservative interest in what has traditionally been a solely liberal cause", after all. I would advise them to consider dropping The Alliance for Safety and Justice and Cosecha if they intend to do this. There may not be politically neutral charities in this area, or there may not be enough high-quality conservative charities to present a politically balanced set of recommendations. Setting up a growing donor-advised fund, or a prize for nonpartisan progress that genuinely intends to benefit everyone, including conservatives, people opposed to illegal immigration and people who are not "of color", might be an option to consider.

We could examine 80,000 hours' choice to back these organisations from a more overall-utilitarian/overall-effectiveness point of view, rather than limiting the analysis to the specific problem area. These two charities don't pass the smell test for altruistic consequentialism, pulling sideways on ropes, finding hidden levers that others are ignoring, etc. Is the best thing you can do with your smart EA money helping a charity that wants to get stuck into the culture war about which skin color is most over-represented in prisons? What about a second charity that wants to help people illegally immigrate at a time when immigration is the most divisive political topic in the western world?

Furthermore, Cosecha's plans for a nationwide strike and potential civil disobedience/showdown with Trump & co could push an already volatile situation in the US into something extremely ugly. The vast majority of people in the world (present and future) are not the specific group that Cosecha aims to help, but the set of people who could be harmed by the uglier versions of a violent and calamitous showdown in the US is basically the whole world. That means that even if P(Cosecha persuades Trump to do a U-turn on illegals) is 10 or 100 times greater than P(Cosecha precipitates a violent crisis in the USA), they may still be net-negative from an expected utility point of view. EA doesn't usually fund causes whose outcome distribution is heavily left-skewed so this argument is a bit unusual to have to make, but there it is. 
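The expected-utility point in the paragraph above can be made concrete with a toy calculation; the probabilities and utilities below are illustrative placeholders of my own, not estimates from this post or from anyone involved:

```python
# Toy expected-utility calculation for a cause with a heavy left tail.
# All numbers are illustrative placeholders, not real estimates.

p_good = 0.10        # P(the campaign succeeds): 100x more likely than the bad case
u_good = 1.0         # benefit of success, in arbitrary utility units

p_bad = 0.001        # P(the campaign precipitates a violent crisis)
u_bad = -10_000.0    # but the crisis is vastly worse in magnitude

expected_utility = p_good * u_good + p_bad * u_bad
print(expected_utility)  # 0.1 - 10.0 = -9.9: net-negative despite p_good >> p_bad
```

Even with success 100 times more likely than catastrophe, the heavy left tail dominates the expectation; the conclusion turns on the ratio of magnitudes, not the ratio of probabilities.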

Not only is Cosecha a cause that is (a) mind-killing and culture war-ish and (b) very tangentially related to the actual problem area it is advertised under by 80,000 hours, but it might also (c) be an anti-charity that produces net disutility (in expectation) in the form of a higher probability of a US civil war, funded with money that you donate to it.

Back on the topic of criminal justice and incarceration: opposition to reform often comes from conservative voters and politicians, so it might seem unlikely to a careful thinker that extra money on the left-wing side is going to be highly effective. Some intellectual judo is required; make conservatives think that it was their idea all along. So promoting the Center for Criminal Justice Reform sounds like the kind of smart, against-the-grain idea that might be highly effective! Well done, Open Philanthropy! Also in favor of this org: they don't copiously mention which races or person-categories they think are most important in their articles about criminal justice reform, the only culture war item I could find on them is the word "conservative" (and given the intellectual judo argument above, this counts as a plus), and they're not planning a national strike or other action with heavy tail risk. But that's the one that didn't make the cut for the 80,000 hours guide to donating!

The fact that they let Cosecha (and to a lesser extent The Alliance for Safety and Justice) through reduces my confidence in 80,000 hours and the EA movement as a whole. Who thought it would be a good idea to get EA into the culture war with these causes, and also thought that they were plausibly among the most effective things you can do with money? Are they taking effectiveness seriously? What does the political diversity of meetings at 80,000 hours look like? Were there no conservative altruists present in discussions surrounding The Alliance for Safety and Justice and Cosecha, and the promotion of them as "beneficial for everyone" and "effective"? 

Before we finish, I want to emphasize that this post is not intended to start an object-level discussion about which race, gender, political movement or sexual orientation is cooler, and I would encourage moderators to temp-ban people who try to have that kind of argument in the comments of this post.

I also want to emphasize that criticism of professional altruists is a necessary evil; in an ideal world the only thing I would ever want to say to people who dedicate their lives to helping others (Chloe Cockburn in particular, since I mentioned her name above)  is "thank you, you're amazing". Other than that, comments and criticism are welcome, especially anything pointing out any inaccuracies or misunderstandings in this post. Comments from anyone involved in 80,000 hours or Open Philanthropy are welcome. 

Metrics to evaluate a Presidency

0 ArisC 24 January 2017 01:02AM

I got lots of helpful comments in my first post, so I'll try a second: I want to develop a list of criteria by which to evaluate a presidency. Coming up with criteria and metrics on the economy is pretty easy, but I'd like to ask for suggestions on proxies for evaluating:

  • Racial relations;
  • Gender equality;
  • Impact on free trade / protectionism;
  • Education;
  • Any other significant factor that would determine whether a president is successful.
Note: a few people have pointed out that the president is constrained by senators and congressmen, etc. - I realise that; but if we are willing to admit that presidents do have some effect on society, we should be prepared to measure it.

Thanks!
A.

 

[Link] Disgust and Politics

1 gworley 17 January 2017 12:19AM

[Link] Donald Trump, the Dunning-Kruger President

0 morganism 13 January 2017 08:01PM

[Link] Dominic Cummings: how the Brexit referendum was won

16 The_Jaded_One 12 January 2017 09:26PM

[Link] Dominic Cummings: how the Brexit referendum was won

1 The_Jaded_One 12 January 2017 07:26PM

Rationality Considered Harmful (In Politics)

9 The_Jaded_One 08 January 2017 10:36AM

Why you should be very careful about trying to openly seek truth in any political discussion


1. Rationality considered harmful for Scott Aaronson in the great gender debate

In 2015, complexity theorist and rationalist Scott Aaronson was foolhardy enough to step into the Gender Politics war on his blog, with a comment stating that the extreme feminism he had bought into made him hate himself and seek out ways to chemically castrate himself. The feminist blogosphere got hold of this and crucified him for it, and he has written a few followup blog posts about it. Recently I saw this comment by him on his blog:

As the comment 171 affair blew up last year, one of my female colleagues in quantum computing remarked to me that the real issue had nothing to do with gender politics; it was really just about the commitment to truth regardless of the social costs—a quality that many of the people attacking me (who were overwhelmingly from outside the hard sciences) had perhaps never encountered before in their lives. That remark cheered me more than anything else at the time

 

2. Rationality considered harmful for Sam Harris in the islamophobia war

I recently heard a very angry, exasperated two-hour podcast by the new atheist and political commentator Sam Harris about how badly he has been straw-manned, misrepresented and trash-talked by his intellectual rivals (whom he collectively refers to as the "regressive left"). Sam Harris likes to tackle hard questions such as when torture is justified, which religions are more or less harmful than others, defence of freedom of speech, etc. Several times, Harris goes to the meta-level and sees clearly what is happening:

Rather than a searching and beautiful exercise in human reason to have conversations on these topics [ethics of torture, military intervention, Islam, etc], people are making it just politically so toxic, reputationally so toxic to even raise these issues that smart people, smarter than me, are smart enough not to go near these topics

Everyone on the left at the moment seems to be a mind reader.. no matter how much you try to take their foot out of your mouth, the mere effort itself is going to be counted against you - you're someone who's in denial, or you don't even understand how racist you are, etc

 

3. Rationality considered harmful when talking to your left-wing friends about genetic modification

I posted in the SlateStarCodex comments, complaining that many left-wing people were responding very personally (and negatively) to my political views.

One long-term friend openly and pointedly responded to a rational argument about why some modifications of the human germ line may in fact be a good thing - for example, altering the germ line via genetic engineering to permanently cure a genetic disease - by saying that "(s)he was beginning to wonder whether we should still be friends".

A large comment thread ensued, but the best comment I got was this one:

One of the useful things I have found when confused by something my brain does is to ask what it is *for*. For example: I get angry, the anger is counterproductive, but recognizing that doesn’t make it go away. What is anger *for*? Maybe it is to cause me to plausibly signal violence by making my body ready for violence or some such.

Similarly, when I ask myself what moral/political discourse among friends is *for* I get back something like “signal what sort of ally you would be/broadcast what sort of people you want to ally with.” This makes disagreements more sensible. They are trying to signal things about distribution of resources, I am trying to signal things about truth value, others are trying to signal things about what the tribe should hold sacred etc. Feeling strong emotions is just a way of signaling strong precommitments to these positions (i.e. I will follow the morality I am signaling now because I will be wracked by guilt if I do not. I am a reliable/predictable ally.) They aren’t mad at your positions. They are mad that you are signaling that you would defect when push came to shove about things they think are important.

Let me repeat that last one: moral/political discourse among friends is for "signalling what sort of ally you would be / broadcasting what sort of people you want to ally with". Moral/political discourse probably activates specially evolved brainware in human beings; that brainware has a purpose, and it isn't truthseeking. Politics is not about policy.

 

4. Takeaways

This post is already getting too long so I deleted the section on lessons to be learned, but if there is interest I'll do a followup. Let me know what you think in the comments!

Nassim Taleb on Election Forecasting

8 NatashaRostova 26 November 2016 07:06PM

Nassim Taleb recently posted this mathematical draft of election forecasting refinement to his Twitter.

The math isn’t super important to see why it’s so cool. His model is that we should forecast the election outcome, including the uncertainty between now and election day, rather than build a forecast that takes current poll numbers and implicitly assumes nothing changes.
The mechanism of his model focuses on forming an unbiased time-series, formulated using stochastic methods. Mainstream methods, by contrast, rely on multi-level Bayesian models that estimate how the election would turn out if it were held today.
 
That seems like it makes more sense. While it’s safe to assume a candidate will always want to have the highest chances of winning, the process by which two candidates interact is highly dynamic and strategic with respect to the election date.

When you stop to think about it, it’s actually remarkable that elections are so incredibly close to 50-50, with a 3-5% victory being generally immense. It captures this underlying dynamic of political game theory.

(At the more local level this isn’t always true, due to issues such as incumbent advantage, local party domination, strategic funding choices, and various other issues. The point though is that when those frictions are ameliorated due to the importance of the presidency, we find ourselves in a scenario where the equilibrium tends to be elections very close to 50-50.)

So back to the mechanism of the model: Taleb imposes a no-arbitrage condition (borrowed from options pricing) to impose time-varying consistency on the Brier score. This is a similar concept to financial options, where you can go bankrupt or make money even before the final event. In Taleb's world, if a guy like Nate Silver is producing forecasts that vary wildly over time prior to the election, this suggests he hasn't put any time-dynamic constraints on his model.

The math rests on the assumption that, with high uncertainty far out from the election, the best forecast is 50-50. This set of assumptions would have to be empirically tested. Still, stepping aside from the math, it does feel intuitive that an election forecast with high variation a year away from the event is not worth relying on, and that sticking closer to 50-50 would yield a better full-sample Brier score.
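The pull toward 50-50 can be illustrated with a toy simulation of my own (a sketch of the general intuition, not Taleb's actual model): if the vote share follows a Gaussian random walk until election day, the win probability of a small current lead shrinks toward 50% as the remaining uncertainty grows. The `daily_vol` parameter is a made-up placeholder:

```python
import math

def win_probability(current_share, days_left, daily_vol=0.005):
    """P(final vote share > 0.5) if the share follows a Gaussian
    random walk with the given daily volatility until election day."""
    if days_left == 0:
        return 1.0 if current_share > 0.5 else 0.0
    sigma = daily_vol * math.sqrt(days_left)       # total remaining uncertainty
    z = (current_share - 0.5) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# A 52% polling lead is near-decisive the day before the vote,
# but far from conclusive a year out:
print(win_probability(0.52, days_left=1))    # very close to 1
print(win_probability(0.52, days_left=365))  # much closer to 0.5
```

Under these assumptions, a forecaster publishing confident, wildly swinging probabilities a year out is implicitly claiming the remaining volatility is near zero - which is the inconsistency Taleb's no-arbitrage condition is meant to rule out.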


I'm not familiar enough with the practical modelling to say whether this is feasible. Sometimes the ideal models are too hard to estimate.

I'm interested in hearing any thoughts on this from people who are familiar with forecasting or have an interest in the modelling behind it.

I also have a specific question, to tie this back to a rationality-based framework: when you read Silver (or your preferred reputable election forecaster - I like Andrew Gelman) post their forecasts prior to the election, do you accept them as equal to or better than any estimate you could come up with? Or do you make a mental adjustment or discount based on some factor you think they've left out - whether prediction market variations, or adjustments based on perceived changes in nationalism or politician-specific skills? (E.g. Scott Adams claimed to be able to predict that Trump would persuade everyone to vote for him. While it's tempting to write him off as a pundit charlatan, or to claim he doesn't have sufficient proof, we also can't prove his model was wrong.) I'm interested in learning the reasons we may disagree with or be reasonably skeptical of polls, knowing of course that it must be tested to know the true answer.

This is my first LW discussion post - open to feedback on how it could be improved.

[Link] Polarization is the problem, "normalization" is the answer

3 Jacobian 23 November 2016 05:40PM

[Link] Irrationality is the worst problem in politics

-14 Gleb_Tsipursky 21 November 2016 04:53PM

Group rationality -- bridging the gap in a post-truth world

0 rosyatrandom 18 November 2016 01:44PM

Everyone on this site obviously has an interest in being, on a personal level, more rational. That's, without need for argument, a good thing. (Although, if you do want to argue that, I can't stop you...)

But...

As a society, we're clearly not very rational, and it's becoming a huge problem. Look at any political articles out there, and you'll see the same thing: angry people partitioned into angry groups, yelling at each other and confirming their own biases. The level of discourse is... low, shall we say. 

While the obvious facet of rationality is trying to discern the signal above the noise, there's definitely another side: the art of convincing others. That can swing a little too close to Sophistry and putting the emphasis on personal gain, though. What we really need to do is outreach: promote rationality in the world around us. There's probably no-one reading this who hasn't been in an argument where being more rational and right didn't help at all, and maybe even made things worse. We've also all probably been on the other side of that, too. Admit it. But possibly the key word in that is 'argument': it frames the discussion as a confrontation, a fight that needs to be won.

Being the calm, rational person in a fight doesn't always work, though. It only takes one party to want a fight to have one, after all. When there are groups involved, the shouty passionate people tend to dominate, too. And they're currently dominating politics, and so all our lives. That's not a status quo any rationalist would be happy with, I think.

One of the problems with political/economic discussions is that we get polarised into taking absurd blanket positions and being unable to admit limitations or counter-arguments. I'm generally pretty far on the Left of the spectrum, but I will freely admit that the Right has both some very good points and a role to play: what is needed is a good dynamic tension between the two sides to ensure we don't go totally doolally either way. (Thesis, Antithesis, Synthesis etc.) And the tension is there, but it's certainly not good. We need to be able to point out failure modes to ourselves and others, encourage constructive criticism.

I think we need ways of both cooling the flames (both 1-on-1 and in groups), and strategies for promoting useful discussion.

So how can we do this? What can we do?

[Link] Maine passes Ranked Choice Voting

6 morganism 14 November 2016 08:07PM

Yudkowsky vs Trump: the nuclear showdown.

9 MrMind 11 November 2016 11:30AM

Sorry for the slightly clickbait-y title.

Some commenters have expressed, in the last open thread, their disappointment that figureheads from or near the rationality sphere seemed to have lost their cool when it came to this US election: when they were supposed to be calm and level-headed, they instead campaigned as if Trump were going to be the Basilisk incarnate.

I've not followed many commenters, mainly Scott Alexander and Eliezer Yudkowsky, and they both endorsed Clinton. I'll try to explain what were their arguments, briefly but as faithfully as possible. I'd like to know if you consider them mindkilled and why.

Please notice: I would like this to be a comment on methodology, about if their arguments were sound given what they knew and believed. I most definitely do not want this to decay in a lamentation about the results, or insults to the obviously stupid side, etc.

Yudkowsky made two arguments against Trump: level B incompetence and high variance. Since the second is also more or less the same as Scott's, I'll just go with those.

Level B incompetence

Eliezer attended a pretty serious and wide-ranging diplomatic simulation game, which made him appreciate how difficult it is just to maintain a global equilibrium between countries and avoid nuclear annihilation. He says that there are three levels in politics:

- level 0, where everything that the media report and the politicians say is taken at face value: every drama is true, every problem is important and every cry of outrage deserves consideration;

- level A, where you understand that politics is as much about theatre and emotions as it is about policies: at this level players operate like in pro-wrestling, creating drama and conflict to steer the more gullible viewers in the preferred direction; at this level cynicism is high, and almost every conflict is a farce and probably staged.

But the buck doesn't stop there. As the diplomacy simulation taught him, there's also:

- level B, where everything becomes serious and important again. At this level, people work very hard at maintaining the status quo (outside of which lies mankind's extinction); diplomatic relations and subtle international equilibria shield the world from much worse outcomes. Faux pas at this level have in the past resulted in wars, genocides and general widespread badness.

In August, fifty Republican security advisors signed a letter condemning Trump for his positions on foreign policy: these are, Yudkowsky warned us, exactly those level B players, and they are telling us that Trump is an ill-advised choice.
Trump might be a fantastic level A player, but he is an incompetent level B player, and this might very well turn to disaster.

High variance

The second argument is a more general version of the first: looking at a normal distribution, it's easy to imagine only two possibilities: you can either do worse than average, or better. But in a many-dimensional world, things are much more complicated. The status quo is fragile (see the first argument), and is not surrounded by an equal measure of better and worse alternatives. Most substantial deviations from the equilibrium are disasters, and if you put a high-variance candidate, someone whose main selling point is subverting the status quo, in charge, then with overwhelming probability you're headed off a cliff.
People who voted for Trump are unrealistically optimistic, thinking that civilization is robust, that the current state is bad, and that variation can definitely help escape a bad equilibrium.
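The variance argument can be made concrete with a toy simulation (purely illustrative, not from Yudkowsky's post): if the payoff landscape peaks near the status quo and falls off in every direction, a higher-variance candidate has a worse expected outcome even though their best case looks better.

```python
import random

def expected_payoff(candidate_sd, trials=100_000, seed=0):
    """Toy model: the status quo sits at the peak of a fragile payoff
    landscape, so payoff falls off quadratically with any deviation.
    Expected payoff is then -sd^2: more variance is worse on average."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        deviation = rng.gauss(0, candidate_sd)
        total += -(deviation ** 2)
    return total / trials

low_var = expected_payoff(candidate_sd=1)   # status-quo-ish candidate
high_var = expected_payoff(candidate_sd=5)  # "subvert the status quo" candidate
assert high_var < low_var
```

The quadratic penalty is an arbitrary modelling choice; any payoff concave around the peak gives the same qualitative result.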

[Link] Major Life Course Change: Making Politics Less Irrational

-8 Gleb_Tsipursky 11 November 2016 03:30AM

[Link] Raising the sanity waterline in politics

-15 Gleb_Tsipursky 08 November 2016 04:10PM

[Link] Voting is like donating hundreds of thousands to charity

-6 Gleb_Tsipursky 02 November 2016 09:22PM

[Link] Trying to make politics less irrational by cognitive bias-checking the US presidential debates

-6 Gleb_Tsipursky 22 October 2016 02:32AM

[Link] Politics Is Upstream of AI

4 iceman 28 September 2016 09:47PM

2016 LessWrong Diaspora Survey Analysis: Part Four (Politics, Calibration & Probability, Futurology, Charity & Effective Altruism)

11 ingres 10 September 2016 03:51AM

Politics

The LessWrong survey has a very involved section dedicated to politics. In previous analysis the benefits of this weren't fully realized. In the 2016 analysis we can look at not just the political affiliation of a respondent, but what beliefs are associated with a certain affiliation. The charts below summarize most of the results.

Political Opinions By Political Affiliation

[Charts omitted: bar charts of the political-opinion questions broken down by the respondent's political affiliation.]
Miscellaneous Politics

There were also some other questions in this section which aren't covered by the above charts.

PoliticalInterest

On a scale from 1 (not interested at all) to 5 (extremely interested), how would you describe your level of interest in politics?

1: 67 (2.182%)

2: 257 (8.371%)

3: 461 (15.016%)

4: 595 (19.381%)

5: 312 (10.163%)

Voting

Did you vote in your country's last major national election? (LW Turnout Versus General Election Turnout By Country)
Group Turnout
LessWrong 68.9%
Australia 91%
Brazil 78.90%
Britain 66.4%
Canada 68.3%
Finland 70.1%
France 79.48%
Germany 71.5%
India 66.3%
Israel 72%
New Zealand 77.90%
Russia 65.25%
United States 54.9%
Numbers taken from Wikipedia, accurate as of the last general election in each country listed at time of writing.

AmericanParties

If you are an American, what party are you registered with?

Democratic Party: 358 (24.5%)

Republican Party: 72 (4.9%)

Libertarian Party: 26 (1.8%)

Other third party: 16 (1.1%)

Not registered for a party: 451 (30.8%)

(option for non-Americans who want an option): 541 (37.0%)

Calibration And Probability Questions

Calibration Questions

I just couldn't analyze these, sorry guys. I put many hours into trying to get them into a decent format I could even read and that sucked up an incredible amount of time. It's why this part of the survey took so long to get out. Thankfully another LessWrong user, Houshalter, has kindly done their own analysis.

All my calibration questions were meant to satisfy a few essential properties:

  1. They should be 'self contained', i.e. something you can reasonably answer or at least try to answer with a 5th grade science education and normal life experience.
  2. They should, at least to a certain extent, be Fermi Estimable.
  3. They should progressively scale in difficulty so you can see whether somebody understands basic probability or not. (e.g. in an 'or' question, do they put a probability of less than 50% on being right?)

At least one person requested a workbook, so I might write more in the future. I'll obviously write more for the survey.
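A minimal sketch of the kind of analysis these properties enable (hypothetical helper and toy data, not the survey pipeline): bucket answers by stated confidence and compare against the observed hit rate.

```python
from collections import defaultdict

def calibration_table(responses):
    """responses: iterable of (stated confidence in %, answered correctly?).
    Returns the observed hit rate per confidence bucket; a well-calibrated
    group is right about 70% of the time when it says '70%'."""
    buckets = defaultdict(lambda: [0, 0])  # bucket -> [correct, total]
    for confidence, correct in responses:
        bucket = int(round(confidence, -1))  # nearest 10%
        buckets[bucket][0] += int(correct)
        buckets[bucket][1] += 1
    return {b: correct / total for b, (correct, total) in sorted(buckets.items())}

# Toy data only:
toy = [(70, True), (70, True), (70, False), (90, True), (90, True)]
table = calibration_table(toy)
```

Here the 70% bucket has a hit rate of 2/3, i.e. these toy respondents are roughly calibrated at that confidence level.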

Probability Questions

Question Mean Median Mode Stdev
Please give the obvious answer to this question, so I can automatically throw away all surveys that don't follow the rules: What is the probability of a fair coin coming up heads? 49.821 50.0 50.0 3.033
What is the probability that the Many Worlds interpretation of quantum mechanics is more or less correct? 44.599 50.0 50.0 29.193
What is the probability that non-human, non-Earthly intelligent life exists in the observable universe? 75.727 90.0 99.0 31.893
...in the Milky Way galaxy? 45.966 50.0 10.0 38.395
What is the probability that supernatural events (including God, ghosts, magic, etc) have occurred since the beginning of the universe? 13.575 1.0 1.0 27.576
What is the probability that there is a god, defined as a supernatural intelligent entity who created the universe? 15.474 1.0 1.0 27.891
What is the probability that any of humankind's revealed religions is more or less correct? 10.624 0.5 1.0 26.257
What is the probability that an average person cryonically frozen today will be successfully restored to life at some future time, conditional on no global catastrophe destroying civilization before then? 21.225 10.0 5.0 26.782
What is the probability that at least one person living at this moment will reach an age of one thousand years, conditional on no global catastrophe destroying civilization in that time? 25.263 10.0 1.0 30.510
What is the probability that our universe is a simulation? 25.256 10.0 50.0 28.404
What is the probability that significant global warming is occurring or will soon occur, and is primarily caused by human actions? 83.307 90.0 90.0 23.167
What is the probability that the human race will make it to 2100 without any catastrophe that wipes out more than 90% of humanity? 76.310 80.0 80.0 22.933


The probability questions are probably the area of the survey I put the least effort into. My plan for next year is to overhaul these sections entirely and try including some Tetlock-esque forecasting questions, a link to some advice on how to make good predictions, etc.

Futurology

This section got a bit of a facelift this year, with new questions on cryonics, genetic engineering, and technological unemployment in addition to the previous years' questions.

Cryonics

Cryonics

Are you signed up for cryonics?

Yes - signed up or just finishing up paperwork: 48 (2.9%)

No - would like to sign up but unavailable in my area: 104 (6.3%)

No - would like to sign up but haven't gotten around to it: 180 (10.9%)

No - would like to sign up but can't afford it: 229 (13.8%)

No - still considering it: 557 (33.7%)

No - and do not want to sign up for cryonics: 468 (28.3%)

Never thought about it / don't understand: 68 (4.1%)

CryonicsNow

Do you think cryonics, as currently practiced by Alcor/Cryonics Institute will work?

Yes: 106 (6.6%)

Maybe: 1041 (64.4%)

No: 470 (29.1%)

Interestingly enough, of those who think it will work with enough confidence to say 'yes', only 14 are actually signed up for cryonics.

sqlite> select count(*) from data where CryonicsNow="Yes" and Cryonics="Yes - signed up or just finishing up paperwork";

14

sqlite> select count(*) from data where CryonicsNow="Yes" and (Cryonics="Yes - signed up or just finishing up paperwork" OR Cryonics="No - would like to sign up but unavailable in my area" OR "No - would like to sign up but haven't gotten around to it" OR "No - would like to sign up but can't afford it");

34
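Incidentally, the later OR clauses in that second query omit the column name, and SQLite coerces a bare string to 0 (false) in boolean context, so those disjuncts silently match nothing; the intended query needs `Cryonics =` in every disjunct, or an `IN` list. A demonstration against a miniature hypothetical table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (CryonicsNow TEXT, Cryonics TEXT)")
conn.executemany("INSERT INTO data VALUES (?, ?)", [
    ("Yes", "Yes - signed up or just finishing up paperwork"),
    ("Yes", "No - would like to sign up but can't afford it"),
    ("No",  "Yes - signed up or just finishing up paperwork"),
])

# Buggy pattern: the bare string literal evaluates to false, so the
# second disjunct never matches anything.
buggy = conn.execute("""SELECT count(*) FROM data WHERE CryonicsNow='Yes'
    AND (Cryonics='Yes - signed up or just finishing up paperwork'
         OR 'No - would like to sign up but can''t afford it')""").fetchone()[0]

# Intended pattern: name the column in every disjunct (IN is equivalent).
fixed = conn.execute("""SELECT count(*) FROM data WHERE CryonicsNow='Yes'
    AND Cryonics IN ('Yes - signed up or just finishing up paperwork',
                     'No - would like to sign up but can''t afford it')""").fetchone()[0]

print(buggy, fixed)  # 1 2
```

So the count of 34 reported above likely reflects only the disjuncts that name the column.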

CryonicsPossibility

Do you think cryonics works in principle?

Yes: 802 (49.3%)

Maybe: 701 (43.1%)

No: 125 (7.7%)

LessWrongers seem to be very bullish on the underlying physics of cryonics even if they're not as enthusiastic about current methods in use.

The Brain Preservation Foundation also did an analysis of cryonics responses to the LessWrong Survey.

Singularity

SingularityYear

By what year do you think the Singularity will occur? Answer such that you think, conditional on the Singularity occurring, there is an even chance of the Singularity falling before or after this year. If you think a singularity is so unlikely you don't even want to condition on it, leave this question blank.

Mean: 8.110300081581755e+16

Median: 2080.0

Mode: 2100.0

Stdev: 2.847858859055733e+18

I didn't bother to filter out the silly answers for this.

Obviously it's a bit hard to see without filtering out the uber-large answers, but the median doesn't seem to have changed much from the 2014 survey.
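A sketch of the filtering that would tame those figures (hypothetical helper; the plausibility window is an arbitrary choice): drop answers outside a sane range before computing summary statistics. The median barely moves, while the mean snaps back to something interpretable.

```python
import statistics

def summarize_years(answers, lo=2016, hi=3000):
    """Summary statistics over answers inside an explicit plausibility
    window, so a handful of joke answers (e.g. 8e16) can't swamp the mean."""
    kept = [a for a in answers if lo <= a <= hi]
    return {"n_kept": len(kept),
            "mean": statistics.mean(kept),
            "median": statistics.median(kept)}

# Toy responses including one joke answer:
result = summarize_years([2040, 2060, 2080, 2100, 8.1e16])
```

On this toy input the joke answer is discarded and mean and median agree at 2070.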

Genetic Engineering

ModifyOffspring

Would you ever consider having your child genetically modified for any reason?

Yes: 1552 (95.921%)

No: 66 (4.079%)

Well that's fairly overwhelming.

GeneticTreament

Would you be willing to have your child genetically modified to prevent them from getting an inheritable disease?

Yes: 1387 (85.5%)

Depends on the disease: 207 (12.8%)

No: 28 (1.7%)

I find it amusing how the strict "No" group shrinks considerably after this question.

GeneticImprovement

Would you be willing to have your child genetically modified for improvement purposes? (eg. To heighten their intelligence or reduce their risk of schizophrenia.)

Yes : 0 (0.0%)

Maybe a little: 176 (10.9%)

Depends on the strength of the improvements: 262 (16.2%)

No: 84 (5.2%)

Yes, I know 'yes' is bugged. I don't know what causes this bug, and despite my best efforts I couldn't track it down. There is also an issue here where 'reduce their risk of schizophrenia' is offered as an example of improvement, which might confuse people, but the actual science of things cuts closer to that than it does to a clean separation between disease risk and 'improvement'.


This question is too important to just not have an answer to, so I'll do it manually. Unfortunately I can't easily remove the 'excluded' entries so that we're dealing with the exact same distribution, but only 13 or so responses are filtered out anyway.

sqlite> select count(*) from data where GeneticImprovement="Yes";

1100

>>> 1100 + 176 + 262 + 84
1622
>>> 1100 / 1622
0.6781750924784217

67.8% are willing to genetically engineer their children for improvements.

GeneticCosmetic

Would you be willing to have your child genetically modified for cosmetic reasons? (eg. To make them taller or have a certain eye color.)

Yes: 500 (31.0%)

Maybe a little: 381 (23.6%)

Depends on the strength of the improvements: 277 (17.2%)

No: 455 (28.2%)

These numbers go about how you would expect, with people being progressively less interested the more 'shallow' a genetic change is seen as.


GeneticOpinionD

What's your overall opinion of other people genetically modifying their children for disease prevention purposes?

Positive: 1177 (71.7%)

Mostly Positive: 311 (19.0%)

No strong opinion: 112 (6.8%)

Mostly Negative: 29 (1.8%)

Negative: 12 (0.7%)

GeneticOpinionI

What's your overall opinion of other people genetically modifying their children for improvement purposes?

Positive: 737 (44.9%)

Mostly Positive: 482 (29.4%)

No strong opinion: 273 (16.6%)

Mostly Negative: 111 (6.8%)

Negative: 38 (2.3%)

GeneticOpinionC

What's your overall opinion of other people genetically modifying their children for cosmetic reasons?

Positive: 291 (17.7%)

Mostly Positive: 290 (17.7%)

No strong opinion: 576 (35.1%)

Mostly Negative: 328 (20.0%)

Negative: 157 (9.6%)

All three of these seem largely consistent with people's personal preferences about modification. Were I inclined, I could do a deeper analysis that takes survey respondents row by row and looks at the correlation between preferences for one's own children and preferences for others'.

Technological Unemployment

LudditeFallacy

Do you think the Luddite's Fallacy is an actual fallacy?

Yes: 443 (30.936%)

No: 989 (69.064%)

We can use this as an overall measure of worry about technological unemployment, which would seem to be high among the LW demographic.

UnemploymentYear

By what year do you think the majority of people in your country will have trouble finding employment for automation related reasons? If you think this is something that will never happen leave this question blank.

Mean: 2102.9713740458014

Median: 2050.0

Mode: 2050.0

Stdev: 1180.2342850727339

Question is flawed because you can't distinguish answers of "never happen" from people who just didn't see it.

Interesting question that would be fun to take a look at in comparison to the estimates for the singularity.

EndOfWork

Do you think the "end of work" would be a good thing?

Yes: 1238 (81.287%)

No: 285 (18.713%)

Fairly overwhelming consensus, but with a significant minority of people who have a dissenting opinion.

EndOfWorkConcerns

If machines end all or almost all employment, what are your biggest worries? Pick two.

Question Count Percent
People will just idle about in destructive ways 513 16.71%
People need work to be fulfilled and if we eliminate work we'll all feel deep existential angst 543 17.687%
The rich are going to take all the resources for themselves and leave the rest of us to starve or live in poverty 1066 34.723%
The machines won't need us, and we'll starve to death or be otherwise liquidated 416 13.55%
Question is flawed because it demanded the user 'pick two' instead of up to two.

The plurality of worries are about elites who refuse to share their wealth.

Existential Risk

XRiskType

Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?

Nuclear war: +4.800% 326 (20.6%)

Asteroid strike: -0.200% 64 (4.1%)

Unfriendly AI: +1.000% 271 (17.2%)

Nanotech / grey goo: -2.000% 18 (1.1%)

Pandemic (natural): +0.100% 120 (7.6%)

Pandemic (bioengineered): +1.900% 355 (22.5%)

Environmental collapse (including global warming): +1.500% 252 (16.0%)

Economic / political collapse: -1.400% 136 (8.6%)

Other: 35 (2.217%)

Significantly more people worried about Nuclear War than last year. Effect of new respondents, or geopolitical situation? Who knows.

Charity And Effective Altruism

Charitable Giving

Income

What is your approximate annual income in US dollars (non-Americans: convert at www.xe.com)? Obviously you don't need to answer this question if you don't want to. Please don't include commas or dollar signs.

Sum: 66054140.47384

Mean: 64569.052271593355

Median: 40000.0

Mode: 30000.0

Stdev: 107297.53606321265

IncomeCharityPortion

How much money, in number of dollars, have you donated to charity over the past year? (non-Americans: convert to dollars at http://www.xe.com/ ). Please don't include commas or dollar signs in your answer. For example, 4000

Sum: 2389900.6530000004

Mean: 2914.5129914634144

Median: 353.0

Mode: 100.0

Stdev: 9471.962766896671

XriskCharity

How much money have you donated to charities aiming to reduce existential risk (other than MIRI/CFAR) in the past year?

Sum: 169300.89

Mean: 1991.7751764705883

Median: 200.0

Mode: 100.0

Stdev: 9219.941506342007

CharityDonations

How much have you donated in US dollars to the following charities in the past year? (Non-americans: convert to dollars at http://www.xe.com/) Please don't include commas or dollar signs in your answer. Options starting with "any" aren't the name of a charity but a category of charity.

Question Sum Mean Median Mode Stdev
Against Malaria Foundation 483935.027 1905.256 300.0 None 7216.020
Schistosomiasis Control Initiative 47908.0 840.491 200.0 1000.0 1618.785
Deworm the World Initiative 28820.0 565.098 150.0 500.0 1432.712
GiveDirectly 154410.177 1429.723 450.0 50.0 3472.082
Any kind of animal rights charity 83130.47 1093.821 154.235 500.0 2313.493
Any kind of bug rights charity 1083.0 270.75 157.5 None 353.396
Machine Intelligence Research Institute 141792.5 1417.925 100.0 100.0 5370.485
Any charity combating nuclear existential risk 491.0 81.833 75.0 100.0 68.060
Any charity combating global warming 13012.0 245.509 100.0 10.0 365.542
Center For Applied Rationality 127101.0 3177.525 150.0 100.0 12969.096
Strategies for Engineered Negligible Senescence Research Foundation 9429.0 554.647 100.0 20.0 1156.431
Wikipedia 12765.5 53.189 20.0 10.0 126.444
Internet Archive 2975.04 80.406 30.0 50.0 173.791
Any campaign for political office 38443.99 366.133 50.0 50.0 1374.305
Other 564890.46 1661.442 200.0 100.0 4670.805
"Bug Rights" charity was supposed to be a troll fakeout but apparently...

This table is interesting given the recent debates about how much money certain causes are 'taking up' in Effective Altruism.

Effective Altruism

Vegetarian

Do you follow any dietary restrictions related to animal products?

Yes, I am vegan: 54 (3.4%)

Yes, I am vegetarian: 158 (10.0%)

Yes, I restrict meat some other way (pescetarian, flexitarian, try to only eat ethically sourced meat): 375 (23.7%)

No: 996 (62.9%)

EAKnowledge

Do you know what Effective Altruism is?

Yes: 1562 (89.3%)

No but I've heard of it: 114 (6.5%)

No: 74 (4.2%)

EAIdentity

Do you self-identify as an Effective Altruist?

Yes: 665 (39.233%)

No: 1030 (60.767%)

The distribution given by the 2014 survey results does not sum to one, so it's difficult to determine if Effective Altruism's membership actually went up or not but if we take the numbers at face value it experienced an 11.13% increase in membership.

EACommunity

Do you participate in the Effective Altruism community?

Yes: 314 (18.427%)

No: 1390 (81.573%)

Same issue as last; taking the numbers at face value, community participation went up by 5.727%.

EADonations

Has Effective Altruism caused you to make donations you otherwise wouldn't?

Yes: 666 (39.269%)

No: 1030 (60.731%)

Wowza!

Effective Altruist Anxiety

EAAnxiety

Have you ever had any kind of moral anxiety over Effective Altruism?

Yes: 501 (29.6%)

Yes but only because I worry about everything: 184 (10.9%)

No: 1008 (59.5%)


There's an ongoing debate in Effective Altruism about what kind of rhetorical strategy is best for getting people on board and whether Effective Altruism is causing people significant moral anxiety.

It certainly appears to be. But is moral anxiety effective? Let's look:

Sample Size: 244
Average amount of money donated by people anxious about EA who aren't EAs: 257.5409836065574

Sample Size: 679
Average amount of money donated by people who aren't anxious about EA who aren't EAs: 479.7501384388807

Sample Size: 249
Average amount of money donated by EAs anxious about EA: 1841.5292369477913

Sample Size: 314
Average amount of money donated by EAs not anxious about EA: 1837.8248407643312

It seems fairly conclusive that anxiety is not a good way to get people to donate more than they already are, but is it a good way to get people to become Effective Altruists?

Sample Size: 1685
P(Effective Altruist): 0.3940652818991098
P(EA Anxiety): 0.29554896142433235
P(Effective Altruist | EA Anxiety): 0.5
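Those three probabilities are consistent with the joint counts below (back-derived from the ratios shown, so treat them as inferred rather than quoted), which makes the conditional easy to check and gives the lift from conditioning:

```python
n_total = 1685    # sample size quoted above
n_ea = 664        # back-derived: 664/1685 ~= 0.39407
n_anxious = 498   # back-derived: 498/1685 ~= 0.29555
n_both = 249      # EAs who also report EA anxiety

p_ea = n_ea / n_total
p_anxiety = n_anxious / n_total
p_ea_given_anxiety = n_both / n_anxious  # = 0.5, matching the figure above

# How much more likely an anxious respondent is to identify as an EA:
lift = p_ea_given_anxiety / p_ea
```

The lift comes out around 1.27, i.e. anxious respondents are roughly 27% more likely to identify as Effective Altruists, which is why the answer above is "maybe" rather than "no".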

Maybe. There is of course an argument to be made that sufficient good done by causing people anxiety outweighs feeding into people's scrupulosity, but it can be discussed after I get through explaining it on the phone to wealthy PR-conscious donors and telling the local all-kill shelter where I want my shipment of dead kittens.

EAOpinion

What's your overall opinion of Effective Altruism?

Positive: 809 (47.6%)

Mostly Positive: 535 (31.5%)

No strong opinion: 258 (15.2%)

Mostly Negative: 75 (4.4%)

Negative: 24 (1.4%)

EA appears to be doing a pretty good job of getting people to like them.

Interesting Tables

Charity Donations By Political Affiliation
Affiliation Income Charity Contributions % Income Donated To Charity Total Survey Charity % Sample Size
Anarchist 1677900.0 72386.0 4.314% 3.004% 50
Communist 298700.0 19190.0 6.425% 0.796% 13
Conservative 1963000.04 62945.04 3.207% 2.612% 38
Futarchist 1497494.1099999999 166254.0 11.102% 6.899% 31
Left-Libertarian 9681635.613839999 416084.0 4.298% 17.266% 245
Libertarian 11698523.0 214101.0 1.83% 8.885% 190
Moderate 3225475.0 90518.0 2.806% 3.756% 67
Neoreactionary 1383976.0 30890.0 2.232% 1.282% 28
Objectivist 399000.0 1310.0 0.328% 0.054% 10
Other 3150618.0 85272.0 2.707% 3.539% 132
Pragmatist 5087007.609999999 266836.0 5.245% 11.073% 131
Progressive 8455500.440000001 368742.78 4.361% 15.302% 217
Social Democrat 8000266.54 218052.5 2.726% 9.049% 237
Socialist 2621693.66 78484.0 2.994% 3.257% 126
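The '% Income Donated To Charity' column is just each affiliation's charity total over its income total; a quick spot-check against two rows of the table (the helper function is mine, the numbers are from the table):

```python
def pct_income_donated(charity_total, income_total):
    """Affiliation-level charity contributions as a percentage of
    affiliation-level income, rounded as in the table."""
    return round(100 * charity_total / income_total, 3)

anarchist = pct_income_donated(72386.0, 1677900.0)   # table says 4.314
objectivist = pct_income_donated(1310.0, 399000.0)   # table says 0.328
```

Both values reproduce the table, confirming how the column was computed.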


Number Of Effective Altruists In The Diaspora Communities
Community Count % In Community Sample Size
LessWrong 136 38.418% 354
LessWrong Meetups 109 50.463% 216
LessWrong Facebook Group 83 48.256% 172
LessWrong Slack 22 39.286% 56
SlateStarCodex 343 40.98% 837
Rationalist Tumblr 175 49.716% 352
Rationalist Facebook 89 58.94% 151
Rationalist Twitter 24 40.0% 60
Effective Altruism Hub 86 86.869% 99
Good Judgement(TM) Open 23 74.194% 31
PredictionBook 31 51.667% 60
Hacker News 91 35.968% 253
#lesswrong on freenode 19 24.675% 77
#slatestarcodex on freenode 9 24.324% 37
#chapelperilous on freenode 2 18.182% 11
/r/rational 117 42.545% 275
/r/HPMOR 110 47.414% 232
/r/SlateStarCodex 93 37.959% 245
One or more private 'rationalist' groups 91 47.15% 193


Effective Altruist Donations By Political Affiliation
Affiliation EA Income EA Charity Sample Size
Anarchist 761000.0 57500.0 18
Futarchist 559850.0 114830.0 15
Left-Libertarian 5332856.0 361975.0 112
Libertarian 2725390.0 114732.0 53
Moderate 583247.0 56495.0 22
Other 1428978.0 69950.0 49
Pragmatist 1442211.0 43780.0 43
Progressive 4004097.0 304337.78 107
Social Democrat 3423487.45 149199.0 93
Socialist 678360.0 34751.0 41

Why we may elect our new AI overlords

2 Deku-shrub 04 September 2016 01:07AM

In which I examine some of the latest developments in automated fact checking and prediction markets for policies, and propose we get rich voting for robot politicians.

http://pirate.london/2016/09/why-we-may-elect-our-new-ai-overlords/

Paid research assistant position focusing on artificial intelligence and existential risk

7 crmflynn 02 May 2016 06:27PM

Yale Assistant Professor of Political Science Allan Dafoe is seeking Research Assistants for a project on the political dimensions of the existential risks posed by advanced artificial intelligence. The project will involve exploring issues related to grand strategy and international politics, reviewing possibilities for social scientific research in this area, and institution building. Familiarity with international relations, existential risk, Effective Altruism, and/or artificial intelligence are a plus but not necessary. The project is done in collaboration with the Future of Humanity Institute, located in the Faculty of Philosophy at the University of Oxford. There are additional career opportunities in this area, including in the coming academic year and in the future at Yale, Oxford, and elsewhere. If interested in the position, please email allan.dafoe@yale.edu with a copy of your CV, a writing sample, an unofficial copy of your transcript, and a short (200-500 word) statement of interest. Work can be done remotely, though being located in New Haven, CT or Oxford, UK is a plus.

[Link] Salon piece analyzing Donald Trump's appeal using rationality

-13 Gleb_Tsipursky 24 April 2016 04:36AM

I'm curious about your thoughts on my piece in Salon analyzing Trump's emotional appeal using rationality-informed ideas. My primary aim is using the Trump hook to get readers to consider the broader role of Systems 1 and 2 in politics, the backfire effect, wishful thinking, emotional intelligence, etc.

Suppose HBD is True

-12 OrphanWilde 21 April 2016 01:34PM

Suppose, for the purposes of argument, that HBD (human biodiversity: the claim that distinct populations of humans - I will avoid using the word "race" here insofar as possible - exist and have substantial genetic variance which accounts for some of the difference in average intelligence from population to population) is true, and that all its proponents are correct in accusing the politicization of science of burying this information.

I seek to ask the more interesting question: Would it matter?

1. Societal Ramifications of HBD: Eugenics

So, we now have some kind of nice, tidy explanation for different characters among different groups of people.  Okay.  We have a theory.  It has explanatory power.  What can we do with it?

Unless you're willing to commit to eugenics of some kind (be it restricting reproduction or genetic alteration), not much of anything.  And even if you're willing to commit to eugenics, HBD doesn't add anything.  HBD doesn't actually change any of the arguments for eugenics - below-average people exist in every population group, and insofar as we regard below-average people as a problem, the genetic population they happen to belong to doesn't matter.  If the point is to raise the average, the population group doesn't matter.  If the point is to reduce the number of socially dependent individuals, the population group doesn't matter.

Worse, insofar as we use HBD as a determinant in eugenics, our eugenics are less effective.  HBD says your population group has a relationship with intelligence; but if we're interested in intelligence, we have no reason to look at your population group, because we can measure intelligence more directly.  There's no reason to use the proxy of population group if we're interested in intelligence, and indeed, every reason not to; it's significantly less accurate and politically and historically problematic.

Yet still worse for our eugenics advocate, insomuch as population groups do have significant genetic diversity, using population groups instead of direct measurements of intelligence is far more likely to cause disease transmission risks.  (Genetic diversity is very important for population-level disease resistance.  Just look at bananas.)

2. Social Ramifications of HBD: Social Assistance

Let's suppose we're not interested in eugenics.  Let's suppose we're interested in maximizing our societal outcomes.

Well, again, HBD doesn't offer us anything new.  We can already test intelligence, and insofar as HBD is accurate, intelligence tests are more accurate.  So if we aim to streamline society, we don't need HBD to do so.  HBD might offer an argument against affirmative action, in that we have different base expectations for different populations, but affirmative action already takes different base expectations into account (if you live in a city of 50% black people and 50% white people, but 10% of local lawyers are black, your local law firm isn't required to have 50% black lawyers, but 10%).  We might desire to adjust the way we engage in affirmative action, insofar as affirmative action might not lead to the best results, but if you're interested in the best results, you can argue on the basis of best results without needing HBD.

I have yet to encounter someone who argues HBD who also argues we should do something with regard to HELPING PEOPLE on the basis of this, but that might actually be a more significant argument: If there are populations of people who are going to fall behind, that might be a good argument to provide additional resources to these populations of people, particularly if there are geographic correspondences - that is, if HBD is true, and if population groups are geographically segregated, individuals in these population groups will suffer disproportionately relative to their merits, because they don't have the local geographic social capital that equal-advantage people of other population groups would have.  (An average person in a poor region will do worse than an average person in a rich region.)  So HBD provides an argument for desegregation.

Curiously, HBD advocates have a tendency to argue that segregation would lead to the best outcome.  I'd welcome arguments that concentrating an -absence- of social capital is a good idea.

3. Scientific Ramifications of HBD

Well, if HBD were true, it would mean science is politicized.  This might be news to somebody, I guess.

4. Political Ramifications of HBD

We live in a meritocracy.  It's actually not an ideal thing, contrary to the views of some people, because it results in a systematic merit segregation that has completely deprived the lower classes of intellectual resources; talk to older people sometime, who remember, from when they worked in the coal mines (or whatever), the one guy you could trust to be able to answer your questions and provide advice.  Our meritocracy has advanced to the point where we are systematically stripping everybody of value from the lower classes and redistributing them to the middle and upper classes.

HBD might be meaningful here.  Insofar as people take HBD to its absurd extremes, it might actually result in an -improvement- for some lower-class groups, because if we stop taking all the intelligent people out of poor areas, there will still be intelligent people in those poor areas.  But racism as a force of utilitarian good isn't something I care to explore in any great detail, mostly because if I'm wrong it would be a very bad thing, and also because none of its advocates actually suggest anything like this, being more interested in promoting segregation than desegregation.

It doesn't change much else, either.  With HBD we continually run into the same problem - as a theory, it's the product of measuring individual differences, and as a theory, it doesn't add anything to our information that we don't already have with the individual differences.

5. The Big Problem: Individuality

Which is the crucial fault with HBD, iterated multiple times here, in multiple ways: It literally doesn't matter if HBD is true.  All the information it -might- provide us with, we can get with much more accuracy using the same tests we might use to arrive at HBD.  Anything we might want to do with the idea, we can do -better- without it.

HBD might predict we get fewer IQ-115, IQ-130, and IQ-145 people from particular population groups, but it doesn't actually rule them out.  Insofar as this kind of information is useful, it's -more- useful to have more accurate information.  HBD doesn't say "Black people are stupid", instead it says "The average IQ of black people is slightly lower than the average IQ of white people".  But since "black people" isn't a thing that exists, but rather an abstract concept referring to a group of "black persons", and HBD doesn't make any predictions at the individual level we couldn't more accurately obtain through listening to a person speak for five seconds, it doesn't actually make any useful predictions.  It adds literally nothing to our model of the world.

It's not the most important idea of the century.  It's not important at all.

If you think it's true - okay.  What does it -add- to your understanding of the world?  What useful predictions does it make?  How does it permit you to improve society?  I've heard people insist it's this majorly important idea that the scientific and political establishment is suppressing.  I'd like to introduce you to the aether, another idea that had explanatory power but made no useful predictions, and which was abandoned - not because anybody thought it was wrong, but because it didn't even rise to the level of wrong, because it was useless.

And that's what HBD is.  A useless idea.

And even worse, it's a useless idea that's hopelessly politicized.

[Link] Op-Ed on Brussels Attacks

-6 Gleb_Tsipursky 02 April 2016 05:38PM

Trigger warning: politics is hard mode.


"How to you make America safer from terrorists" is the title of my op-ed published in Sun Sentinel, a very prominent newspaper in Florida, one of the most swingiest of the swing states in the US for the presidential election, and the one with the most votes. The maximum length of the op-ed was 450 words, and it was significantly edited by the editor, so it doesn't convey the full message I wanted with all the nuances, but such is life. My primary goal with the piece was to convey methods of thinking more rationally about politics, such as to use probabilistic thinking, evaluating the full consequences of our actions, and avoiding attention bias. I used the example of the proposal to police heavily Muslim neighborhoods as a case study. Hope this helps Floridians think more rationally and raises the sanity waterline regarding politics!

EDIT: To be totally clear, I used guesstimates for the numbers I suggested. Following Yvain/Scott Alexander's advice, I prefer to use guesstimates rather than vague statements.

Is altruistic deception really necessary? Social activism and the free market

3 PhilGoetz 26 February 2016 06:38AM

I've said before that social reform often seems to require lying.  Only one-sided narratives offering simple solutions motivate humans to act, so reformers manufacture one-sided narratives such as we find in Marxism or radical feminism, which inspire action through indignation.  Suppose you tell someone, "Here's an important problem, but it's difficult and complicated.  If we do X and Y, then after five years, I think we'd have a 40% chance of causing a 15% reduction in symptoms."  They'd probably think they had something better to do.

But the examples I used in that previous post were all arguably bad social reforms: Christianity, Russian communism, and Cuban communism.

The argument that people need to be deceived into social reform assumes either that they're stupid, or that there's some game-theoretic reason why social reform that's very worthwhile to society as a whole isn't worthwhile to any individual in society.

Is that true?  Or are people correct and justified in not making sudden changes until there's a clear problem and a clear solution to it?


The Art of Lawfare and Litigation strategy

-4 [deleted] 17 December 2015 02:34PM

Bertrand Russell, well aware of the health risks of smoking, defended his addiction in a videotaped interview. See if you can spot his fallacy!

Today on SBS (a radio channel in Australia) I heard reporters breaking the news that a Nature article reports that cancer is largely due to choices. I was shocked by what appeared to be a gross violation of cultural norms against blaming victims. I wanted to investigate further, since science reporting is notoriously inaccurate.

The BBC reports:

Earlier this year, researchers sparked a debate after suggesting two-thirds of cancer types were down to luck rather than factors such as smoking.

The new study, in the journal Nature, used four approaches to conclude only 10-30% of cancers were down to the way the body naturally functions or "luck".

"They can't smoke and say it's bad luck if they have cancer."

-Dr Yusuf Hannun, the director of Stony Brook

-http://www.bbc.com/news/health-35111449

The BBC article is roughly concordant with the SBS report. 

I've had a fairly simple relationship with cigarettes. I've smoked others' cigarettes a few times, while drinking. I bought my first cigarette to try soon after I came of age and discarded the rest of the packet. One of my favourite memories is trying a vanilla-flavoured cigar. I still feel tempted to try it again whenever I smell a nice scent, or think about that moment. Though now, I regularly reject offers to go to local venues and smoke hookah. Even after my first cigarette, I felt the tug of nicotine and tobacco. I'm unusually sensitive to even the mildest addictive substances, though, so that doesn't surprise me in retrospect. What does surprise me is that society is starting to take a ubiquitous but increasingly undeniable health issue seriously, despite deep entanglement with long-standing ways of doing things, political ideologies, individual addictions and addiction-driven political behaviour, and shareholders' pockets.

Though the truth claim of the article isn't that surprising. The dangers of smoking are publicised everywhere. Emphasis mine:

13 die every day in Victoria as a result of smoking.

Tobacco use (which includes cigarettes, cigars, pipes, snuff, chewing tobacco) is the leading preventable cause of death and illness in our country. It causes more deaths annually than those killed by AIDS, alcohol, automobile accidents, murders, suicides, drugs and fires combined.

So I decided to learn more about the relationship between society and Big Tobacco, and government and Big Tobacco, to see what other people interested in influencing public policy and public health can learn (effective altruism policy analytics, take note!) about policy tractability in surprising places.

Here's what might make for tractable public policy for public health interventions

Proof of concept

Governments are great at successfully suing the shit out of tobacco. And, big tobacco takes it like a champ:

It started with individual US states experimenting with suing Big Tobacco; eventually only a couple of states hadn't done it. Big Tobacco and all those attorneys general gathered and arranged a huge settlement that resulted in the disestablishment of several shill research institutes supporting Big Tobacco, and big payouts to sponsor anti-smoking advocacy groups (which seems politically unethical, but consequentially good; I suppose that's a different story). What's important to note here is the experimentation within US states culminating in the legitimacy of normative lawfare. It's called 'diffusion theory' and is described here.

Wait wait wait. I know what you're thinking, non-US LessWrongers - another US-centric analysis that isn't too transportable. No. I'm not American in any sense; it's just that the US seems to be a point of diffusion. What's happening with marijuana in the US now seems to mirror this in some sense, though it's ironically pro-smoking. That illustrates the cause-neutrality of this phenomenon.

That settlement wasn't the end of the lawfare:

On August 17, 2006, a U.S. district judge issued a landmark opinion in the government's case against Big Tobacco, finding that tobacco companies had violated civil racketeering laws and defrauded consumers by lying about the health risks of smoking.

In a 1,653 page ruling, the judge stated that the tobacco industry had deceived the American public by concealing the addictive nature of nicotine plus had targeted youth in order to get them hooked on cigarettes for life. (Appeals are still pending). 

Victims who ask for help

I also stumbled upon some smokers' attitudes to smoking and their, well, seemingly vexed attitudes to Big Tobacco when looking up lawsuits. Here's a copy of the comments section on one website. It's really heartbreaking. It's a small sample size, but note their education too - suggesting a socio-economic effect. Note that these comments were posted publicly and are blatant cries for help. This suggests political will at a grassroots level that is as yet under-catered-for by services and/or political action. That's a powerful thing, perhaps - visible need in public forums, addressed to those in the relevant space. Note that they commented on a class-action website.

http://s10.postimg.org/61h7b1rp5/099090.png

 

Note some of the language:

 

"I feel like I'm being tortured"

You don't see that kind of language used in any effective altruism branded publications.

Villains

Somewhat famous documents exposing the tobacco industry's internal motivations and dodginess seem to be quoted everywhere on websites documenting and justifying lawfare against the tobacco industry. Public health and the personal dangers of smoking don't seem to have been the big catalyst, but rather a villainous enemy. I'm reminded of how the Stop the Boats campaign villainised people smugglers instead of speaking of the potential to save the lives of refugees who fall overboard from shoddy vessels. I think of the Open Borders campaigners associated with GiveWell's Open Philanthropy Project: the project is perceived as just about the most intractable policy prospect around (I'd say a moratorium on AI research is up there), but at the same time there is no identified villain in the picture. That's not entirely surprising. I recall the hate I received when I suggested that people should consider prostituting themselves for effective altruism, or soliciting donations from the porn industry, whose donors struggle to donate since many charities, particularly religious ones, refuse to accept their money. Likewise, it's hard to get rid of encultured perceptions of what's good and what's bad, rather than enumerating (or 'checking', as Eliezer writes in the Sequences) the consequences.

Relative merit

This is something effective altruists are already doing.

William Savedoff and Albert Alwang recently identified taxes on tobacco as, “the single most cost-effective way to save lives in developing countries” (2015, p.1).

...

Tobacco control programs often pursue many of these aims at once. However, raising taxes appears to be particularly cost-effective — e.g., raising taxes costs $3 - $70 per DALY avoided(Savedoff and Alwang, p.5; Ranson et al. 2002, p.311) — so I will focus solely on taxes. I will also focus only on low and middle income countries (LMICs) because that is where the problem is worst and where taxes can do the most good most cost-effectively.

..

But current trends need not continue. We can prevent deaths from tobacco use. Tobacco taxation is a well-tested and effective means of decreasing the prevalence of smoking—it gets people to stop and prevents others from starting. The reason is that smokers are responsive to price increases, provided that the real price goes up enough

...

Even if these numbers are off by a factor of 2 or 3, tobacco taxation appears to be on par with the most effective interventions identified by GiveWell and Giving What We Can. For example, GiveWell estimates that AMF can prevent a death for $3340 by providing bed nets to prevent malaria and estimates the cost of schistosomiasis deworming at $29 - $71 per DALY.

 

There are a few reasons to balk at recommending tobacco tax advocacy to those aiming to do the most good with their donations, time, and careers.

 

  • Tobacco taxes may not be a tractable issue
  • Tobacco taxes may be a “crowded” cause area
  • Unanswered questions about the empirical basis of cost-effectiveness estimates
  • There may not be a charity to donate to
...
Smoking is very harmful and very common.  Globally, 21% of people over 15 smoke (WHO GHO)

 

-https://www.givingwhatwecan.org/post/2015/09/tobacco-control-best-buy-developing-world/
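To make the quoted comparison concrete, here is a rough sanity check on those figures using simple midpoint arithmetic only (my own back-of-the-envelope sketch, with no uncertainty modelling; all numbers come from the quoted GWWC/GiveWell passages):

```python
# Cost-effectiveness ranges quoted above, in $ per DALY averted.
tobacco_tax = (3, 70)   # Savedoff & Alwang; Ranson et al.
deworming = (29, 71)    # GiveWell estimate for schistosomiasis deworming

def midpoint(lo_hi):
    """Midpoint of a (low, high) cost range."""
    lo, hi = lo_hi
    return (lo + hi) / 2

print(midpoint(tobacco_tax))  # 36.5
print(midpoint(deworming))    # 50.0

# Even inflating the tobacco-tax estimate by the quoted factor of 3,
# its midpoint stays within the same order of magnitude as deworming:
print(midpoint(tobacco_tax) * 3)  # 109.5
```

Which is just the quoted claim restated: even if the numbers are off by a factor of 2 or 3, tobacco taxation lands in the same cost-effectiveness ballpark as GiveWell's top interventions.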

 

Attributing public responsibility AND incentivising independently private interest in a cause


The Single Best Health Policy in the World: Tobacco Taxes

The single most cost-effective way to save lives in developing countries is in the hands of developing countries themselves: raising tobacco taxes. In fact, raising tobacco taxes is better than cost-effective. It saves lives while increasing revenues and saving poor households money when their members quit smoking.

-http://www.cgdev.org/publication/single-best-health-policy-world-tobacco-taxes)

 

Tobacco lawsuits can be hard to win but if you have been injured because of tobacco or smoking or secondary smoke exposure, you should contact an attorney as soon as possible.

  If you have lung cancer and are now, or were formerly, a smoker or used tobacco products, you may have a claim under the product liability laws. You should contact an experienced product liability attorney or a tobacco lawsuit attorney as soon as possible because a statute of limitations could apply. 

-http://smoking-tobacco.whocanisue.com/

There's a whole bunch of legal literature like this: http://heinonline.org/HOL/LandingPage?handle=hein.journals/clqv86&div=45&id=&page=

that I don't have the background to search for and interpret. So, if I'm missing important things, perhaps it's attributable to that. Point them out please.

So that's my analysis: plausible modifiable variables that influence the tractability of the public health policy initiative: 

(1) Attributing public responsibility AND incentivising independently private interest in a cause

(2) Relative merit

(3) Villains

(4) Victims that ask for help

(5) Low scale proof of concept

Remember, lawfare isn't just the domain of governments; here's an example of non-government lawfare for public health. Governments are just often better resourced than individuals, who need groups to advocate on their behalf. Perhaps that's a direction the Open Philanthropy Project could take.

I want to finish by soliciting an answer on the following question that is posed to smokers in a recurring survey by a tobacco control body:

Do you support or oppose the government suing tobacco companies to recover health care costs caused by tobacco use?

Now, there may be some 'reverse causation' at play here to explain why tobacco control has been so politically effective: BECAUSE it's such a good cause, it's a low-hanging fruit that's already being picked.

What's the case for or against this?


The case for its cause selection: Tobacco control


Importance: high


tobacco is the leading preventable cause of death and disease in both the world (see: http://www.who.int/nmh/publications/fact_sheet_tobacco_en.pdf) and Australia (see: http://www.cancer.org.au/policy-and-advocacy/position-statements/smoking-and-tobacco-control/)


‘Tobacco smoking causes 20% of cancer deaths in Australia, making it the highest individual cancer risk factor. Smoking is a known cause of 16 different cancer types and is the main cause of Australia’s deadliest cancer, lung cancer. Smoking is responsible for 88% of lung cancer deaths in men and 75% of lung cancer cases in women in Australia.’


Tractable: high


The World Health Organization's Framework Convention on Tobacco Control (FCTC) was the first public health treaty ever negotiated.


Based on private information, the balance of healthcare costs against tax revenues according to health advocates, compared to treasury estimates in Australia, may have been relevant to Australia's leadership in tobacco regulation. That submission may or may not be adequate in complexity (i.e. taking into account reduced lifespans' impact on reduced pension payouts, for instance). There is a good article about the behavioural economics of tobacco regulation here (http://baselinescenario.com/2011/03/22/incentives-dont-work/)



Room for advocacy: low


There are many hundreds of consumer support and advocacy groups, and cancer charities across Australia.


Room for employment: low?


Room for consulting: high

 

The rigour of the analysis, and the achievements themselves, in the Cancer Council of Australia's annual review is underwhelming, as is the Cancer Council of Victoria's annual report. There is a better-organised body of evidence relating to their impact on their wiki pages about effective interventions and policy priorities. At a glance, there appears to be room for more quantitative, methodologically rigorous and independent evaluation. I will be looking at GiveWell to see what recommendations can be translated, and will keep records of my findings to formulate draft guidelines for advising organisations in the Cancer Councils' position, which I estimate (by vague memory of GiveWell's claims) are in the majority of the philanthropic space.

[Link] A rational response to the Paris attacks and ISIS

-1 Gleb_Tsipursky 23 November 2015 01:47AM

Here's my op-ed that uses long-term orientation, probabilistic thinking, numeracy, consider the alternative, reaching our actual goals, avoiding intuitive emotional reactions and attention bias, and other rationality techniques to suggest more rational responses to the Paris attacks and the ISIS threat. It's published in the Sunday edition of The Plain Dealer​, a major newspaper (16th in the US). This is part of my broader project, Intentional Insights, of conveying rational thinking, including about politics, to a broad audience to raise the sanity waterline.

Political Debiasing and the Political Bias Test

8 Stefan_Schubert 11 September 2015 07:04PM

Cross-posted from the EA forum. I asked for questions for this test here on LW about a year ago. Thanks to those who contributed.

Rationally, your political values shouldn't affect your factual beliefs. Nevertheless, that often happens. Many factual issues are politically controversial - typically because the true answer makes a certain political course of action more plausible - and on those issues, many partisans tend to disregard politically uncomfortable evidence.

This sort of political bias has been demonstrated in a large number of psychological studies. For instance, Yale professor Dan Kahan and his collaborators showed in a fascinating experiment that on politically controversial questions, people are quite likely to commit mathematical mistakes that help them retain their beliefs, but much less likely to commit mistakes that would force them to give up those beliefs. Examples like this abound in the literature.

Political bias is likely to be a major cause of misguided policies in democracies (even the main one, according to economist Bryan Caplan). If they don't have any special reason not to, people without special knowledge defer to the scientific consensus on technical issues. Thus they do not interfere with the experts, who normally get things right. On politically controversial issues, however, they often let their political bias win over science and evidence, which means they'll end up with false beliefs. And in a democracy, voters holding systematically false beliefs more often than not translates into misguided policy.

Can we reduce this kind of political bias? I’m fairly hopeful. One reason for optimism is that debiasing generally seems to be possible to at least some extent. This optimism of mine was strengthened by participating in a CFAR workshop last year. Political bias seems not to be fundamentally different from other kinds of biases and should thus be reducible too. But obviously one could argue against this view of mine. I’m happy to discuss this issue further.

Another reason for optimism is that it seems that the level of political bias is actually lower today than it was historically. People are better at judging politically controversial issues in a detached, scientific way today than they were in, say, the 14th century. This shows that progress is possible. There seems to be no reason to believe it couldn’t continue.

A third reason for optimism is that there seems to be a strong norm against political bias. Few people are consciously and intentionally politically biased. Instead, most people seem to believe themselves to be politically rational, and hold that as a very important value (or so I believe). They fail to see their own biases due to the bias blind spot, which prevents us from seeing our own biases.

Thus if you could somehow make it salient to people that they are biased, they would actually want to change. And if others saw how biased they are, the incentives to debias would be even stronger.

There are many ways in which you could make political bias salient. For instance, you could meticulously go through political debaters’ arguments and point out fallacies, like I have done on my blog. I will post more about that later. Here I want to focus on another method, however, namely a political bias test which I have constructed with ClearerThinking, run by EA-member Spencer Greenberg. Since learning how the test works might make you answer a bit differently, I will not explain how the test works here, but instead refer either to the explanatory sections of the test, or to Jess Whittlestone’s (also an EA member) Vox.com-article.

Our hope is of course that people taking the test might start thinking more both about their own biases, and about the problem of political bias in general. We want this important topic to be discussed more. Our test is produced for the American market, but hopefully, it could work as a generic template for bias tests in other countries (akin to the Political Compass or Voting Advice Applications).

Here is a guide for making new bias tests (where the main criticisms of our test are also discussed). Also, we hope that the test could inspire academic psychologists and political scientists to construct full-blown scientific political bias tests.

This does not mean, however, that we think that such bias tests in themselves will get rid of the problem of political bias. We need to attack the problem of political bias from many other angles as well.

Pro-Con-lists of arguments and onesidedness points

3 Stefan_Schubert 21 August 2015 02:15PM

Follow-up to Reverse Engineering of Belief Structures

Pro-con-lists of arguments such as ProCon.org and BalancedPolitics.org fill a useful purpose. They give an overview over complex debates, and arguably foster nuance. My network for evidence-based policy is currently in the process of constructing a similar site in Swedish.

 

I'm thinking it might be interesting to add more features to such a site. You could let people create a profile on the site. Then you would let them fill in whether they agree or disagree with the theses under discussion (cannabis legalization, GM food legalization, etc.), and also whether they agree or disagree with the different arguments for and against these theses (alternatively, you could let them rate the arguments from 1-5).

Once you have this data, you could use them to give people different kinds of statistics. The most straightforward statistic would be their degree of "onesidedness". If you think that all of the arguments for the theses you believe in are good, and all the arguments against them are bad, then you're defined as onesided. If you, on the other hand, believe that some of your own side's arguments are bad, whereas some of the opponents' arguments are good, you're defined as not being onesided. (The exact mathematical function you would choose could be discussed.)
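The exact function is deliberately left open above. As one minimal sketch of what such a score could look like (my own toy scoring scheme, not anything from the proposed site): map each 1-5 rating to a "goodness" in [-1, 1] and average how strongly ratings align with the user's side.

```python
def onesidedness(stance, ratings):
    """Toy one-sidedness score in [-1, 1].

    stance:  +1 if the user agrees with the thesis, -1 if they disagree.
    ratings: list of (direction, score) pairs, where direction is +1 for
             a pro argument, -1 for a con argument, and score is the
             user's 1-5 rating of how good the argument is.

    1.0 means maximally one-sided (own side's arguments all rated 5,
    the other side's all rated 1); 0.0 means balanced judgments.
    """
    if not ratings:
        return 0.0
    total = 0.0
    for direction, score in ratings:
        goodness = (score - 3) / 2.0  # map 1-5 rating onto [-1, 1]
        total += stance * direction * goodness
    return total / len(ratings)

# A maximally one-sided supporter: loves pro arguments, dismisses con ones.
partisan = [(+1, 5), (+1, 5), (-1, 1), (-1, 1)]
# A balanced reader: mixed verdicts on both sides.
balanced = [(+1, 4), (+1, 2), (-1, 4), (-1, 2)]

print(onesidedness(+1, partisan))  # 1.0
print(onesidedness(+1, balanced))  # 0.0
```

Any monotone variant would do; the point is just that agreement-weighted averaging turns raw ratings into a single comparable number per user.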

Once you've told people how one-sided they are, according to the test, you would discuss what might explain onesidedness. My hunch is that the most plausible explanation normally is different kinds of bias. Instead of reviewing new arguments impartially, people treat arguments for their views more leniently than arguments against their views. Hence they end up being onesided, according to the test.

There are other possible explanations, though. One is that all of the arguments against the thesis in question actually are bad. That might happen occasionally, but I don't think it's very common. As Eliezer Yudkowsky says in "Policy Debates Should Not Appear One-Sided":

On questions of simple fact (for example, whether Earthly life arose by natural selection) there's a legitimate expectation that the argument should be a one-sided battle; the facts themselves are either one way or another, and the so-called "balance of evidence" should reflect this.  Indeed, under the Bayesian definition of evidence, "strong evidence" is just that sort of evidence which we only expect to find on one side of an argument.

But there is no reason for complex actions with many consequences to exhibit this onesidedness property.  

Instead, the reason why people end up with one-sided beliefs is bias, Yudkowsky argues:

Why do people seem to want their policy debates to be one-sided?

Politics is the mind-killer.  Arguments are soldiers.  Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it's like stabbing your soldiers in the back.  If you abide within that pattern, policy debates will also appear one-sided to you—the costs and drawbacks of your favored policy are enemy soldiers, to be attacked by any means necessary.

Especially if you're consistently one-sided in lots of different debates, it's hard to see that any other hypothesis besides bias is plausible. It depends a bit on what kinds of arguments you include in the list, though. In our lists we haven't really checked the quality of the arguments (our purpose is to summarize the debate, rather than to judge it), but you could also do that, of course.

My hope is that such a test would make people more aware both of their own biases, and of the problem of political bias in general. I'm thinking that is the first step towards debiasing. I've also constructed a political bias test with similar methods and purposes together with ClearerThinking, which should be released soon.

 

You could also add other features to a pro-con-list. For instance, you could classify arguments in different ways: ad hominem-arguments, consequentialist arguments, rights-based arguments, etc. (Some arguments might be hard to classify, and then you just wouldn't do that. You wouldn't necessarily have to classify every argument.) Using this info, you could give people a profile: e.g., what kinds of arguments do they find most persuasive? That could make them reflect more on what kinds of arguments really are valid.

You could also combine these two features. For instance, some people might accept ad hominem-arguments when they support their views, but not when they contradict them. That would make your use of ad hominem-arguments onesided.

 

Yet another feature that could be added is a standard political compass. Since people fill in which theses they believe in (cannabis legalization, GM food legalization, etc.), you could calculate which party is closest to them, based on the parties' stances on these issues. That could potentially make the test more attractive to take.
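That matching step is straightforward; as a hedged sketch (the party names and stances below are invented for illustration, not real positions), pick the party agreeing with the user on the largest fraction of shared theses:

```python
# The user's yes/no stances on the site's theses.
user = {"cannabis legalization": True,
        "GM food legalization": True,
        "carbon tax": False}

# Hypothetical party positions on the same theses.
parties = {
    "Party A": {"cannabis legalization": True,  "GM food legalization": False,
                "carbon tax": False},
    "Party B": {"cannabis legalization": False, "GM food legalization": True,
                "carbon tax": True},
}

def agreement(user_stances, party_stances):
    """Fraction of shared theses on which user and party agree."""
    shared = [t for t in user_stances if t in party_stances]
    if not shared:
        return 0.0
    return sum(user_stances[t] == party_stances[t] for t in shared) / len(shared)

closest = max(parties, key=lambda p: agreement(user, parties[p]))
print(closest)  # Party A (agrees on 2 of 3 theses, vs. 1 of 3 for Party B)
```

Real Voting Advice Applications typically weight issues by user-stated importance, but simple agreement counting already gives a usable compass.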

 

Suggestions of more possible features are welcome, as well as general comments - especially about implementation.
